• Title/Summary/Keyword: Transformation Data

Search Results: 2,100

Comparative analysis on Darcy-Forchheimer flow of 3-D MHD hybrid nanofluid (MoS2-Fe3O4/H2O) incorporating melting heat and mass transfer over a rotating disk with Dufour and Soret effects

  • A.M. Abd-Alla;Esraa N. Thabet;S.M.M. El-Kabeir;H.A. Hosham;Shimaa E. Waheed
    • Advances in nano research
    • /
    • v.16 no.4
    • /
    • pp.325-340
    • /
    • 2024
  • Dispersing nanoparticles into a conventional fluid has several novel applications, including dynamic sealing, damping, heat dissipation, and microfluidics. This study therefore numerically assesses the melting heat and mass transfer characteristics of a 3-D MHD hybrid nanofluid flow over a rotating disk in the presence of Dufour and Soret effects. Magnetite (Fe3O4) and molybdenum disulfide (MoS2) nanoparticles suspended in water as the base fluid were investigated. The governing partial differential equations are transformed into coupled higher-order nonlinear ordinary differential equations by a local similarity transformation, and the resulting system is solved using a Chebyshev spectral collocation algorithm implemented in Mathematica. Graphical and numerical results show how the temperature, velocity, and nanoparticle concentration distributions respond in the nanofluid and hybrid-nanofluid cases, and computational findings are reported for a wide range of material parameter values. Simulations across the physical parameters of the model show that adding hybrid nanoparticles to the fluid mixture increases heat transfer compared with simple nanofluids, indicating that hybrid nanoparticles, rather than single-type nanoparticles, should be considered when designing an effective thermal system. Porosity lowers the velocities of both the simple and hybrid nanofluids. The results also show that the drag force from skin friction causes the nanofluid to travel more slowly than the hybrid nanofluid, and that suction, the magnetic and porosity parameters, and the nanoparticle loading all raise the skin friction coefficient. The outcomes for the different flow scenarios agree closely with findings in the published literature, and the bar-chart depictions change with the flow rates. The results also support clinical interest in prescribing hybrid-nanoparticle and single-nanoparticle contents for patients with achalasia, esophageal stricture, and tumors. Finally, the results can be applied to the energy generated by the melting disc surface, which has a variety of industrial uses, including the preparation of semiconductor materials, the solidification of magma, the melting of permafrost, and the refreezing of frozen ground.
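To make the numerical approach concrete, here is a minimal Python sketch of Chebyshev spectral collocation with Newton iteration, applied to a toy nonlinear boundary-value problem (u'' = u^3 on [-1, 1]) rather than the paper's coupled rotating-disk system; the differentiation matrix follows Trefethen's standard construction, and the grid size and tolerance are illustrative choices, not the authors' settings.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and grid (Trefethen's construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)           # Chebyshev-Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # negative-sum trick for diagonal
    return D, x

# Model nonlinear BVP: u'' = u^3 on [-1, 1], u(-1) = 0, u(1) = 1.
# The rotating-disk equations form a coupled higher-order system, but the
# collocation-plus-Newton machinery is the same in spirit.
N = 32
D, x = cheb(N)
D2 = D @ D
u = 0.5 * (x + 1.0)                                    # linear initial guess satisfying the BCs

for _ in range(50):                                    # Newton iteration
    F = D2 @ u - u ** 3
    J = D2 - np.diag(3.0 * u ** 2)
    # Impose Dirichlet boundary conditions by replacing the boundary rows
    # (x[0] = 1 and x[-1] = -1 on this grid).
    F[0], F[-1] = u[0] - 1.0, u[-1] - 0.0
    J[0, :], J[-1, :] = 0.0, 0.0
    J[0, 0], J[-1, -1] = 1.0, 1.0
    du = np.linalg.solve(J, -F)
    u += du
    if np.max(np.abs(du)) < 1e-12:
        break

print("interior residual:", np.max(np.abs((D2 @ u - u ** 3)[1:-1])))
```

The same machinery extends to coupled systems by stacking the unknowns and their derivative relations into one block Newton system.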

Defining Competency for Developing Digital Technology Curriculum (디지털 신기술 교육과정 개발을 위한 역량 정의)

  • Ho Lee;Juhyeon Lee;Junho Bae;Woosik Shin;Hee-Woong Kim
    • Knowledge Management Research
    • /
    • v.25 no.1
    • /
    • pp.135-154
    • /
    • 2024
  • As the digital transformation accelerates, industry demand for professionals with competencies in digital technologies such as artificial intelligence and big data is increasing. In response, the government is developing various educational programs to nurture talent in these emerging technology fields. However, the lack of a clear definition of competency, the foundation of curriculum development and operation, has made it difficult to design digital technology education programs effectively. This study systematically reviews the definitions and characteristics of competency presented in prior research through a literature review. In-depth interviews were then conducted with 30 experts in emerging technology fields to derive a definition of competency suitable for technology education programs. This research defines competency for the development of technology education programs as 'a set of one or more knowledge and skills required to perform effectively at the expected level of a given task.' The study also identifies the elements of competency, namely knowledge and skills, as well as the principles of competency construction. The definition and characteristics of competency provided in this study can be used to create more systematic and effective educational programs in emerging technology fields and to bridge the gap between education and industry practice.

Enhancing Throughput and Reducing Network Load in Central Bank Digital Currency Systems using Reinforcement Learning (강화학습 기반의 CBDC 처리량 및 네트워크 부하 문제 해결 기술)

  • Yeon Joo Lee;Hobin Jang;Sujung Jo;GyeHyun Jang;Geontae Noh;Ik Rae Jeong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.1
    • /
    • pp.129-141
    • /
    • 2024
  • Amidst the acceleration of digital transformation across various sectors, the financial market is increasingly focusing on the development of digital and electronic payment methods, including currency. Among these, Central Bank Digital Currencies (CBDCs) are emerging as future digital currencies that could replace physical cash: they are stable, not subject to value fluctuation, and exchangeable one-to-one with existing physical currencies. Both domestic and international efforts to research and develop CBDCs are currently underway. However, current CBDC systems face scalability issues such as delays in processing large transaction volumes, slow response times, and network congestion. To build a universal CBDC system, it is crucial to resolve these scalability issues, including the low throughput and network overload inherent in existing blockchain technologies. This study therefore proposes a reinforcement learning-based approach to handling large-scale data in a CBDC environment, aiming to improve throughput and reduce network congestion. The proposed technique increases throughput by more than 64 times and reduces network congestion by over 20% compared to existing systems.
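The abstract does not specify the reinforcement learning algorithm, so the following is only a toy Q-learning sketch of the general idea: an agent picks a transaction batch size to trade throughput against congestion. The environment stub, action space, reward weights, and load model are all invented stand-ins, not the authors' design.

```python
import random
from collections import defaultdict

# Hypothetical action space: transaction batch sizes a node may use.
ACTIONS = [64, 256, 1024, 4096]

def simulate_round(batch_size, load):
    """Toy environment stub returning (throughput, congestion) for one round.
    A real CBDC testbed would measure these from the ledger network."""
    throughput = batch_size / (1.0 + 0.0005 * batch_size * load)
    congestion = 0.001 * batch_size * load
    return throughput, congestion

Q = defaultdict(float)            # Q[(load_level, batch_size)]
alpha, gamma, eps = 0.1, 0.9, 0.1

load = 1
for step in range(10_000):
    state = load
    if random.random() < eps:                     # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    thr, cong = simulate_round(action, load)
    reward = thr - 5.0 * cong                     # throughput minus congestion penalty
    load = random.randint(1, 5)                   # next (random) network load level
    best_next = max(Q[(load, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

for s in range(1, 6):                             # learned batch size per load level
    print(f"load {s}: best batch size {max(ACTIONS, key=lambda a: Q[(s, a)])}")
```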

Analysis of Changes in the Concept of Digital Curation through Definitions in Academic Literature (학술 문헌 내 정의문을 통해 살펴본 디지털 큐레이션 개념 변화 분석)

  • Hyunsoo Kim;Hyo-Jung Oh
    • Journal of the Korean Society for Information Management
    • /
    • v.41 no.3
    • /
    • pp.269-288
    • /
    • 2024
  • In the era of digital transformation, discussions about digital curation have become increasingly active not only in academia but also in various other fields. The primary purpose of this study is to analyze how the concept of digital curation has changed over time by examining the definition statements for digital curation given in academic literature. To this end, academic research papers from 2009, when the term "digital curation" was first mentioned, through 2023 were collected, definition statements explaining the relevant concepts were extracted, and basic statistical analyses were conducted. Using DMR topic modeling and word networks, the relationships among keywords and the changes in their importance over time were examined, and a conceptual map of digital curation was constructed around the main topics. The results revealed that the concept of digital curation centers primarily on the themes of "data preservation," "traditional curator roles," and "product recommendation curation." Depending on researchers' intended uses of digital curation, the concept expanded to include topics such as "content distribution and classification," "information usage," and "curation models." This study is significant in that it analyzed the concept of digital curation through definition statements reflecting the perspectives of researchers, and it holds further value in explicitly identifying, through trends in topic prevalence, the changes in the concepts that researchers emphasize over time.
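The abstract does not name a toolkit, so the sketch below uses plain LDA from gensim as a stand-in for DMR topic modeling (DMR additionally conditions topic priors on document metadata such as publication year). The four English definition sentences are invented examples, not the study's Korean corpus.

```python
from gensim import corpora, models

# Toy stand-in corpus of definition statements (illustrative only).
definitions = [
    "digital curation is the active management and preservation of digital data",
    "digital curation extends the traditional curator role to digital objects",
    "curation services recommend and classify content for information users",
    "digital curation models describe lifecycle stages for data preservation",
]
texts = [d.split() for d in definitions]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

# Plain LDA as a stand-in; DMR would also condition on publication year.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=50, random_state=0)
for topic_id, words in lda.print_topics(num_words=5):
    print(topic_id, words)
```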

The Usability Evaluation of Kiosks for Individuals with Low Vision (저시력 시각장애인의 키오스크 사용성 평가 연구)

  • Kyounghoon Kim;Yumi Kim;Sumin Baeck;Jeong Hyeun Ko
    • Journal of the Korean Society for Information Management
    • /
    • v.41 no.3
    • /
    • pp.331-358
    • /
    • 2024
  • In an era of rapid digital transformation, kiosks have become a common element of daily life. However, their widespread deployment has introduced new challenges for socially marginalized groups, including individuals with disabilities and the elderly. This study evaluates the usability of kiosks for individuals with low vision and proposes improvement strategies. The study was conducted with eight low-vision university students from A University in Gyeongsangbuk-do and four non-disabled university students from Daegu. Usability was assessed through experiments involving a self-service certificate-issuance kiosk and a fast-food restaurant kiosk, using Jakob Nielsen's five usability criteria: learnability, efficiency, memorability, error prevention, and satisfaction. The results revealed that individuals with low vision faced significant difficulties with small text, low contrast, the absence of physical buttons, and the lack of a screen zoom function. To address these issues, the study recommends increasing text size and contrast, incorporating physical buttons, adding zoom functionality, ensuring consistent UI design, and providing auditory feedback. This study provides foundational data for enhancing information accessibility for individuals with low vision and offers insights for kiosk design and policy, thereby contributing to the mitigation of the digital divide.

eBPF-based Container Activity Analysis System (eBPF를 활용한 컨테이너 활동 분석 시스템)

  • Jisu Kim;Jaehyun Nam
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.9
    • /
    • pp.404-412
    • /
    • 2024
  • The adoption of cloud environments has revolutionized application deployment and management, with microservices architecture and container technology serving as key enablers of this transformation. However, these advancements have introduced new challenges, particularly the need to precisely understand service interactions and analyze internal processes in detail within complex service environments such as microservices. Traditional monitoring techniques have proven inadequate for analyzing these environments, leading to growing interest in eBPF (extended Berkeley Packet Filter) technology as a solution. eBPF is a powerful tool for real-time event collection and analysis within the Linux kernel, enabling the monitoring of various events, including file-system activity, in kernel space. This paper proposes an eBPF-based container activity analysis system that monitors events occurring in the kernel space of both containers and host systems in real time and analyzes the collected data. The paper also conducts a comparative analysis of prominent eBPF-based container monitoring systems (Tetragon, Falco, and Tracee), focusing on event detection methods, default policy application, event type identification, and system-call blocking and alert generation. Through this evaluation, the paper identifies the strengths and weaknesses of each system and determines the features necessary for effective monitoring and restriction of container processes. Finally, the proposed system is evaluated in terms of container metadata collection, internal activity monitoring, and system metadata integration, demonstrating the effectiveness and future potential of eBPF-based monitoring systems.
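As a minimal illustration of kernel-space event collection with eBPF (not the proposed system itself), the sketch below uses the bcc toolkit to count openat() syscalls per process from a tracepoint. It requires root and an installed bcc; the container attribution the paper's system would need (e.g., mapping cgroup IDs to container metadata) is omitted here.

```python
import time
from bcc import BPF

# eBPF program: per-PID counter of openat() entries, attached to a tracepoint.
prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(counts, u32, u64);

TRACEPOINT_PROBE(syscalls, sys_enter_openat) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *val = counts.lookup_or_try_init(&pid, &zero);
    if (val) { (*val)++; }
    return 0;
}
"""

b = BPF(text=prog)
print("Counting openat() syscalls for 5 seconds ...")
time.sleep(5)

# Dump the top 10 PIDs by call count from the kernel-side hash map.
for pid, count in sorted(b["counts"].items(), key=lambda kv: -kv[1].value)[:10]:
    print(f"pid={pid.value:<8} openat calls={count.value}")
```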

Mapping and estimating forest carbon absorption using time-series MODIS imagery in South Korea (시계열 MODIS 영상자료를 이용한 산림의 연간 탄소 흡수량 지도 작성)

  • Cha, Su-Young;Pi, Ung-Hwan;Park, Chong-Hwa
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.5
    • /
    • pp.517-525
    • /
    • 2013
  • The time series of the Normalized Difference Vegetation Index (NDVI) obtained from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery forms a waveform that reveals the characteristics of vegetation phenology. This waveform can be decomposed by the Fourier transform into harmonics of various periods; the n-th harmonic represents the NDVI variation with a period of one year divided by n. The values of the individual harmonics, or their relative relationships, have been used to classify vegetation species and to build vegetation maps. Here, we propose a method to estimate the annual amount of carbon absorbed by forest from the 1st-harmonic NDVI value. The 1st harmonic represents the annual growth of the leaves, and by the allometric equations of trees, leaf growth can be considered proportional to total carbon absorption. We compared the 1st-harmonic NDVI values of 6,220 sample points with reference carbon-absorption data obtained by field survey in the forests of South Korea. The 1st-harmonic values were roughly proportional to the amount of carbon absorption irrespective of the species and age of the vegetation, with a proportionality constant of 236 tCO2/5.29 ha/year. The total carbon dioxide absorption of South Korean forests over the last ten years is estimated to be about 56 million tons, which coincides with previous reports obtained by other methods. Given that carbon absorption is becoming a kind of currency through carbon credits, the generality of our method makes it particularly useful.
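A minimal numpy sketch of the harmonic decomposition step is shown below; the 23-point series stands in for one year of MODIS 16-day NDVI composites, and its amplitude, phase, and noise are synthetic illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-year NDVI series: 23 values standing in for MODIS 16-day
# composites at one forest pixel (synthetic numbers, illustration only).
t = np.arange(23)
ndvi = 0.45 + 0.25 * np.sin(2 * np.pi * t / 23 - 1.2) \
     + 0.02 * rng.standard_normal(23)

# Fourier decomposition of the annual series: the k-th coefficient is the
# k-th harmonic, i.e., the component with period (1 year) / k.
coeffs = np.fft.rfft(ndvi)
amplitudes = 2.0 * np.abs(coeffs) / len(ndvi)   # single-sided amplitude spectrum

# The 1st harmonic is the annual cycle; the paper relates its amplitude to
# annual carbon uptake via a proportionality constant fitted to field plots.
print(f"1st-harmonic NDVI amplitude: {amplitudes[1]:.3f}")
```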

The Empirical Study on the Effects of the Team Empowerment caused by the Team-Based Organizational Structure in KBS (팀제가 팀 임파워먼트에 미치는 영향에 관한 연구;KBS 팀제를 중심으로)

  • Ahn, Dong-Su;Kim, Hong
    • Korean Society of Business Venturing: Conference Proceedings
    • /
    • 2006.04a
    • /
    • pp.167-201
    • /
    • 2006
  • Korean corporations are transforming their vertical operational structures into team-based structures to compete in a rapidly changing environment and to improve performance. However, a high percentage of respondents at KBS said that despite the appearance of the present team structure, the organization still operates much like a vertically structured one. This result can be attributed to the lack of study and implementation of empowerment, the key variable for the success of a team-based structure. This study aims to provide policy suggestions on how to implement the process of empowerment by investigating the conditions that hinder it and the attitudes of KBS employees. For the cross-sectional study, this paper examined domestic and international references, surveyed KBS employees, conducted personal interviews, and made direct observations. Approximately 1,200 copies of the questionnaire were distributed, 474 were completed and returned, and the data from 460 usable responses were analyzed with SPSS 12.0. For the longitudinal study, six categories common to this study and "The Report of the Findings of KBS Employees' View of the Team Structure" were selected, and the comparative analysis examined the changes over a ten-month period. The survey findings showed a 24.2%p decrease in responses expressing negative views of the team structure and a 1.29%p decrease in positive responses, indicating a positive transformation and employees' improved understanding and approval of the team structure; KBS must nevertheless address the issue on an ongoing basis. Employee empowerment has been shown to increase the productivity of both the individual and the group. To raise the level of empowerment, management must first exercise new, innovative leadership and build trust between managers and employees. The survey data also showed that extra workload caused by shirking was prevalent across all divisions and ranks, which suggests that the workload is not evenly distributed or shared, and that employees do not trust the assessment and rewards system. More attention must be paid to team size and job allocation, and the present assessment and rewards system needs to be complemented. Because the appropriate type of leadership depends on the characteristics of the organization's structure and the employees' dispositions, KBS must develop its own management and leadership style to suit the characteristics of individual teams. Finally, for a soft landing of the KBS team structure, in-house training and education are necessary.

PCA-based Waveform Classification of Rabbit Retinal Ganglion Cell Activity (주성분분석을 이용한 토끼 망막 신경절세포의 활동전위 파형 분류)

  • 진계환;조현숙;이태수;구용숙
    • Progress in Medical Physics
    • /
    • v.14 no.4
    • /
    • pp.211-217
    • /
    • 2003
  • Principal component analysis (PCA) is a well-known data-analysis method useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance, and it is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g., neurons). In the mean-squared-error sense, PCA provides an optimal linear mapping of the signals spread across a group of variables: these signals are concentrated into the first few components, while the noise, i.e., variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings, and because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached ganglion-cell-side down to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated gold connection lanes terminating in an 8×8 array (spacing 200 µm, electrode diameter 30 µm) in the center of the plate; the MEA 60 system was used to record retinal ganglion cell activity. The action potentials on each channel were sorted with an off-line analysis tool: spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, and several clusters could be separated clearly in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
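The sorting step can be illustrated with a short Python sketch: synthetic spike waveforms (a hypothetical stand-in for the threshold-crossing snippets cut from each MEA channel) are projected onto PC1 and PC2 and then clustered in that 2-D space. The two templates, the noise level, and the use of k-means for the clustering step are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in data: 300 detected spike waveforms, 40 samples each,
# drawn from two unit templates plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
template_a = -np.exp(-((t - 0.30) ** 2) / 0.005)
template_b = -0.6 * np.exp(-((t - 0.45) ** 2) / 0.010)
waveforms = np.vstack([
    template_a + 0.05 * rng.standard_normal((150, 40)),
    template_b + 0.05 * rng.standard_normal((150, 40)),
])

# Project every waveform onto its first two principal components (PC1, PC2),
# then cluster in that 2-D score space.
scores = PCA(n_components=2).fit_transform(waveforms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

for k in range(2):
    print(f"cluster {k}: {np.sum(labels == k)} spikes")
```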

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of artificial intelligence technologies has been rapidly accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Due to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and it is increasingly used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data; such knowledge bases support intelligent processing in various AI applications, for example the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. Much recent research and technology in knowledge-based artificial intelligence uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Generating knowledge from user-created, semi-structured infobox data gives DBpedia high reliability in terms of knowledge accuracy. However, since only about 50% of all pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we present a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, selecting the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, i.e., ontology classes; we then identify the appropriate sentences according to the attributes belonging to that class; and finally we extract knowledge from the selected sentences and convert it into triples.
In order to train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. Through the proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort experts must spend constructing instances according to the ontology schema.
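To give a concrete flavor of BIO-tagged triple extraction (a toy stand-in, not the authors' 200-class/2,500-relation models), the sketch below trains a CRF with sklearn-crfsuite on a single hand-tagged English sentence; the tag scheme, feature set, and sentences are invented for illustration, so the prediction is only indicative.

```python
import sklearn_crfsuite

# One toy BIO-tagged training sentence for infobox-style triple extraction;
# the study generated such tags automatically from Wikipedia dumps.
train_tokens = [["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]]
train_labels = [["B-subject", "O", "O", "B-predicate", "O",
                 "B-object", "I-object", "O"]]

def token_features(sent, i):
    """Simple per-token features; a real extractor would add POS tags, etc."""
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

X = [[token_features(s, i) for i in range(len(s))] for s in train_tokens]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_labels)

# Label an unseen sentence with the trained tagger.
test = ["Busan", "is", "the", "capital", "of", "Nowhere", "."]
pred = crf.predict([[token_features(test, i) for i in range(len(test))]])[0]
print(list(zip(test, pred)))
```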