• Title/Summary/Keyword: Information processing knowledge


Applied Practices on Codification Through Mapping Design Thinking Mechanism with Software Development Process (소프트웨어개발 프로세스와 디자인씽킹 메커니즘의 접목을 통한 코딩화 적용 사례)

  • Seo, Chae Yun;Kim, Jang Hwan;Kim, R.Young Chul
    • KIPS Transactions on Computer and Communication Systems / v.10 no.4 / pp.107-116 / 2021
  • In the context of the Fourth Industrial Revolution, high-quality software is essential across diverse industrial areas. In particular, software-centered universities now attempt to teach creative-thinking-based coding to non-majors and computer beginners. The problem, however, is that the definition and concept of creative-thinking-based software remain insufficiently established. Moreover, in coding education for non-major and first-year students, there is no recognized link between creative thinking methods and coding; in other words, students should be shown how to design and code in practice while learning creative thinking. To solve this problem, we propose a codification of the design thinking mechanism that requires no software engineering knowledge, obtained by mapping creative thinking onto the software development process. With this mechanism, we expect students to acquire coding ability together with creative design skills.

Analysis on Optimal Approach of Blind Deconvolution Algorithm in Chest CT Imaging (흉부 컴퓨터단층촬영 영상에서 블라인드 디컨볼루션 알고리즘 최적화 방법에 대한 연구)

  • Lee, Young-Jun;Min, Jung-Whan
    • Journal of radiological science and technology / v.45 no.2 / pp.145-150 / 2022
  • The main purpose of this work was to restore blurry chest CT images by applying a blind deconvolution algorithm. In general, image restoration is the procedure of improving a degraded image to recover the true or original image. We focused on a blind deblurring approach to chest CT imaging using digital image processing in MATLAB, in which the blind deconvolution technique is performed without any prior knowledge of the underlying point spread function (PSF). We acquired 30 chest CT images from a public source and applied three types of PSFs to estimate the true image and the original PSF. The observed image is modeled as the original image convolved with an isotropic Gaussian PSF or a motion-blur PSF; because the PSF is treated as a black box, restoring the image is called blind deconvolution. Over 30 iterations, we analyzed diverse PSF sizes and approximated the true PSF and the original image. To reduce the ringing effect, we employed a weighting function based on the Sobel filter. The results were compared using three criteria, mean squared error (MSE), root mean squared error (RMSE), and peak signal-to-noise ratio (PSNR); on all three metrics, the optimally sized image outperformed the other two reconstructed sizes. Therefore, we improved blurred chest CT images by applying the blind deconvolution algorithm with an optimal approach.
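The three evaluation criteria named above can be sketched in a few lines (a minimal illustration with NumPy; the toy arrays and the 8-bit peak value are assumptions, not the paper's data):

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a restored image."""
    return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

def rmse(ref, img):
    """Root mean squared error."""
    return mse(ref, img) ** 0.5

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer restoration."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Toy 8-bit images standing in for a reference slice and its restoration.
ref = np.array([[0, 128], [255, 64]], dtype=np.uint8)
rec = np.array([[0, 120], [250, 64]], dtype=np.uint8)
print(round(mse(ref, rec), 2), round(rmse(ref, rec), 2), round(psnr(ref, rec), 2))
# 22.25 4.72 34.66
```

Higher PSNR (and lower MSE/RMSE) against the reference indicates a better restoration, which is how the paper ranks the differently sized PSF reconstructions.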

A De Facto Standard for ERC-20 API Functional Specifications and Its Conformance Review Method for Ethereum Smart Contracts (이더리움 스마트 계약 프로그램의 ERC-20 API 기능 명세의 관례상 표준과 적합성 리뷰 방법)

  • Moon, Hyeon-Ah;Park, Sooyong
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.399-408 / 2022
  • ERC-20, the standard API for Ethereum token smart contracts, was introduced to ensure compatibility among applications such as wallets and decentralized exchanges. However, many compatibility vulnerabilities have arisen because there are neither rigorous functional specifications for each API nor conformance review tools for the standard. In this paper, we propose a new review procedure, and a tool that performs it, for checking whether ERC-20 token smart contracts on the Ethereum blockchain conform to the de facto standard. Based on an analysis of the ERC-20 API functional behavior of the top 100 token smart contracts on the existing Ethereum blockchain, we explicitly defined a new specification of the de facto standard for the ERC-20 API. This specification enabled us to design a systematic review method for Ethereum smart contracts. We developed a tool to support the review method and evaluated several benchmark programs with it.
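The paper's review method targets functional behavior, which is beyond a short sketch; as a minimal illustration of the surface-level side of conformance, the snippet below checks that a contract ABI (hypothetical structure) declares the six functions and two events the ERC-20 standard requires:

```python
# Required members of the ERC-20 interface, per the EIP-20 standard.
REQUIRED_FUNCTIONS = {"totalSupply", "balanceOf", "transfer",
                      "transferFrom", "approve", "allowance"}
REQUIRED_EVENTS = {"Transfer", "Approval"}

def missing_erc20_members(abi):
    """Return the required ERC-20 functions/events absent from an ABI list."""
    funcs = {e["name"] for e in abi if e.get("type") == "function"}
    events = {e["name"] for e in abi if e.get("type") == "event"}
    return sorted(REQUIRED_FUNCTIONS - funcs) + sorted(REQUIRED_EVENTS - events)

# Hypothetical truncated ABI missing allowance() and the Approval event.
abi = [{"type": "function", "name": n}
       for n in ("totalSupply", "balanceOf", "transfer",
                 "transferFrom", "approve")]
abi.append({"type": "event", "name": "Transfer"})
print(missing_erc20_members(abi))  # ['allowance', 'Approval']
```

Note that such a signature check says nothing about whether, for example, `transfer` actually moves balances correctly; that behavioral conformance is exactly the gap the paper's specification and review tool address.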

Comparison of Experienced and Inexperienced Consumers' Utilisation of Extrinsic Cues in Product Evaluation: Evidence from the Korean Fine Arts Market

  • Kim, Yoonjeun;Park, Kiwan;Kim, Yaeri;Chung, Youngmok
    • Asia Marketing Journal / v.17 no.3 / pp.105-127 / 2015
  • This study compares experienced and inexperienced consumers' patterns of cue utilisation in product evaluations in the arts market. Borrowing the notion of high- and low-scope cues introduced by the cue-diagnosticity framework, we differentiate between the two most readily discernible extrinsic cues in the fine arts market: an art gallery's brand reputation (a high-scope cue) and certificates of authenticity (a low-scope cue). These two cues are different in nature; the former is more abstract, intangible, and rich in content, and so more difficult to interpret than the latter. Given the differences in experienced and inexperienced consumers' information processing styles, we hypothesise that experienced arts consumers form perceived credibility of, and purchase intentions towards, artworks based on high-scope cues, whereas inexperienced consumers do so based on low-scope cues. To test this hypothesis, we conducted a consumer intercept study at Korea's two most representative art fairs. Participants were categorised as experienced or inexperienced based on prior purchase experience, and their responses to a set of attribute combinations for two artworks created by the same artist were collected. The results indicate that experienced participants show higher purchase intentions when an art gallery's reputation is very high, whereas inexperienced participants show higher purchase intentions when artworks are accompanied by certificates of authenticity. This congruency effect between prior experience and cue type is mediated by the perceived credibility of the artworks. The findings suggest a correspondence between a consumer's prior experience and the types of extrinsic cues that matter in product evaluations. To the best of our knowledge, this study is the first to investigate the role of prior experience in determining when high- or low-scope cues are used. It also provides a useful frame of reference for advising marketers on an effective sales approach based on a client's prior purchase experience.

A study on Korean multi-turn response generation using generative and retrieval model (생성 모델과 검색 모델을 이용한 한국어 멀티턴 응답 생성 연구)

  • Lee, Hodong;Lee, Jongmin;Seo, Jaehyung;Jang, Yoonna;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.1 / pp.13-21 / 2022
  • Recent deep learning research achieves excellent performance in most natural language processing (NLP) fields with pre-trained language models. In particular, auto-encoder-based language models have proven their performance and usefulness in various areas of Korean language understanding. However, decoder-based Korean generative models still struggle even to generate simple sentences, and there is little detailed research or data for dialogue, the field where generative models are most commonly used. Therefore, this paper constructs multi-turn dialogue data for a Korean generative model. In addition, we compare and analyze performance after improving the dialogue ability of the generative model through transfer learning, and we propose a method of supplementing the model's insufficient dialogue generation ability by extracting recommended response candidates from external knowledge with a retrieval model.
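The retrieval step, pulling candidate responses from external knowledge to back up the generator, can be sketched with a simple bag-of-words cosine ranking (our own minimal stand-in; the paper's retrieval model is a trained neural ranker, not this heuristic):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_candidates(query, knowledge, k=2):
    """Rank external-knowledge snippets against the latest dialogue turn."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(doc.lower().split())), doc) for doc in knowledge]
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]

# Hypothetical external-knowledge snippets.
kb = ["the library opens at nine",
      "trains to seoul run every hour",
      "the library closes on mondays"]
print(retrieve_candidates("when does the library open", kb, k=2))
```

The top-k snippets would then be fed to the generative model as candidate material, supplementing its own (weaker) generation ability as the paper proposes.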

Analysis of Artificial Intelligence Mathematics Textbooks: Vectors and Matrices (<인공지능 수학> 교과서의 행렬과 벡터 내용 분석)

  • Lee, Youngmi;Han, Chaereen;Lim, Woong
    • Communications of Mathematical Education / v.37 no.3 / pp.443-465 / 2023
  • This study examines the content on vectors and matrices in the Artificial Intelligence Mathematics textbooks (AIMTs) of the 2015 revised mathematics curriculum. Given their importance for understanding AI, we analyzed how foundational mathematical concepts, specifically the definitions and related sub-concepts of vectors and matrices, are implemented in these textbooks. The findings reveal significant variation in the presentation of vector-related concepts, definitions, and sub-concepts, and in the level of contextual information and description of, for example, vector size, distance between vectors, and mathematical interpretation. While there are few discrepancies in the presentation of fundamental matrix concepts, differences emerge across textbooks in the subtypes of matrices used and in the matrix operations applied to image data. Textbooks also vary in how strongly they emphasize the interconnectedness of mathematics when explaining vector-related concepts, with some placing more emphasis on AI-related knowledge than on mathematical concepts and principles. The implications for future curriculum development and textbook design are discussed, providing insights into improving AI mathematics education.
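The vector concepts the analysis tracks (vector size, distance between vectors) and a basic matrix operation can be written out in a few lines (our own illustrative sketch, not taken from any of the analyzed textbooks):

```python
import math

def norm(v):
    """Euclidean size (norm) of a vector: the square root of the sum of squares."""
    return math.sqrt(sum(x * x for x in v))

def distance(u, v):
    """Distance between two vectors: the norm of their difference."""
    return norm([a - b for a, b in zip(u, v)])

def matmul(A, B):
    """Matrix product, the kind of operation textbooks apply to image data."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

print(norm([3, 4]))                                 # 5.0
print(distance([1, 1], [4, 5]))                     # 5.0
print(matmul([[1, 2], [3, 4]], [[0, 1], [1, 0]]))   # [[2, 1], [4, 3]]
```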

X-TOP: Design and Implementation of TopicMaps Platform for Ontology Construction on Legacy Systems (X-TOP: 레거시 시스템상에서 온톨로지 구축을 위한 토픽맵 플랫폼의 설계와 구현)

  • Park, Yeo-Sam;Chang, Ok-Bae;Han, Sung-Kook
    • Journal of KIISE:Computing Practices and Letters / v.14 no.2 / pp.130-142 / 2008
  • Unlike other ontology languages, TopicMaps can integrate numerous heterogeneous information resources using locational information, without any data transformation. Although many editors have been developed for topic maps, they are standalone tools intended only for writing XTM documents. As a result, they require too much time to handle large-scale data and raise practical problems when integrating with legacy systems, which are mostly based on relational databases. In this paper, we map a large-scale topic map structure based on XTM 1.0 onto an RDB structure to minimize processing time and to build ontologies on legacy systems. We implement a topic map platform called X-TOP that enhances the efficiency of ontology construction and provides interoperability between XTM documents and databases. Moreover, conventional SQL tools and other application development tools can be used for topic map construction in X-TOP. X-TOP has a 3-tier architecture to support flexible user interfaces and diverse DBMSs. This paper shows the usability of X-TOP through a comparison with conventional tools and an application to a healthcare cancer ontology.
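The core idea, storing topic map constructs in relational tables so that ordinary SQL tools can query them, can be sketched with an in-memory SQLite database (a heavily simplified, hypothetical schema for illustration; X-TOP's actual XTM 1.0 mapping is far richer):

```python
import sqlite3

# Toy relational mapping of two XTM constructs: topics and associations.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE topic (id TEXT PRIMARY KEY, base_name TEXT);
    CREATE TABLE association (
        id INTEGER PRIMARY KEY,
        assoc_type TEXT,
        topic1 TEXT REFERENCES topic(id),
        topic2 TEXT REFERENCES topic(id)
    );
""")
conn.execute("INSERT INTO topic VALUES ('t1', 'ontology')")
conn.execute("INSERT INTO topic VALUES ('t2', 'topic map')")
conn.execute("INSERT INTO association (assoc_type, topic1, topic2) "
             "VALUES ('represented-by', 't1', 't2')")

# Conventional SQL can now traverse the map, as the paper emphasizes.
row = conn.execute("""
    SELECT a.assoc_type, s.base_name, o.base_name
    FROM association a
    JOIN topic s ON s.id = a.topic1
    JOIN topic o ON o.id = a.topic2
""").fetchone()
print(row)  # ('represented-by', 'ontology', 'topic map')
```

Keeping the map in an RDB is what lets X-TOP sidestep the scalability limits of standalone XTM editors while staying interoperable with legacy systems.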

A study on legal service of AI

  • Park, Jong-Ryeol;Noe, Sang-Ouk
    • Journal of the Korea Society of Computer and Information / v.23 no.7 / pp.105-111 / 2018
  • Last March, the Go match between AlphaGo, the AI Go program developed by Google DeepMind, and professional Go player Lee Sedol showed us that the Fourth Industrial Revolution driven by AI has come close. Many AI-based systems are under development, including programs for researching legal information, systems for predicting judgments, and big-data processing, and some say that an AI legal practitioner is ready to appear. Because the legal field is largely based on text documents, it is comparatively easy to adopt artificial intelligence technology there. When a legal professional receives a case, the first task is searching for legal information and judicial precedents, which is one of the strengths of AI. It is very difficult for a human being to absorb the flow of legal knowledge and figures by analyzing them, but for AI this is a simple job. AI's ability to search for regulations, precedents, and literature related to a legal issue already exceeds our expectations; AI is estimated to be able to review one billion pages of legal documents per second, and many agree that much legal work will be replaced by it. Along with the development of AI services, legal services are becoming more advanced, and if they are devoted to the ethical resolution of legal issues, which is the final goal, this will benefit not only the legal field but also the nation's trust in it. If the nation trusts its legal services, they will never be completely replaced by AI. Moreover, if the profession keeps offering advanced, ethical, and prompt legal services, the value of law to society will increase and ultimately contribute to the nation. In this era of competition with AI, we should work to increase the value of the traditional legal services provided by humans. In the future, the mark of a good legal professional will be the ability to use AI. The only field left to humans will be understanding and attending to the human emotions caused by legal problems, which AI cannot do. What, then, should be the attitude of legal professionals in this period? It is to learn the new technology and apply it in the field rather than resist it; this will be the way to survive in the AI era.

Design and Implementation of Feature Catalogue Builder based on the S-100 Standard (S-100 표준 기반 피처 카탈로그 제작지원 시스템의 설계 및 구현)

  • Park, Daewon;Kwon, Hyuk-Chul;Park, Suhyun
    • KIPS Transactions on Software and Data Engineering / v.2 no.8 / pp.571-578 / 2013
  • The IHO S-100 is a standard universal hydrographic data model for information services that integrate various maritime data and provide proper information for vessel safety. S-100 is used to develop the S-10x product specifications, which are standard guidelines for the creation and delivery of specific maritime data sets. A product specification for feature-based data, such as ENC (Electronic Navigational Chart) data, includes a feature catalogue that describes the characteristics of the features in that data. A feature catalogue is developed by domain experts with knowledge of the target domain's data; however, it is not feasible to build a feature catalogue that conforms to the XML schema by hand, and the need for technology to support feature catalogue building has been discussed at IHO TSMAD committee meetings. Therefore, we present a feature catalogue builder, a GUI (Graphical User Interface) system that supports domain experts in building feature catalogues in XML. The builder connects to the FCD (Feature Concept Dictionary) register in the IHO (International Hydrographic Organization) GI (Geographic Information) registry, and it helps domain experts select proper feature items based on the relationships between register items.
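The kind of output such a builder generates, a machine-readable catalogue of feature types and attribute bindings, can be sketched with Python's `xml.etree.ElementTree` (a toy fragment; the element names below are illustrative only and do not reproduce the actual S-100 feature catalogue XML schema):

```python
import xml.etree.ElementTree as ET

# Toy catalogue: one feature type with one bound attribute. DEPARE/DRVAL1
# are borrowed from hydrographic usage, but this structure is hypothetical.
catalogue = ET.Element("FeatureCatalogue", name="Demo", version="0.1")
feature = ET.SubElement(catalogue, "FeatureType", code="DEPARE")
ET.SubElement(feature, "name").text = "Depth area"
attr = ET.SubElement(feature, "attributeBinding", code="DRVAL1")
attr.set("multiplicity", "0..1")

xml_text = ET.tostring(catalogue, encoding="unicode")
print(xml_text)
```

A GUI builder like the one described essentially automates producing (and validating) such XML from register items, sparing domain experts from writing schema-conformant documents by hand.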

Analysis of fMRI Signal Using Independent Component Analysis (Independent Component Analysis를 이용한 fMRI신호 분석)

  • 문찬홍;나동규;박현욱;유재욱;이은정;변홍식
    • Investigative Magnetic Resonance Imaging / v.3 no.2 / pp.188-195 / 1999
  • fMRI signals are composed of many different components. It is very difficult to find accurate parameters for a model of the fMRI signal containing only neural activity, even though the signal patterns can be estimated by modeling several signal components. Besides the noise from physiologic motion, subject motion and MR instrument noise make fMRI signals even harder to analyze. It is therefore not easy to select a reference waveform that accurately reflects neural activity, and the analysis of the various signal patterns carrying neural-activity information is a key issue in fMRI post-processing. In the present study, fMRI data were analyzed with Independent Component Analysis (ICA), which requires no a priori knowledge or reference data. ICA can be more effective than analysis by cross-correlation and can separate signal patterns with delayed responses or motion-related components. Principal Component Analysis (PCA) thresholding, wavelet spatial filtering, and analysis of a subset of the whole image set can be used to reduce the dimensionality of the data before ICA, and these preceding steps may enable a more effective analysis.
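The PCA dimensionality-reduction step the authors apply before ICA can be sketched with NumPy's SVD (a generic sketch on random data; the study's actual thresholds and fMRI time series are not reproduced here):

```python
import numpy as np

def pca_reduce(X, k):
    """Project (time x voxel) data onto its first k principal components."""
    Xc = X - X.mean(axis=0)                 # center each voxel time course
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]            # scores and component directions

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))                # 40 time points, 6 "voxels"
scores, comps = pca_reduce(X, 2)
print(scores.shape, comps.shape)            # (40, 2) (2, 6)
```

Keeping only the top components shrinks the degrees of freedom that ICA must untangle, which is exactly why the paper recommends PCA (or wavelet filtering, or sub-image analysis) as a preceding step.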
