• Title/Summary/Keyword: step-by-step learning


A Conflict Resolution Model between Agents in Ubiquitous Computing Environments (유비쿼터스 컴퓨팅 환경에서 발생하는 에이전트간 충돌 해결 모델)

  • 이건수;김민구
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2004.11a
    • /
    • pp.249-258
    • /
    • 2004
  • Research on ubiquitous computing, now actively under way, focuses on ways for users to access the network anytime and anywhere and receive a variety of computing services. Such free access to the network, unbound by time and space, will change the paradigm of everyday life. The area where ubiquitous computing will bring the greatest change is the Intelligent Home Network in ordinary home environments: when you come home, the door opens automatically, the lights switch on, a missed TV program has been recorded and can be played back at any time, and the bath is drawn at a suitable moment. Before you leave the house, it can hand you an umbrella according to the weather forecast, remind you of your schedule, and even pick out the clothes to wear. All of this is the picture of the intelligent home network that ubiquitous computing technology will bring. To provide effective services to every user, however, a technique is needed that efficiently prevents conflicts between agents over resource-access rights in home-network resource management. Resource management in a ubiquitous computing environment has three characteristics: continuity of occupancy, dependency among resources, and association between resources and users. This paper proposes a resource negotiation method for efficiently handling the resource-conflict situations that can arise in such environments. The method consists of resource preemption and distribution rules based on temporal logic, built on these resource-management characteristics. (A minimal illustrative sketch of such a preemption rule appears after this entry.)

  • PDF
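To make the preemption-and-distribution idea above concrete, here is a minimal, hypothetical sketch in Python. The abstract does not give the paper's actual temporal-logic rules, so the `Claim` fields and the thresholds below are illustrative assumptions only.

```python
# Hypothetical sketch of an agent resource-preemption rule in the spirit of
# the abstract above; the paper's actual temporal-logic rules are not shown,
# so every name, field, and threshold here is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Claim:
    agent: str
    priority: int        # higher value = more important task
    start: float         # time the agent began (or wants to begin) using the resource
    duration: float      # expected occupancy time

def may_preempt(current: Claim, request: Claim, now: float) -> bool:
    """Decide whether `request` may take the resource from `current`.

    Encodes two of the abstract's resource-management properties:
    - continuity of occupancy: an almost-finished occupant is not interrupted;
    - priority-based distribution: otherwise the higher-priority claim wins.
    """
    remaining = (current.start + current.duration) - now
    if remaining <= 0:                       # occupant is done; resource is free
        return True
    if remaining < 0.1 * current.duration:   # preserve continuity near completion
        return False
    return request.priority > current.priority

# Example: a high-priority security agent preempts a long-running media agent.
tv = Claim("media-agent", priority=1, start=0.0, duration=120.0)
alarm = Claim("security-agent", priority=5, start=60.0, duration=5.0)
print(may_preempt(tv, alarm, now=60.0))  # True
```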

A Study on the Knowledge Base Construction of Expert System for S/W Project Management (소프트웨어 사업관리 지원용 전문가시스템의 지식베이스 구축에 관한 연구)

  • 김화수;최병권
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2000.11a
    • /
    • pp.397-406
    • /
    • 2000
  • Most defense information system software consists of large-scale, complex real-time systems demanding high availability, reliability, responsiveness, and accuracy. To build low-cost, high-efficiency defense management and foster strong combat power, efficient software development practices for defense information systems are required. This study develops domain knowledge for the knowledge base of an expert system that gives software project managers of defense information systems concrete, detailed guidance from a technical project-management perspective, so that they can coordinate and control developers and users while supervising a development project, grasp the characteristics of the system in question, and carry the project through successfully. Knowledge that existing project managers have accumulated through experience (techniques, policies, ideas, and know-how) is captured, and facts identified from the life-cycle-phase guidance in project documents are built into a knowledge base; when a project manager needs it, the system presents duties and detailed activities through an explanation module. This provides a foundation that lets managers keep work connected and continue carrying out their duties even when they lack project-management experience or when managers are replaced. This paper develops the domain knowledge to be used when building an expert system that supports the project manager's duties and detailed activities for each software life-cycle phase in defense software development projects; for the expected effects of applying the results, refer to the main text.

  • PDF

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.11
    • /
    • pp.681-688
    • /
    • 2015
  • The k-nearest-neighbor (k-NN) graph construction is an important operation in many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its simplicity, the brute-force k-NN graph construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Thus, the (key, value)-based distributed framework MapReduce is increasingly used together with Locality Sensitive Hashing (LSH), which is efficient for high-dimensional, sparse data. Following this two-stage strategy, we use LSH to divide users into small subsets and then compute similarities between pairs within each subset by brute force on MapReduce. The candidate-group generation stage is critical, since the brute-force calculation is performed in the following step; however, existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate. (A hedged sketch of the LSH bucketing step appears below.)
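As a rough illustration of the candidate-grouping stage described above, the following sketch buckets vectors with signed random projections, one common LSH family; the paper's exact hash family and regrouping rule are not given in the abstract, so everything here is an assumption.

```python
# Illustrative sketch of LSH-based candidate grouping for k-NN graph
# construction; signed random projections (random hyperplanes) are assumed.
import numpy as np
from collections import defaultdict

def lsh_buckets(vectors: np.ndarray, num_bits: int = 8, seed: int = 0):
    """Group row vectors into candidate buckets by signed random projections."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((vectors.shape[1], num_bits))
    signs = vectors @ planes > 0                  # (n, num_bits) boolean signatures
    buckets = defaultdict(list)
    for i, bits in enumerate(signs):
        buckets[bits.tobytes()].append(i)         # same signature -> same bucket
    return buckets

# Brute-force k-NN is then run only inside each bucket; oversized buckets are
# exactly what the paper's regrouping step is meant to break up.
X = np.random.default_rng(1).standard_normal((1000, 64))
sizes = [len(v) for v in lsh_buckets(X).values()]
print(max(sizes), sum(sizes))                     # largest bucket, total items
```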

Scheduling System using CSP for Effective Assignment of Repair Warrant Job (효율적인 A/S작업 배정을 위한 CSP기반의 스케줄링 시스템)

  • 심명수;조근식
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2000.11a
    • /
    • pp.247-256
    • /
    • 2000
  • Today's companies invest heavily not only in selling products but also in after-sales service, for the sake of corporate credibility and image. Supplying such high-quality after-sales service requires managing a large workforce rationally, and resolving requested repair jobs quickly requires assigning work to staff sensibly; a scheduling problem therefore arises in assigning jobs so that requested work is completed within the given time while minimizing company cost. This paper studies a scheduling system that rationally assigns after-sales (A/S) jobs for chemical instruments to service staff. We built the scheduling model by analyzing the work schedule of Youngjin Science Co., which sells and maintains HP chemical-analysis systems, extracting the necessary domains and the constraints implied by its customer-service and workforce-management strategies. To solve the scheduling problem we applied domain filtering, a constraint satisfaction problem (CSP) technique: the parts of a variable's domain ruled out by the constraints are filtered away, shrinking the domains of the variables involved. We also compared cost-oriented scheduling with customer-satisfaction-oriented scheduling and used the trade-off between them to obtain the best solution. In experiments, jobs were assigned to staff more efficiently, more jobs were completed within the given time, and processing costs were reduced. (A small illustrative sketch of CSP domain filtering appears after this entry.)

  • PDF
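The following is a minimal sketch of the domain-filtering idea described in the entry above, assuming a simple "no double-booked time slot" constraint; the actual constraints of the A/S scheduling model are not spelled out in the abstract.

```python
# Minimal sketch of CSP domain filtering: constraints prune values from
# variables' domains. The "no double-booking" constraint below is an
# illustrative assumption, not the paper's actual constraint set.
def filter_domains(domains, assignments):
    """Remove time slots already assigned to a technician from that
    technician's remaining job domains (a simple consistency pass)."""
    filtered = {}
    for (tech, job), slots in domains.items():
        taken = {slot for (t, slot) in assignments if t == tech}
        filtered[(tech, job)] = [s for s in slots if s not in taken]
    return filtered

# Two jobs for one technician: assigning slot 9 prunes it from both domains.
domains = {("tech1", "jobA"): [9, 10, 11], ("tech1", "jobB"): [9, 10]}
print(filter_domains(domains, assignments={("tech1", 9)}))
# {('tech1', 'jobA'): [10, 11], ('tech1', 'jobB'): [10]}
```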

Development of Block-based Code Generation and Recommendation Model Using Natural Language Processing Model (자연어 처리 모델을 활용한 블록 코드 생성 및 추천 모델 개발)

  • Jeon, In-seong;Song, Ki-Sang
    • Journal of The Korean Association of Information Education
    • /
    • v.26 no.3
    • /
    • pp.197-207
    • /
    • 2022
  • In this paper, we develop a machine-learning-based block-code generation and recommendation model intended to reduce learners' cognitive load during coding education. Using a natural language processing model and fine-tuning, the model learns from block code that learners have assembled in a block programming environment and then generates and recommends the selectable blocks for the next step. To build the model, a training dataset was produced by pre-processing 50 block codes from 'Entry', a popular block programming language website. After dividing the pre-processed blocks into training, validation, and test sets, we developed block-code generation models based on LSTM, Seq2Seq, and GPT-2. In the performance evaluation, GPT-2 outperformed the LSTM and Seq2Seq models on the BLEU and ROUGE scores, which measure sentence similarity. For the data generated by the GPT-2 model, BLEU and ROUGE scores were relatively similar across cases, except where the number of blocks was 1 or 17. (A small sketch of a BLEU-style similarity check appears below.)
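As a small illustration of the BLEU-based similarity evaluation mentioned above, the sketch below scores a generated block sequence against a reference one; the paper's exact tokenization of Entry blocks is not given, so the block names are made up.

```python
# Hedged sketch of a BLEU similarity check between a reference block sequence
# and a model-generated one; block names below are illustrative assumptions.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["when_run", "move_steps", "repeat", "turn_right"]]  # learner's actual next blocks
candidate = ["when_run", "move_steps", "repeat", "turn_left"]     # model-generated blocks

# Smoothing avoids zero scores on short block sequences with missing n-grams.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```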

Building Sentence Meaning Identification Dataset Based on Social Problem-Solving R&D Reports (사회문제 해결 연구보고서 기반 문장 의미 식별 데이터셋 구축)

  • Hyeonho Shin;Seonki Jeong;Hong-Woo Chun;Lee-Nam Kwon;Jae-Min Lee;Kanghee Park;Sung-Pil Choi
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.4
    • /
    • pp.159-172
    • /
    • 2023
  • In general, social problem-solving research aims to create important social value by offering meaningful answers to various pending social issues using scientific technologies. However, although numerous and extensive research attempts have been made to alleviate social problems nationwide, many important social challenges remain and much work is still to be done. To facilitate the entire process of social problem-solving research and maximize its efficacy, it is vital to clearly identify and grasp the important, pressing problems to be focused upon. The problem-discovery step could be drastically improved if current social issues could be identified automatically from existing R&D resources such as technical reports and articles. This paper introduces a comprehensive dataset, essential for building a machine learning model, for automatically detecting social problems and solutions in national research reports. We first collected a total of 700 research reports regarding social problems and issues. Through an intensive annotation process, we built a total of 24,022 sentences, each carrying a category or label closely related to social problem-solving, such as problem, purpose, solution, and effect. Furthermore, we implemented four sentence classification models based on various neural language models and conducted a series of performance experiments using our dataset. In the experiments, the model fine-tuned from the KLUE-BERT pre-trained language model showed the best performance, with an accuracy of 75.853% and an F1 score of 63.503%. (A minimal fine-tuning sketch appears below.)
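A minimal sketch of fine-tuning KLUE-BERT for sentence classification with the Hugging Face Transformers library is shown below; the checkpoint name `klue/bert-base` is the publicly available one, while the label count, hyperparameters, and example data are placeholders, since the paper's training details are not in the abstract.

```python
# Minimal sketch of fine-tuning KLUE-BERT for sentence classification.
# NUM_LABELS and the training example are placeholders, not the paper's setup.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

NUM_LABELS = 4  # e.g., problem / purpose / solution / effect (assumed)
tok = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "klue/bert-base", num_labels=NUM_LABELS)

class SentenceDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tok(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

train_ds = SentenceDataset(["예시 문장입니다."], [0])  # placeholder data
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=3),
                  train_dataset=train_ds)
trainer.train()
```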

Implementation of Git's Commit Message Classification Model Using GPT-Linked Source Change Data

  • Ji-Hoon Choi;Jae-Woong Kim;Seong-Hyun Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.123-132
    • /
    • 2023
  • Git's commit messages record the history of source changes as a project progresses or operates. By utilizing this historical data, project risks and project status can be identified, reducing costs and improving time efficiency. Much related research is in progress; within this area, some work classifies commit messages by software-maintenance type, and among published studies the highest reported classification accuracy is 95%. In this paper, we set out to build solutions on top of a commit classification model, and in particular to remove the limitation that the most accurate model among existing studies can only be applied to programs written in the Java language. To this end, we designed and implemented an additional step that standardizes source change data into natural language using GPT. This paper explains the process of extracting commit messages and source change data from Git, standardizing the source change data with GPT, and training with the DistilBERT model. Verification measured an accuracy of 91%. The proposed model was implemented and verified to be accurate and to classify commits without depending on a specific programming language. In the future, we plan to study a classification model using Bard and a project-management tool built on the proposed classification model. (An illustrative classification sketch appears below.)
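The sketch below illustrates the DistilBERT classification stage on a commit message plus a GPT-standardized change description; the label set and the untrained base checkpoint are assumptions, not the paper's released model.

```python
# Hedged sketch of the DistilBERT classification stage described above.
# The label names and the base (not yet fine-tuned) checkpoint are
# illustrative assumptions; fine-tune on labeled commits before real use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["corrective", "adaptive", "perfective"]       # common maintenance types
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS))

# Input as the paper describes it: commit message plus GPT-standardized
# natural-language description of the source change.
text = ("Fix null pointer check in session handler. "
        "The change adds a guard clause before dereferencing the session object.")
with torch.no_grad():
    logits = model(**tok(text, return_tensors="pt", truncation=True)).logits
print(LABELS[int(logits.argmax())])  # untrained weights => arbitrary label
```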

Gross, organoleptic and histologic assessment of cadaveric equine heads preserved using chemical methods for veterinary surgical teaching

  • Rodrigo Romero Correa;Rubens Peres Mendes;Diego Darley Velasquez Pineros;Aymara Eduarda De Lima;Andre Luis do Valle De Zoppa;Luis Claudio Lopes Correia da Silva;Ricardo de Francisco Strefezzi;Silvio Henrique de Freitas
    • Journal of Veterinary Science
    • /
    • v.25 no.2
    • /
    • pp.29.1-29.11
    • /
    • 2024
  • Background: Preservation of biological tissues has been used since ancient times. Regardless of the method employed, tissue preservation is thought to be a vital step in veterinary surgery teaching and learning. Objectives: This study was designed to determine the usability of chemically preserved cadaveric equine heads for surgical teaching in veterinary medicine. Methods: Six cadaveric equine heads were collected immediately after death or euthanasia and frozen until fixation. Fixation was achieved by using a hypertonic solution consisting of sodium chloride, sodium nitrite, and sodium nitrate, and an alcoholic solution containing ethanol and glycerin. Chemically preserved specimens were stored at low temperatures (2℃ to 6℃) in a conventional refrigerator. The specimens were subjected to gross and organoleptic assessment right after fixative solution injection (D0) and within 10, 20, and 30 days of fixation (D10, D20, and D30, respectively). Tissue samples from the skin, tongue, oral vestibule, and masseter muscle were collected for histological evaluation at the same time points. Results: Physical and organoleptic assessments revealed excellent specimen quality (mean scores higher than 4 on a 5-point scale) in most cases. In some specimens, lower scores (3) were assigned to the range of mouth opening, particularly on D0 and D10. A reduced range of mouth opening may be a limiting factor in teaching activities involving structures located in the oral cavity. Conclusions: The excellent physical, histologic, and organoleptic characteristics of the specimens in this sample support their usability in teaching within the time frame considered. The appropriate physical and organoleptic characteristics (color, texture, odor, and flexibility) of the specimens also support use of the described method for the preparation of reusable anatomical specimens.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is a class of powerful deep neural networks that can analyze and learn hierarchies of visual features. An early such network, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even when a large-scale dataset is available, training a ConvNet from scratch requires expensive resources and is time-consuming. Both obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer-learning approaches: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, an image from the target task is fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which captures more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096 + 4,096 + 1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, as a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. The experiments demonstrate the importance of feature selection for the multiple-layer representation. Moreover, our approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on Caltech-256, 73.1% compared to the 69.2% achieved by the FC8 layer on VOC07, and 52.2% compared to the 48.7% achieved by the FC7 layer on SUN397. We also showed that the proposed approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work. (A minimal sketch of the multi-layer feature extraction with PCA appears below.)
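A compact sketch of the pipeline described above, using torchvision's pre-trained AlexNet as the backbone (an assumption; the paper does not name its framework), is given below: activations of the three fully connected layers are concatenated into a 9,192-dimensional vector and reduced with PCA.

```python
# Illustrative sketch: extract AlexNet's three fully connected layer
# activations, concatenate into a 9,192-d vector, reduce with PCA.
# torchvision's AlexNet is assumed; layer order follows its implementation.
import torch
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.decomposition import PCA

model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1).eval()

def fc_features(x: torch.Tensor) -> torch.Tensor:
    """Return concatenated FC6 (4096) + FC7 (4096) + FC8 (1000) activations."""
    with torch.no_grad():
        feats = torch.flatten(model.avgpool(model.features(x)), 1)
        acts = []
        for layer in model.classifier:          # Dropout/Linear/ReLU stack
            feats = layer(feats)
            if isinstance(layer, torch.nn.Linear):
                acts.append(feats)              # grab each FC layer's output
        return torch.cat(acts, dim=1)           # (batch, 9192)

batch = torch.randn(64, 3, 224, 224)            # placeholder images
rep = fc_features(batch).numpy()
salient = PCA(n_components=32).fit_transform(rep)  # keep salient components
print(rep.shape, salient.shape)                 # (64, 9192) (64, 32)
```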

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, demand is increasing for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. These products may, however, contain foreign substances introduced during manufacturing, such as iron, aluminum, or plastic, which can cause serious problems or malfunctions, and even fires in electric vehicles. To address this, it is necessary to determine whether foreign materials are present inside a product, and many inspections are performed with non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and judging the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even with X-ray equipment, and noise can also make foreign objects difficult to detect. Moreover, to meet manufacturing-speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower detection accuracy. In this paper, we therefore propose a five-step approach to overcome the low-quality limitations that make foreign substances hard to detect. First, the global contrast of the X-ray image is increased through histogram stretching. Second, a local contrast enhancement technique strengthens high-frequency signals and local contrast. Third, unsharp masking is applied to sharpen edges, making objects more visible. Fourth, a Residual Dense Block (RDB) super-resolution network is used for noise reduction and image enhancement. Finally, the YOLOv5 algorithm is trained and used to detect foreign objects. Experimental results with the proposed method show an improvement of more than 10% in performance metrics such as precision compared to the low-quality images. (A sketch of the first three enhancement steps appears below.)
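The first three steps of the pipeline can be sketched with OpenCV as below; CLAHE stands in for the unspecified local-contrast method, and all parameter values are illustrative assumptions.

```python
# Hedged sketch of the first three enhancement steps named in the abstract:
# histogram stretching, local contrast enhancement (CLAHE assumed), and
# unsharp masking. Parameter values are illustrative assumptions.
import cv2
import numpy as np

def enhance(gray: np.ndarray) -> np.ndarray:
    # Step 1: histogram stretching to the full 0..255 range (global contrast).
    lo, hi = float(gray.min()), float(gray.max())
    stretched = ((gray.astype(np.float32) - lo) / max(hi - lo, 1e-6) * 255.0)
    stretched = stretched.astype(np.uint8)

    # Step 2: local contrast enhancement (CLAHE stands in for the paper's method).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    local = clahe.apply(stretched)

    # Step 3: unsharp masking = image + amount * (image - blurred).
    blurred = cv2.GaussianBlur(local, (0, 0), sigmaX=3)
    return cv2.addWeighted(local, 1.5, blurred, -0.5, 0)

xray = np.random.randint(0, 120, (512, 512), dtype=np.uint8)  # placeholder image
out = enhance(xray)
print(out.dtype, out.shape)
```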