• Title/Summary/Keyword: Approaches to Learning


The Life Experiences of the Deaf Elderly (농아노인의 생활 경험)

  • Park, Ina;Hwang, YoungHee;Kim, Hanho
    • 한국노년학, v.36 no.3, pp.525-540, 2016
  • The purpose of this study was to investigate the life experiences of the deaf elderly. It also aimed to promote understanding of their culture and living difficulties among people with normal hearing, and to provide basic data to help the deaf elderly live with others as members of the community. Phenomenological qualitative research was conducted with seven deaf elderly subjects. Based on in-depth interviews and analysis, their life experiences were categorized into "unforgettable wounds," "life in the community," "life with the family," "marriage of the deaf elderly," and "living by adjusting to reality." First, the subcategories of "unforgettable wounds" include "receiving no treatment for fever," "damage by the Korean War," "alienation from the family," and "people's cold eyes." The deaf elderly had lived with heart wounds they could not forget. Second, the subcategories of "life in the community" include "inconvenience in life," "disadvantages in life," and "severed life." The deaf elderly were not only subjected to inconvenience and disadvantages in life but also suffered loneliness, being cut off from the community. Third, the subcategories of "life with the family" include "not communicating with children," "being abandoned again," "being used by the family," "being lonely even with the family," and "wishing to live independently from the family." The deaf elderly were not supported by their families and were abandoned or used by them, leading solitary lives. Fourth, the subcategories of "marriage of the deaf elderly" include "being sent as a surrogate mother," "frequent remarriage and divorce," and "leaning on each other as a married couple." The deaf elderly form their own marriage culture and lean on each other. Finally, the subcategories of "living by adjusting to reality" include "getting help from neighbors," "behaving oneself right in life," "learning Hangul," "living by working," "living freely," "living by missing," "controlling the impulse to end life," and "resorting to religion." The deaf elderly constituted the most alienated and vulnerable group, with no access to benefits due to their limitations as a linguistic and social minority, yet they strove to form their own culture and adjust to reality for themselves. Based on these findings, the study makes the following proposals: first, practical approaches are needed to heal the ineffaceable wounds in the hearts of the deaf elderly. Second, policies are needed to help them experience no inconvenience or disadvantage as members of the community and to communicate with people with normal hearing. Third, practical approaches should enable them to gain recognition and support from their families and share love with them. Finally, practical policy approaches should help people with normal hearing understand the culture of the deaf elderly, and assist the deaf elderly in receiving support from the community and living with others within it.

A Generalized Adaptive Deep Latent Factor Recommendation Model (일반화 적응 심층 잠재요인 추천모형)

  • Kim, Jeongha;Lee, Jipyeong;Jang, Seonghyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.29 no.1, pp.249-263, 2023
  • Collaborative filtering, a representative recommendation methodology, consists of two approaches: neighborhood methods and latent factor models. Among these, the latent factor model using matrix factorization decomposes the user-item interaction matrix into two lower-dimensional rectangular matrices and predicts an item's rating through the product of these matrices. Because the factor vectors inferred from rating patterns capture user and item characteristics, this method is superior to neighborhood-based methods in scalability, accuracy, and flexibility. However, it has a fundamental drawback: the need to reflect the diverse preferences of different individuals for items with no ratings. This limitation leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue. ADLFM adaptively learns preferences for each item by using the item description, which provides a detailed summary and explanation of the item; it takes the item description as input, calculates latent vectors for the user and item, and reflects personal diversity using an attention score. However, because it requires a dataset that includes item descriptions, the domains to which ADLFM can be applied are limited, constraining its generality. This study proposes a Generalized Adaptive Deep Latent Factor Recommendation Model, G-ADLFRM, to overcome these limitations. First, we use the item ID, commonly available in recommendation systems, as input instead of the item description. Additionally, we apply improved deep learning structures such as Self-Attention, Multi-head Attention, and Multi-Conv1D. We conducted experiments on various datasets with changes to the input and the model structure. The results showed that when only the input was changed, MAE increased slightly compared to ADLFM due to the accompanying information loss, decreasing recommendation performance. However, the average learning speed per epoch improved significantly as the amount of information to be processed decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1D structure showed performance similar to ADLFM, sufficiently counteracting the information loss caused by the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast learning and inference.
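The latent-factor decomposition this abstract describes, factoring the user-item rating matrix into two low-rank matrices whose product predicts ratings, can be sketched minimally as plain SGD matrix factorization (the ADLFM/G-ADLFRM attention and Conv1D components are omitted, and all names and data here are illustrative):

```python
import numpy as np

def matrix_factorization(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Factor rating matrix R (0 = unrated) into user factors P and item factors Q."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    rated = [(u, i) for u in range(n_users) for i in range(n_items) if R[u, i] > 0]
    for _ in range(steps):
        for u, i in rated:
            err = R[u, i] - P[u] @ Q[i]             # error on an observed rating
            P[u] += lr * (err * Q[i] - reg * P[u])  # SGD step with L2 regularization
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)
P, Q = matrix_factorization(R)
pred = P @ Q.T  # predicted ratings, including the unrated (0) cells
```

The unrated cells of `pred` are what gets recommended; ADLFM-style models additionally reweight these factors with attention over item content.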

Improve the Performance of People Detection using Fisher Linear Discriminant Analysis in Surveillance (서베일런스에서 피셔의 선형 판별 분석을 이용한 사람 검출의 성능 향상)

  • Kang, Sung-Kwan;Lee, Jung-Hyun
    • Journal of Digital Convergence, v.11 no.12, pp.295-302, 2013
  • Many reported methods assume that the people in an image or image sequence have already been identified and localized. People detection is a basis technology for detecting other objects, for human-computer interaction, and for motion recognition, and is therefore one of the most important factors affecting system performance. In this paper, we present an efficient linear discriminant approach for multi-view people detection. We train on labeled data using Fisher's linear discriminant as an efficient learning method. People detection is considerably difficult because it is influenced by people's poses and by changes in illumination. Our approach solves the multi-view scale and people detection problem quickly and efficiently, making it well suited to detecting people automatically. We extract people using Fisher's linear discriminant with hierarchical models that are invariant to pose and background, and we estimate the pose of the detected people. The goal of this paper is to classify people and non-people using Fisher's linear discriminant.
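As a rough sketch of the classifier named above: Fisher's linear discriminant finds the projection that maximizes between-class scatter relative to within-class scatter, then thresholds the projected value. The paper's feature extraction, hierarchical models, and pose estimation are not reproduced; the synthetic data and names below are illustrative only.

```python
import numpy as np

def fisher_lda_fit(X_pos, X_neg):
    """Fisher's linear discriminant: w = Sw^-1 (mu_pos - mu_neg)."""
    mu_p, mu_n = X_pos.mean(axis=0), X_neg.mean(axis=0)
    # Within-class scatter matrix (sum of the per-class scatter matrices)
    Sw = (np.cov(X_pos, rowvar=False) * (len(X_pos) - 1)
          + np.cov(X_neg, rowvar=False) * (len(X_neg) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu_p - mu_n)
    threshold = w @ (mu_p + mu_n) / 2  # midpoint of the projected class means
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)  # 1 = "people", 0 = "non-people"

# Toy stand-in for people / non-people feature vectors
rng = np.random.default_rng(0)
X_pos = rng.normal([2.0, 2.0], 0.7, size=(200, 2))
X_neg = rng.normal([0.0, 0.0], 0.7, size=(200, 2))
w, th = fisher_lda_fit(X_pos, X_neg)
acc = (fisher_lda_predict(X_pos, w, th).mean()
       + (1 - fisher_lda_predict(X_neg, w, th)).mean()) / 2
```

The small ridge term added to `Sw` keeps the solve stable when the scatter matrix is near-singular, a common situation with high-dimensional image features.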

The Development of Efficient Multimedia Retrieval System of the Object-Based using the Hippocampal Neural Network (해마신경망을 이용한 관심 객체 기반의 효율적인 멀티미디어 검색 시스템의 개발)

  • Jeong Seok-Hoon;Kang Dae-Seong
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.2 s.308, pp.57-64, 2006
  • In this paper, we propose a user-friendly object-based multimedia retrieval system using the HCNN (Hippocampus Neural Network). Most existing approaches to content-based retrieval rely on query-by-example or on user-supplied low-level features such as color, shape, and texture. We perform scene change detection and key frame extraction on compressed video streams encoded with standards such as MPEG. We propose a method for automatic color object extraction using ACE (Adaptive Circular filter and Edge) in a content-based multimedia retrieval system, and we build the retrieval system after the HCNN learns the extracted features. The proposed HCNN yields an adaptive, real-time content-based multimedia retrieval system using an excitatory learning method that forwards important features to long-term memory and an inhibitory learning method that forwards unimportant features to short-term memory, controlled by impression.

Empirical Process Monitoring Via On-line Analysis of Complex Process Measurement Data (복잡한 공정 측정 데이터의 실시간 분석을 통한 공정 감시)

  • Cho, Hyun-Woo
    • Journal of the Korea Academia-Industrial cooperation Society, v.17 no.7, pp.374-379, 2016
  • On-line process monitoring schemes are designed to give early warnings of process faults. In the artificial intelligence and machine learning fields, reliable approaches have been utilized, such as kernel-based nonlinear techniques. This work presents a kernel-based empirical monitoring scheme with a small sample problem. The measurement data of normal operations are easy to collect, whereas special events or faults data are difficult to collect. In such situations, noise filtering techniques can be helpful in enhancing the process monitoring performance. This can be achieved by the preprocessing of raw process data and eliminating unwanted variations of data. In this work, the performance of several monitoring schemes was demonstrated using three-dimensional batch process data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate.
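A minimal sketch of such a monitoring scheme, assuming a linear PCA model with an empirical Hotelling T² control limit rather than the kernel-based nonlinear techniques the paper evaluates (all data and names here are illustrative):

```python
import numpy as np

def fit_monitor(X_normal, n_comp=2):
    """Fit a PCA monitoring model on normal operating data; return a T^2 control limit."""
    mu = X_normal.mean(axis=0)
    Xc = X_normal - mu
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                        # principal loadings
    var = (s[:n_comp] ** 2) / (len(Xc) - 1)  # variances of retained components
    t2_train = np.sum((Xc @ P) ** 2 / var, axis=1)
    limit = np.percentile(t2_train, 99)      # empirical 99% control limit
    return mu, P, var, limit

def t2_statistic(x, mu, P, var):
    """Hotelling T^2 of a new measurement; exceeding the limit flags a fault."""
    t = (x - mu) @ P
    return float(np.sum(t ** 2 / var))

# Normal operating data: three variables with different spreads
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 3)) * np.array([2.0, 1.0, 0.3])
mu, P, var, limit = fit_monitor(X)
fault = np.array([12.0, 0.0, 0.0])  # large shift along the dominant direction
t2_fault = t2_statistic(fault, mu, P, var)
```

The noise-filtering idea in the abstract corresponds to monitoring only the retained components: variation outside the principal subspace is discarded before the statistic is computed.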

An Experimental Evaluation of Short Opinion Document Classification Using A Word Pattern Frequency (단어패턴 빈도를 이용한 단문 오피니언 문서 분류기법의 실험적 평가)

  • Chang, Jae-Young;Kim, Ilmin
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.12 no.5, pp.243-253, 2012
  • Opinion mining, which developed from document classification in the data mining field, has become a common interest in domestic and international industries. The core of opinion mining is deciding precisely whether an opinion document is positive or negative. Although many related approaches have been proposed, their classification accuracy was not satisfactory enough for practical applications. Opinion documents written in Korean are difficult to classify automatically by polarity because they often contain varied and ungrammatical words expressing subjective opinions. This paper proposes a new approach to classifying opinion documents that considers only the frequency of word patterns and excludes grammatical factors as much as possible. In the proposed method, we represent a document as a bag of words, apply a learning algorithm based on the frequency of word patterns, and finally decide the polarity of the document using a score function. We also present experimental results evaluating the accuracy of the proposed method.
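The pipeline the abstract outlines, a bag of words, per-class word-pattern frequencies, and a score function deciding polarity, can be sketched as a smoothed log-frequency-ratio score over single-word patterns (the toy data and names are illustrative, not the paper's):

```python
import math
from collections import Counter

def train_counts(docs):
    """docs: (text, polarity) pairs; count word-pattern frequencies per class."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in docs:
        counts[label].update(text.lower().split())
    return counts

def polarity_score(text, counts):
    """Sum of add-one-smoothed log frequency ratios; > 0 positive, < 0 negative."""
    vocab = set(counts["pos"]) | set(counts["neg"])
    n_pos = sum(counts["pos"].values()) + len(vocab)
    n_neg = sum(counts["neg"].values()) + len(vocab)
    score = 0.0
    for w in text.lower().split():
        p = (counts["pos"][w] + 1) / n_pos  # smoothed positive-class frequency
        q = (counts["neg"][w] + 1) / n_neg  # smoothed negative-class frequency
        score += math.log(p / q)
    return score

counts = train_counts([("good great tasty", "pos"), ("bad awful dirty", "neg")])
```

Because only word frequencies enter the score, the method is indifferent to grammar, which matches the abstract's motivation for handling ungrammatical Korean opinion text.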

Big Data Meets Telcos: A Proactive Caching Perspective

  • Bastug, Ejder;Bennis, Mehdi;Zeydan, Engin;Kader, Manhal Abdel;Karatepe, Ilyas Alper;Er, Ahmet Salih;Debbah, Merouane
    • Journal of Communications and Networks, v.17 no.6, pp.549-557, 2015
  • Mobile cellular networks are becoming increasingly complex to manage, while classical deployment/optimization techniques and current solutions (i.e., cell densification, acquiring more spectrum, etc.) are cost-ineffective and thus seen as stopgaps. This calls for the development of novel approaches that leverage recent advances in storage/memory, context-awareness, and edge/cloud computing, and that fall into the framework of big data. However, big data is itself a complex phenomenon to handle and comes with its notorious four Vs: velocity, veracity, volume, and variety. In this work, we address these issues in the optimization of 5G wireless networks via the notion of proactive caching at the base stations. In particular, we investigate the gains of proactive caching in terms of backhaul offloading and request satisfaction while tackling the large amount of available data for content popularity estimation. To estimate content popularity, we first collect users' mobile traffic data from several base stations of a Turkish telecom operator over intervals of several hours. An analysis is then carried out locally on a big data platform, and the gains of proactive caching at the base stations are investigated via numerical simulations. It turns out that several gains are possible depending on the level of available information and the storage size. For instance, with 10% of content ratings and 15.4 GB of storage (87% of the total catalog size), proactive caching achieves 100% request satisfaction and offloads 98% of the backhaul when considering 16 base stations.
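The caching gains described above can be illustrated with a toy sketch: cache the most popular contents that fit in the base station's storage, then measure the fraction of requests served locally instead of over the backhaul. Popularity is given directly here, whereas the paper estimates it from traffic data; all names and numbers are illustrative.

```python
def build_cache(sizes, popularity, capacity):
    """Greedily cache the most popular contents that fit within the storage capacity."""
    cache, used = set(), 0.0
    for item in sorted(popularity, key=popularity.get, reverse=True):
        if used + sizes[item] <= capacity:
            cache.add(item)
            used += sizes[item]
    return cache

def backhaul_offload_ratio(requests, cache):
    """Fraction of requests served from the base-station cache (offloaded from backhaul)."""
    return sum(r in cache for r in requests) / len(requests)

sizes = {"a": 1.0, "b": 1.0, "c": 1.0}
popularity = {"a": 10, "b": 5, "c": 1}  # e.g., estimated request counts per content
cache = build_cache(sizes, popularity, capacity=2.0)
ratio = backhaul_offload_ratio(["a", "a", "b", "c"], cache)
```

The trade-off the paper quantifies is exactly this: offload improves with storage size and with the accuracy of the popularity estimate.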

GAN-based Automated Generation of Web Page Metadata for Search Engine Optimization (검색엔진 최적화를 위한 GAN 기반 웹사이트 메타데이터 자동 생성)

  • An, Sojung;Lee, O-jun;Lee, Jung-Hyeon;Jung, Jason J.;Yong, Hwan-Sung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2019.05a, pp.79-82, 2019
  • This study designs and implements an automated tool that applies artificial intelligence techniques to search engine optimization (SEO). Traditional on-page SEO is limited by its reliance on the knowledge of webpage administrators. This paper therefore proposes a metadata generation system with three approaches to recommending metadata: i) downloading the metadata at the top of the webpage; ii) generating highly relevant terms using an attention-based bidirectional Long Short-Term Memory (LSTM); iii) learning through a Generative Adversarial Network (GAN) to enhance overall performance. The system is expected to be useful as an optimization tool that can evaluate and improve online marketing processes.


Development and Clinical Application of Real-Time Light-Guided Vocal Fold Injection (실시간 광유도 성대주입술의 개발과 임상적 적용)

  • Huh, Gene;Cha, Wonjae
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics, v.33 no.1, pp.1-6, 2022
  • Vocal fold injection (VFI) is widely accepted as a first-line treatment for unilateral vocal fold paralysis and other vocal fold diseases. Although VFI is advantageous for its minimal invasiveness and efficiency, the invisibility of the needle tip remains an essential handicap to precise localization. Real-time light-guided vocal fold injection (RL-VFI) is a novel technique developed under the concept of performing injection simultaneously with precise placement of the needle tip under light guidance. Ex vivo animal studies have confirmed that RL-VFI is technically feasible and that the needle can be injected from various directions. A further in vivo animal study confirmed the safety and feasibility of the procedure with various transcutaneous approaches. The RL-VFI device is now authorized for clinical use by the Ministry of Food and Drug Safety in South Korea and is being applied clinically to patients with safe and favorable outcomes. Several clinical studies are under way to establish the safety and efficiency of RL-VFI, which is expected to reduce the complication rate and improve functional voice outcomes. Furthermore, its intuitive guidance will help laryngologists overcome the steep learning curve.

A novel analytical evaluation of the laboratory-measured mechanical properties of lightweight concrete

  • S. Sivakumar;R. Prakash;S. Srividhya;A.S. Vijay Vikram
    • Structural Engineering and Mechanics, v.87 no.3, pp.221-229, 2023
  • Urbanization and industrialization have significantly increased the amount of solid waste produced in recent decades, posing considerable disposal problems and environmental burdens. Waste utilization in concrete has gained popularity among construction practitioners and researchers for the efficient use of resources and the transition to a circular economy in construction. This study employed Lytag aggregate, an environmentally friendly pulverized fuel ash-based lightweight aggregate, as a substitute for natural coarse aggregate, while fly ash, an industrial by-product, was used as a partial substitute for cement. Concrete mix M20 was tested with fly ash and Lytag lightweight aggregate, with fly ash replacement percentages of 5%, 10%, 15%, 20%, and 25%. Compressive Strength (CS), Split Tensile Strength (STS), and deflection were measured at these percentages after 56 days of testing on concrete cube, cylinder, and beam specimens. The results indicate that a 10% substitution of cement with fly ash, combined with the replacement of coarse aggregate by Lytag lightweight aggregate, produced concrete that performed well in terms of mechanical properties and deflection. Because cementitious composites vary in their characteristics as the environment changes, understanding their mechanical properties is crucial for safety; CS, STS, and deflection are essential properties of concrete. Machine learning (ML) approaches have been employed to predict the CS of concrete. The Artificial Fish Swarm Optimization (AFSO), Particle Swarm Optimization (PSO), and Harmony Search (HS) algorithms were investigated for predicting the outcomes, with AFSO achieving near-ideal values of the model weights. The minimum, maximum, sample median, and first and third quartiles were used as the basis for a boxplot, a standardized method of displaying a dataset that graphically shows the quantitative value distribution of a field. The correlation matrix and confidence interval were also represented graphically.
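The boxplot basis that the abstract describes (minimum, first and third quartiles, median, maximum) is the five-number summary, which can be computed as a quick sketch (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def five_number_summary(data):
    """Minimum, Q1, median, Q3, maximum: the values a boxplot is drawn from."""
    q1, med, q3 = np.percentile(data, [25, 50, 75])
    return float(np.min(data)), float(q1), float(med), float(q3), float(np.max(data))

summary = five_number_summary([1, 2, 3, 4, 5])
```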