• Title/Summary/Keyword: learning through the image


Metaphor and Typeface Based on Children's Sensibilities for e-Learning

  • Jo, Mi-Heon;Han, Jeong-Hye
    • Journal of Information Processing Systems / v.2 no.3 s.4 / pp.178-182 / 2006
  • Children exhibit different behaviors, skills, and motivations. The main aim of this research was to investigate children's sensibility factors for icons, and to look for the best typeface for application to Web-Based Instruction (WBI) for e-Learning. Three types of icons were used to assess children's sensibilities toward metaphors: text-image, representational, and spatial mapping. Through factor analysis, we found that children exhibited more diverse reactions to the text-image and representational types of icons than to the spatial mapping type. Children commonly showed higher sensibility to the aesthetic factor than to the familiarity or brevity factors. In addition, we propose a collaborative-typeface system, which recommends the best typeface for children with regard to readability and the aesthetic factor in WBI. Based on these results, we venture some suggestions on icon design and typeface selection for e-Learning.
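
As an illustration of the factor-analysis step mentioned in the abstract above, the following minimal sketch runs scikit-learn's FactorAnalysis on hypothetical sensibility ratings. The rating items, scale, and three-factor setup are assumptions for illustration, not the authors' data or code.

```python
# A minimal sketch (not the authors' code) of factor analysis on hypothetical
# children's sensibility ratings for icons.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Hypothetical survey: 100 children rate 6 semantic-differential items per icon (1-5 scale).
ratings = rng.integers(1, 6, size=(100, 6)).astype(float)

fa = FactorAnalysis(n_components=3, random_state=0)
fa.fit(ratings)

# Loadings indicate how strongly each item is associated with each extracted factor
# (e.g., aesthetic, familiarity, and brevity factors in the paper's terminology).
print("Factor loadings (items x factors):")
print(fa.components_.T.round(2))
```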

A Deep Learning Model for Predicting User Personality Using Social Media Profile Images

  • Kanchana, T.S.;Zoraida, B.S.E.
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.265-271 / 2022
  • Social media is a form of internet-based communication in which users share information through content and images. Users' choice of profile image and the types of images they post can be closely connected to their personality, so posted images can serve as indicators of personality traits. The objective of this study is to predict the five-factor-model personality dimensions from profile images using deep learning and neural networks. We developed a deep-learning-based neural network framework for personality prediction, in which the personality types of the Big Five Factor model are quantified from user profile images. To measure effectiveness, we propose two convolutional neural network models to classify each user's personality and compare their performance in predicting personality traits from profile images. We found that the VGG-69 CNN model performed best, producing a classification accuracy of 91% for predicting user personality traits.
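
The abstract above describes CNN classification of Big Five personality classes from profile images. The sketch below shows one plausible setup with a VGG-style backbone from torchvision and a five-class head; the architecture, input size, and single-label formulation are assumptions, not the paper's model.

```python
# A minimal sketch, assuming a VGG-16 backbone and a 5-class personality head.
import torch
import torch.nn as nn
from torchvision import models

NUM_TRAITS = 5  # Big Five classes (assumed single-label formulation)

# weights=None keeps the sketch offline; pretrained weights would normally be loaded.
model = models.vgg16(weights=None)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_TRAITS)

dummy_profile_image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed profile photo
logits = model(dummy_profile_image)
print("predicted personality class:", logits.argmax(dim=1).item())
```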

Bio-Cell Image Segmentation based on Deep Learning using Denoising Autoencoder and Graph Cuts (디노이징 오토인코더와 그래프 컷을 이용한 딥러닝 기반 바이오-셀 영상 분할)

  • Lim, Seon-Ja;Vununu, Caleb;Kwon, Oh-Heum;Lee, Suk-Hwan;Kwon, Ki-Ryoug
    • Journal of Korea Multimedia Society / v.24 no.10 / pp.1326-1335 / 2021
  • As part of a cell-division analysis workflow, we propose a method for segmenting images generated by topography microscopes through deep-learning-based feature generation and graph segmentation. Because the hybrid vector shapes preserve the overall shape and boundary information of cells, most cell shapes can be captured without any post-processing burden. Results on NIH-3T3 and HeLa-S3 cells show satisfactory preservation of cell descriptions. Compared to other deep learning methods, the proposed cell image segmentation method does not require post-processing. It is also effective in preserving the overall morphology of cells and shows better results in terms of cell boundary preservation.
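
For context on the denoising-autoencoder component named in the title, the sketch below shows a small convolutional denoising autoencoder in PyTorch. It is illustrative only: the layer sizes are assumptions, and the graph-cut stage of the paper is not included.

```python
# A minimal sketch of a convolutional denoising autoencoder (not the authors' network).
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
clean = torch.rand(4, 1, 128, 128)            # stand-in for microscope cell images
noisy = clean + 0.1 * torch.randn_like(clean)

# Training objective: reconstruct the clean image from its noisy version.
loss = nn.MSELoss()(model(noisy), clean)
loss.backward()
print("reconstruction loss:", loss.item())
```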

An image-based deep learning network technique for structural health monitoring

  • Lee, Dong-Han;Koh, Bong-Hwan
    • Smart Structures and Systems / v.28 no.6 / pp.799-810 / 2021
  • When monitoring the structural integrity of a bridge using data collected through accelerometers, identifying the profile of the load exerted on the bridge by the vehicles passing over it becomes a crucial task. In this study, the speed and location of vehicles on the deck of a bridge are reconstructed from real-time video in order to implicitly associate the load applied to the bridge with the response from the bridge sensors, and an image-based deep learning network model is developed on this basis. Instead of directly measuring the load that a moving vehicle exerts on the bridge, the proposed method replaces the correlation between the movement of vehicles in CCTV images and the corresponding bridge response with a neural network model. Within the framework of input-output-based system identification, CCTV images secured from the bridge and acceleration measurements from a cantilevered beam are combined during the training of the neural network model. Since structural damage cannot be induced in an actual bridge, the study focuses on identifying local parameter changes by adding mass to a cantilevered beam in the laboratory. The study successfully identified the change in the material parameters of the beam using the deep-learning neural network model, and the method correctly predicted the acceleration response of the beam. The proposed approach can be extended to the structural health monitoring of actual bridges, and its sensitivity to damage can be further improved through optimization of the network training.
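
One way to read the input-output system-identification setup described above is as a sequence model mapping video-derived vehicle features to the measured acceleration response. The sketch below uses an LSTM for this mapping; the feature set, shapes, and architecture are assumptions for illustration, not the paper's network.

```python
# A minimal sketch: vehicle position/speed features (from video) in, acceleration out.
import torch
import torch.nn as nn

class LoadToResponseModel(nn.Module):
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted acceleration at each time step

    def forward(self, x):
        out, _ = self.rnn(x)
        return self.head(out).squeeze(-1)

model = LoadToResponseModel()
# Batch of 8 sequences, 200 time steps, features = (vehicle position, speed) per frame.
vehicle_features = torch.randn(8, 200, 2)
measured_accel = torch.randn(8, 200)       # accelerometer readings used as training targets

loss = nn.MSELoss()(model(vehicle_features), measured_accel)
loss.backward()
print("training loss:", loss.item())
```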

Detection of Surface Water Bodies in Daegu Using Various Water Indices and Machine Learning Technique Based on the Landsat-8 Satellite Image (Landsat-8 위성영상 기반 수분지수 및 기계학습을 활용한 대구광역시의 지표수 탐지)

  • CHOUNG, Yun-Jae;KIM, Kyoung-Seop;PARK, In-Sun;CHUNG, Youn-In
    • Journal of the Korean Association of Geographic Information Studies / v.24 no.1 / pp.1-11 / 2021
  • Detection of surface water features, including rivers, wetlands, and reservoirs, from satellite imagery can be utilized for the sustainable management and survey of water resources. This research compared water indices derived from multispectral bands with a machine learning technique for detecting surface water features from the Landsat-8 satellite image acquired over Daegu, through the following steps. First, the NDWI (Normalized Difference Water Index) image and the MNDWI (Modified Normalized Difference Water Index) image were separately generated from the multispectral bands of the given Landsat-8 satellite image, and two binary images were derived from these NDWI and MNDWI images, respectively. Then SVM (Support Vector Machine), a widely used machine learning technique, was employed to generate a land cover image, from which a third binary image was derived. Finally, error matrices were used to measure the accuracy of the three binary images in detecting surface water features. The statistical results showed that the binary image generated from the MNDWI image (84%) had lower accuracy than the binary images generated from the NDWI image (94%) and by SVM (96%). Some misclassification errors occurred in all three binary images, where land features were misclassified as surface water because of shadow effects.
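
The two water indices named above have standard definitions: NDWI = (Green - NIR) / (Green + NIR) and MNDWI = (Green - SWIR) / (Green + SWIR). The sketch below computes both on stand-in band arrays and adds a per-pixel SVM classification; the thresholds, pseudo-labels, and SVM settings are illustrative assumptions, not the paper's workflow.

```python
# A minimal sketch of NDWI/MNDWI computation and per-pixel SVM classification
# on random arrays standing in for Landsat-8 reflectance bands.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
green, nir, swir = (rng.random((100, 100)) for _ in range(3))   # stand-in bands

ndwi = (green - nir) / (green + nir + 1e-9)     # NDWI  = (Green - NIR)  / (Green + NIR)
mndwi = (green - swir) / (green + swir + 1e-9)  # MNDWI = (Green - SWIR) / (Green + SWIR)

water_ndwi = ndwi > 0.0     # binary water maps by simple thresholding
water_mndwi = mndwi > 0.0

# Per-pixel SVM on the raw band values, trained on a handful of labelled pixels.
X = np.stack([green.ravel(), nir.ravel(), swir.ravel()], axis=1)
train_idx = rng.choice(X.shape[0], size=200, replace=False)
y_train = (ndwi.ravel()[train_idx] > 0.0).astype(int)   # pseudo-labels for the sketch

svm = SVC(kernel="rbf").fit(X[train_idx], y_train)
water_svm = svm.predict(X).reshape(green.shape)
print("water pixels (SVM):", int(water_svm.sum()))
```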

A Through-focus Scanning Optical Microscopy Dimensional Measurement Method based on a Deep-learning Regression Model (딥 러닝 회귀 모델 기반의 TSOM 계측)

  • Jeong, Jun Hee;Cho, Joong Hwee
    • Journal of the Semiconductor & Display Technology / v.21 no.1 / pp.108-113 / 2022
  • Existing deep-learning-based measurement methods for through-focus scanning optical microscopy (TSOM) estimate the size of an object using classification. However, the measurement performance of such methods depends on the number of subdivided classes, and it is practically difficult to prepare training data at regular intervals for each class. We propose an approach that measures the size of an object in a TSOM image using a deep-learning regression model instead of classification. We applied the proposed method to estimate the top critical dimension (TCD) of through-silicon-via (TSV) holes using 2461 TSOM images, and the results were compared with the existing method. In our experiment, the average measurement error of our method was within 30 nm (1σ), which is 1/13.5 of the sampling distance of the applied microscope. Measurement errors decreased by 31% compared to the classification result. This result shows that the proposed method is more effective and practical than the classification method.
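
To make the classification-versus-regression point concrete, the sketch below shows a small CNN whose head outputs a single continuous critical-dimension value per image, trained with a regression loss. The architecture, input size, and value ranges are assumptions, not the paper's model.

```python
# A minimal sketch of CNN regression: one continuous TCD value per TSOM image.
import torch
import torch.nn as nn

regressor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1),                      # regression output: predicted TCD
)

images = torch.randn(8, 1, 64, 64)         # stand-in for TSOM images
tcd = torch.randn(8, 1) * 30 + 5000        # stand-in for measured top critical dimensions

loss = nn.MSELoss()(regressor(images), tcd)   # regression loss instead of class labels
loss.backward()
print("regression loss:", loss.item())
```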

Domain Adaptation Image Classification Based on Multi-sparse Representation

  • Zhang, Xu;Wang, Xiaofeng;Du, Yue;Qin, Xiaoyan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.5 / pp.2590-2606 / 2017
  • Research on classical image classification algorithms generally assumes that training data and testing data are drawn from the same domain with the same distribution. Unfortunately, in practical applications this assumption is rarely met. To address this problem, a domain adaptation image classification approach based on multi-sparse representation is proposed in this paper. The existence of intermediate domains between the source and target domains is hypothesized, and each intermediate subspace is modeled through online dictionary learning with target-data updating. On the one hand, the reconstruction error of the target data is guaranteed; on the other, the transition from the source domain to the target domain is kept as smooth as possible. An augmented feature representation produced by invariant sparse codes across the source, intermediate, and target domain dictionaries is employed for cross-domain recognition. Experimental results verify the effectiveness of the proposed algorithm.
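
The core building blocks named above, online dictionary learning and sparse coding, are sketched below with scikit-learn. This is only loosely in the spirit of the paper's intermediate-domain dictionaries; the data, dictionary size, and sparsity parameters are illustrative assumptions.

```python
# A minimal sketch: learn a dictionary online, then sparse-code source and target features.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
source = rng.standard_normal((200, 50))                   # source-domain features
target = source + 0.5 * rng.standard_normal((200, 50))    # shifted target-domain features

# Mini-batch (online-style) dictionary learning on a mix of both domains.
dico = MiniBatchDictionaryLearning(n_components=30, alpha=1.0, batch_size=20, random_state=0)
dico.fit(np.vstack([source, target]))

# Sparse codes over the shared dictionary act as a domain-bridging representation.
codes_src = sparse_encode(source, dico.components_, alpha=1.0)
codes_tgt = sparse_encode(target, dico.components_, alpha=1.0)
print(codes_src.shape, codes_tgt.shape)   # (200, 30) each
```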

Image Comparison Using Directional Expansion Operation

  • Yoo, Suk Won
    • International Journal of Advanced Culture Technology / v.6 no.3 / pp.173-177 / 2018
  • Masks are generated by adding different fonts of the learning-data characters at the pixel level, and the pixel values belonging to each mask are divided into three groups. Using directional expansion operators, we expand the text area of the test-data character in four diagonal directions to create boundary areas that distinguish it from the background area. The mask with the minimum average discordance is selected as the final recognition result by calculating the degree of discordance between the expanded test data and the masks. Image comparison using directional expansion operations recognizes test data more accurately through four subdivided recognition processes. It also makes it possible to expand the ranges of the three groups of mask pixel values more evenly, so that new fonts can easily be added to the given learning data.
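
One plausible reading of the four-direction expansion described above is morphological dilation with diagonal structuring elements, sketched below with SciPy. The 3x3 kernels and toy character image are assumptions for illustration, not the paper's operator definition.

```python
# A minimal sketch: expand a binary character image along four diagonal directions.
import numpy as np
from scipy.ndimage import binary_dilation

char = np.zeros((9, 9), dtype=bool)
char[3:6, 3:6] = True                      # toy "text area" of a character

# One structuring element per diagonal direction (NE, NW, SE, SW).
diag_kernels = {
    "NE": np.array([[0, 0, 1], [0, 1, 0], [0, 0, 0]], dtype=bool),
    "NW": np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=bool),
    "SE": np.array([[0, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=bool),
    "SW": np.array([[0, 0, 0], [0, 1, 0], [1, 0, 0]], dtype=bool),
}

expanded = {name: binary_dilation(char, structure=k) for name, k in diag_kernels.items()}
for name, img in expanded.items():
    print(name, "newly expanded pixels:", int(img.sum()) - int(char.sum()))
```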

A Study on Jaundice Computer-aided Diagnosis Algorithm using Scleral Color based Machine Learning

  • Jeong, Jin-Gyo;Lee, Myung-Suk
    • Journal of the Korea Society of Computer and Information / v.23 no.12 / pp.131-136 / 2018
  • This paper proposes a non-invasive computer-aided diagnostic algorithm. Currently, the clinical diagnosis of jaundice is performed through blood sampling. Unlike these older methods, the non-invasive method enables parents to measure a newborn's jaundice using only their mobile phone. The proposed algorithm achieves high accuracy and quick diagnosis through machine learning. Here, we used an SVM model trained on features extracted through image preprocessing, and we used international jaundice research data as the test data set. When applying our developed algorithm, diagnosing jaundice took about 5 seconds and showed a prediction accuracy of 93.4%. The software performs diagnosis in real time, minimizes the infant's discomfort through its non-invasive approach, and allows parents to easily make a preliminary diagnosis of newborn jaundice. In the future, we aim to use photographs of newborns with jaundice as our test data set for more accurate results.
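
To illustrate the SVM-on-color-features idea, the sketch below classifies hypothetical mean scleral color values with scikit-learn. The feature set (mean RGB of the scleral region), the synthetic data, and the labels are assumptions for illustration, not the paper's clinical data or pipeline.

```python
# A minimal sketch: SVM classification of jaundice from assumed scleral color features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical features: mean R, G, B of the segmented scleral region per image.
normal = rng.normal(loc=[200, 200, 200], scale=10, size=(50, 3))
jaundice = rng.normal(loc=[210, 200, 120], scale=10, size=(50, 3))  # yellowish sclera

X = np.vstack([normal, jaundice])
y = np.array([0] * 50 + [1] * 50)          # 0 = normal, 1 = jaundice (assumed labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("predicted label:", clf.predict([[205, 201, 125]]))   # a yellowish sample
```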

A Performance Comparison Study of Lesion Detection Model according to Gastroscopy Image Quality (위 내시경 이미지 품질에 따른 병변 검출 모델의 성능 비교 연구)

  • Yul Hee Lee;Young Jae Kim;Kwang Gi Kim
    • Journal of Biomedical Engineering Research / v.44 no.2 / pp.118-124 / 2023
  • Many recent studies have reported that the quality of the input learning data is vital to the detection of regions of interest. However, because of a lack of research on the quality of learning data for lesion detection in gastroscopy, we aimed to quantify the impact of quality differences in endoscopic images on lesion detection models using Image Quality Assessment (IQA) algorithms. Applying IQA methods such as BRISQUE (Blind/Referenceless Image Spatial Quality Evaluator), the Laplacian score, and PSNR (Peak Signal-to-Noise Ratio) to 430 high-quality images (HQD) and 430 low-quality images (PQD), we showed that there were significant differences between high- and low-quality images for lesion detection in terms of BRISQUE and Laplacian scores (p<0.05). The PSNR value was 10.62±1.76 dB on average, illustrating the lower lesion detection performance of PQD compared with HQD. In addition, the F1-score of HQD showed higher detection performance at 77.42±3.36%, while the F1-score of PQD was 66.82±9.07%. Through this study, we hope to contribute to future gastroscopy lesion detection assistance systems that incorporate IQA algorithms by emphasizing the importance of using high-quality data over lower-quality data.
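
Two of the quality measures named above can be sketched directly with OpenCV and NumPy: a Laplacian-based sharpness score (used here as one reading of the "Laplacian score") and PSNR. The stand-in frames and blur simulation are assumptions for illustration; BRISQUE requires a trained quality model and is omitted.

```python
# A minimal sketch of Laplacian-variance sharpness and PSNR on stand-in frames.
import cv2
import numpy as np

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # stand-in "high quality" frame
degraded = cv2.GaussianBlur(reference, (7, 7), 0)                  # simulated low-quality frame

# Variance of the Laplacian: a common focus/sharpness measure (lower = blurrier).
sharp_ref = cv2.Laplacian(reference, cv2.CV_64F).var()
sharp_deg = cv2.Laplacian(degraded, cv2.CV_64F).var()

# PSNR between the degraded frame and the reference, in dB.
psnr = cv2.PSNR(reference, degraded)

print(f"Laplacian variance: {sharp_ref:.1f} (reference) vs {sharp_deg:.1f} (degraded)")
print(f"PSNR: {psnr:.2f} dB")
```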