• Title/Summary/Keyword: Spatial learning

Search Result 850

A Study on Mathematical Literacy as a Basic Literacy in the Curriculum (교육과정에서 기초소양으로써 수리 소양에 관한 연구)

  • Park, Soomin
    • Communications of Mathematical Education / v.37 no.3 / pp.349-368 / 2023
  • The 2022 revised curriculum highlighted the significance of mathematical literacy as a foundational competency that can be cultivated through the learning of various subjects, alongside language proficiency and digital literacy. However, because mathematical literacy lacks a precise definition, it is difficult to implement systematically across all subjects in the curriculum. The aim of this study is to clarify the definition of mathematical literacy in the curriculum through a literature review and to analyze how mathematical literacy is applied in other subjects, so that it can be systematically applied as a basic literacy in Korea's curriculum. To achieve this, the study first clarifies and categorizes the meaning of mathematical literacy through a comparative analysis of terms such as numeracy and mathematical competence. It then compares the categories of mathematical literacy identified in domestic and international curricula and analyzes the application of mathematical literacy in the curriculum of New South Wales (NSW), Australia, where mathematical literacy is reflected in the achievement standards of various subjects. Subdividing the meaning of mathematical literacy to understand each of its properties, and examining how it is applied to the curriculum, is expected to help construct a curriculum that reflects mathematical literacy in subjects other than mathematics.

Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.15-30 / 2023
  • Recently, various services using artificial intelligence (AI) have been emerging in the media field as well. However, most video editing, which involves finding an editing point and splicing the video, is still carried out manually, requiring a great deal of time and human resources. Therefore, this study proposes a methodology that can detect the editing points of a video according to whether the person in the video is speaking, using a Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment, so that the temporal and spatial changes of the face are reflected from the input video data. Then, a Video Swin Transformer-based model proposed in this study classifies the behavior of the person in the video. Specifically, the feature map generated by the Video Swin Transformer from the video data is combined with the facial keypoints detected through face alignment, and utterance is classified through convolution layers. In conclusion, the performance of the editing point detection model using facial keypoints proposed in this paper improved from 87.46% to 89.17% compared to the model without facial keypoints.
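The fusion step this abstract describes can be sketched as follows. The feature dimensions, the 68-point keypoint layout, and the single linear classifier (standing in for the paper's convolution layers and the actual Video Swin Transformer backbone) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def classify_utterance(video_feat, keypoints, w, b):
    """Fuse a pooled video feature vector with flattened facial keypoints
    and classify speaking vs. not-speaking with one linear layer + softmax."""
    fused = np.concatenate([video_feat, keypoints.ravel()])  # (D + 2K,)
    logits = fused @ w + b                                   # (2,)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                   # class probabilities

D, K = 768, 68                        # hypothetical feature dim / keypoint count
video_feat = rng.normal(size=D)       # stand-in for a pooled Video Swin output
keypoints = rng.normal(size=(K, 2))   # 68 (x, y) facial keypoints
w = rng.normal(size=(D + 2 * K, 2)) * 0.01
b = np.zeros(2)

probs = classify_utterance(video_feat, keypoints, w, b)
```

The key design point mirrored here is late fusion: appearance-motion features and explicit facial geometry are concatenated before the classification head, so the head can weigh both cues.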

A Study on Real-time Autonomous Driving Simulation System Construction based on Digital Twin - Focused on Busan EDC - (디지털트윈 기반 실시간 자율주행 시뮬레이션 시스템 구축 방안 연구 - 부산 EDC 중심으로 -)

  • Kim, Min-Soo;Park, Jong-Hyun;Sim, Min-Seok
    • Journal of Cadastre & Land InformatiX / v.53 no.2 / pp.53-66 / 2023
  • Recently, there has been significant interest in the development of autonomous driving simulation environments based on digital twins. In the development of such environments, much research has been conducted not only on validating the performance and functionality of autonomous driving but also on generating virtual training data for deep learning. However, such digital twin-based autonomous driving simulation systems require a significant amount of time and cost for system development and data construction. Therefore, in this research, we propose a method for rapidly designing and implementing a digital twin-based autonomous driving simulation system using only existing 3D models and a high-definition map. Specifically, we propose a method for integrating FBX 3D models and the NGII HD Map of the Busan EDC area into CARLA, and a method for adding and modifying CARLA functions. The results show that it is possible to rapidly design and implement the simulation system at low cost by using the existing 3D models and the NGII HD map. The results also show that our system can support various functions such as simulation scenario configuration, user-defined driving, and real-time simulation of traffic light states. We expect that the usability of the system will improve significantly when it is applied to a broader geographical area in the future.

Mapping Mammalian Species Richness Using a Machine Learning Algorithm (머신러닝 알고리즘을 이용한 포유류 종 풍부도 매핑 구축 연구)

  • Zhiying Jin;Dongkun Lee;Eunsub Kim;Jiyoung Choi;Yoonho Jeon
    • Journal of Environmental Impact Assessment / v.33 no.2 / pp.53-63 / 2024
  • Biodiversity holds significant importance within the framework of environmental impact assessment, being utilized in site selection for development, understanding the surrounding environment, and assessing the impact on species due to disturbances. The field has seen substantial research exploring new technologies and models to evaluate and predict biodiversity more accurately. While current assessments rely on data from fieldwork and literature surveys to gauge species richness indices, limitations in spatial and temporal coverage underscore the need for high-resolution biodiversity assessment through species richness mapping. In this study, leveraging data from the 4th National Ecosystem Survey and environmental variables, we developed a species distribution model using Random Forest. The model produced distribution maps for 24 mammalian species, from which the species richness index was used to generate a 100-meter resolution species richness map. The model showed notably high predictive accuracy, with an average AUC of 0.82. In addition, comparison with National Ecosystem Survey data reveals that the species richness distribution in the high-resolution mapping results follows a normal distribution; hence, it stands as highly reliable foundational data for environmental impact assessment. These research and analytical outcomes could serve as pivotal new reference materials for future urban development projects, offering insights for biodiversity assessment and habitat preservation efforts.
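The step from per-species distribution models to a richness map can be sketched as below. This is a minimal illustration assuming the common stacked-SDM approach (threshold each species' occurrence probability, then count species per cell); the grid size, threshold, and random probabilities are placeholders, not the study's data or its Random Forest outputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def richness_map(prob_stack, threshold=0.5):
    """Turn a stack of per-species occurrence probabilities with shape
    (n_species, H, W) into a species richness raster of shape (H, W)
    by counting the species predicted present in each grid cell."""
    return (prob_stack >= threshold).sum(axis=0)

n_species, H, W = 24, 10, 10                # 24 mammal species, toy 10x10 grid
prob_stack = rng.random((n_species, H, W))  # stand-in for per-species SDM outputs
richness = richness_map(prob_stack)
```

In practice each layer of `prob_stack` would be a model's prediction raster at the 100-meter resolution described in the abstract; the counting step itself is the same.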

Detecting high-resolution usage status of individual parcel of land using object detecting deep learning technique (객체 탐지 딥러닝 기법을 활용한 필지별 조사 방안 연구)

  • Jeon, Jeong-Bae
    • Journal of Cadastre & Land InformatiX / v.54 no.1 / pp.19-32 / 2024
  • This study examined the feasibility of image-based surveys by detecting facilities and agricultural land with the YOLO algorithm on drone images and comparing the detections with the legal land category. The YOLO algorithm detected objects corresponding to 96.3% of the buildings in the existing digital map. In addition, it detected 136 additional buildings that were not present in the digital map. For plastic greenhouses, a total of 297 objects were detected, but the detection rate was low for some greenhouses used for fruit trees. Agricultural land had the lowest detection rate. This is because agricultural land has a larger area and a more irregular shape than buildings, so inconsistency in the training data lowers accuracy relative to buildings. Therefore, segmentation-based detection, rather than box-shaped detection, is likely to be more effective for agricultural fields. Comparing the detected objects with the legal land category, some buildings were found to exist in agricultural and forest areas where it is difficult to site buildings. Linkage with administrative information seems necessary to determine whether these buildings are used illegally. At the current level, it is therefore possible to objectively determine the existence of buildings in areas where siting buildings is difficult.
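A detection rate like the 96.3% figure above is typically computed by matching detected boxes to reference (digital map) footprints via intersection-over-union. A minimal sketch, with toy boxes and an assumed IoU threshold of 0.5 (the paper's matching criterion is not stated in the abstract):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_rate(map_boxes, detected_boxes, thresh=0.5):
    """Fraction of reference footprints matched by at least one detection."""
    hits = sum(any(iou(m, d) >= thresh for d in detected_boxes)
               for m in map_boxes)
    return hits / len(map_boxes)

map_boxes = [(0, 0, 10, 10), (20, 20, 30, 30)]  # footprints from the digital map
detections = [(1, 1, 10, 10)]                   # hypothetical YOLO outputs
rate = detection_rate(map_boxes, detections)    # one of two footprints matched
```

Detections with no matching footprint (the 136 extra buildings in the abstract) would be the detections left unmatched by the same procedure.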

Development of Inquiry Activity Materials for Visualizing Typhoon Track using GK-2A Satellite Images (천리안 위성 2A호 영상을 활용한 태풍 경로 시각화 탐구활동 수업자료 개발)

  • Chae-Young Lim;Kyung-Ae Park
    • Journal of the Korean earth science society / v.45 no.1 / pp.48-71 / 2024
  • Typhoons are representative oceanic and atmospheric phenomena that cause interactions within the Earth system and have diverse influences. In recent decades, typhoons have tended to strengthen due to the rapidly changing climate. The 2022 revised science curriculum emphasizes the importance of teaching-learning activities that use advanced science and technology to cultivate digital literacy in citizens of the future society. It is therefore necessary to overcome the temporal and spatial limitations of textbook illustrations and to develop effective instructional materials using the global-scale big data handled in earth science. In this study, following the procedure of the PDIE (Preparation, Development, Implementation, Evaluation) model, inquiry activity materials were developed to visualize typhoon tracks using GK-2A image data. In the preparation stage, the 2015 and 2022 revised curricula and the inquiry activities of current textbooks were analyzed. In the development stage, the inquiry activities were organized into a series of processes for collecting, processing, visualizing, and analyzing observational data, and a GUI (Graphical User Interface)-based visualization program that can derive results with simple operations was created. In the implementation and evaluation stage, classes were conducted with students using both code and the GUI program, to compare the characteristics of each activity and confirm their applicability in the school field. The class materials presented in this study enable inquiry activities using actual observational data without professional programming knowledge, which is expected to contribute to students' understanding and digital literacy in the field of earth science.

3DentAI: U-Nets for 3D Oral Structure Reconstruction from Panoramic X-rays (3DentAI: 파노라마 X-ray로부터 3차원 구강구조 복원을 위한 U-Nets)

  • Anusree P.Sunilkumar;Seong Yong Moon;Wonsang You
    • The Transactions of the Korea Information Processing Society / v.13 no.7 / pp.326-334 / 2024
  • Extra-oral imaging techniques such as panoramic X-rays (PXs) and cone beam computed tomography (CBCT) are the most preferred imaging modalities in dental clinics owing to their patient convenience during imaging as well as their ability to visualize entire teeth information. PXs are preferred for routine clinical treatments, and CBCT for complex surgeries and implant treatments. However, PXs lack third-dimensional spatial information, whereas CBCT exposes the patient to a high radiation dose. When a PX is already available, it is beneficial to reconstruct the 3D oral structure from the PX to avoid further expense and radiation dose. In this paper, we propose 3DentAI, a U-Net-based deep learning framework for 3D reconstruction of oral structure from a PX image. Our framework consists of three modules: a reconstruction module based on an attention U-Net that estimates depth from a PX image, a realignment module that aligns the predicted flattened volume to the shape of the jaw using a predefined focal trough and ray data, and a refinement module based on a 3D U-Net that interpolates missing information to obtain a smooth representation of the oral cavity. Synthetic PXs obtained from CBCT by ray tracing and rendering were used to train the networks, without the need for paired PX and CBCT datasets. Our method, trained and tested on a diverse dataset of 600 patients, achieved performance superior to GAN-based models despite its low computational complexity.
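The idea of synthesizing training PXs from CBCT can be illustrated with a crude parallel-projection stand-in: integrating attenuation along rays through the volume yields a 2D radiograph-like image. The paper traces rays along a curved focal trough; here a straight grid axis and a random toy volume are assumptions made purely for illustration:

```python
import numpy as np

def synthesize_px(volume, axis=1):
    """Rough parallel-projection stand-in for ray-traced panoramic synthesis:
    integrate CBCT attenuation along rays (here, one grid axis) and
    normalize the projection to [0, 1]."""
    proj = volume.sum(axis=axis).astype(float)
    return (proj - proj.min()) / (proj.max() - proj.min() + 1e-8)

rng = np.random.default_rng(1)
cbct = rng.random((64, 64, 64))  # toy CBCT volume (depth, height, width)
px = synthesize_px(cbct)         # 2D synthetic projection
```

The benefit noted in the abstract follows from this setup: because each synthetic PX is rendered from a known CBCT volume, the network gets exact input-target pairs without needing real paired PX/CBCT scans.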

Enhancing A Neural-Network-based ISP Model through Positional Encoding (위치 정보 인코딩 기반 ISP 신경망 성능 개선)

  • DaeYeon Kim;Woohyeok Kim;Sunghyun Cho
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.81-86 / 2024
  • The image signal processor (ISP) converts RAW images captured by the camera sensor into user-preferred sRGB images. Although RAW images contain more information useful for image processing than sRGB images, they are rarely shared due to their large size. Moreover, the actual ISP process of a camera is not disclosed, making its inverse difficult to model. Consequently, research has been conducted on learning the conversion between sRGB and RAW. Recently, the ParamISP[1] model, which directly incorporates camera parameters (exposure time, sensitivity, aperture size, and focal length) to mimic the operations of a real camera ISP, was proposed, advancing beyond earlier simple network structures. However, existing studies, including ParamISP[1], do not consider the degradation caused by lens shading, optical aberration, and lens distortion, which limits their restoration performance. This study introduces positional encoding to enable a camera ISP neural network to better handle lens-induced degradations. The proposed positional encoding method is suitable for camera ISP networks that learn on image patches: by reflecting the spatial context of each patch within the full image, it allows more precise restoration than existing models.
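The motivation is that lens shading and distortion depend on where a patch sits in the frame, which a patch-based network cannot see without extra input. A common way to supply that information is a sinusoidal encoding of the patch's normalized position; the exact encoding, dimension, and frequency schedule below are generic assumptions, not the paper's specific design:

```python
import numpy as np

def patch_positional_encoding(y, x, dim=16):
    """Sinusoidal encoding of a patch's normalized (y, x) position in the
    full frame, so a patch-based ISP network can condition on where the
    patch came from (lens shading varies with distance from the center)."""
    enc = []
    for coord in (y, x):                   # encode each coordinate separately
        for i in range(dim // 4):          # dim//4 frequencies per coordinate
            freq = (2 ** i) * np.pi
            enc.extend([np.sin(freq * coord), np.cos(freq * coord)])
    return np.asarray(enc)                 # shape (dim,)

pe = patch_positional_encoding(0.25, 0.75, dim=16)
```

Such a vector would typically be concatenated with, or added to, the patch features before the network's restoration layers, letting the same weights behave differently near the image center and the corners.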

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount via selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore not-of-interest character types and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Not-of-interest character strings, such as the device type, manufacturer, manufacturing date, and specification, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network)-based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into spatial sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the spatial sequential information into character strings through time-series analysis mapping feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When a request from the master process is present in the input queue, the slave process converts the image into the device ID character string, the gas usage amount character string, and the position information of the strings, returns this information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal is clean image data; noise means images with a noise signal; reflex means images with light reflection in the gasometer region; scale means images with a small object size due to long-distance capturing; and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
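The master-slave, FIFO-queue flow described above can be sketched with Python's standard `queue` and `threading` modules. The recognition step is replaced by a hard-coded stand-in (the real slave runs the three neural networks on a GPU), and the file name and returned strings are hypothetical:

```python
import queue
import threading

def slave(in_q, out_q):
    """Slave worker: poll the input queue, run (stand-in) recognition,
    push the result to the output queue."""
    while True:
        image = in_q.get()
        if image is None:                    # shutdown sentinel from the master
            break
        # Stand-in for the 3-network pipeline (detector -> CNN -> BiLSTM):
        device_id, usage = "123456789012", "0842"
        out_q.put((image, device_id, usage))
        in_q.task_done()

in_q, out_q = queue.Queue(), queue.Queue()   # FIFO request/response queues
worker = threading.Thread(target=slave, args=(in_q, out_q))
worker.start()

in_q.put("gasometer_001.jpg")                # master pushes a reading request
result = out_q.get()                         # master collects the final result
in_q.put(None)                               # tell the worker to stop
worker.join()
```

Scaling to the stated ~700,000 requests per day amounts to running many such slave workers against the same input queue; `queue.Queue` preserves the FIFO ordering the abstract describes.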

Effect of Treatment with Docosahexaenoic Acid into N-3 Fatty Acid Deficient and Adequate Diets on Rat Brain and Liver Fatty Acid Composition (필수 지방산 조성이 다른 식이의 docosahexaenoic acid 투여가 흰쥐 뇌 및 간의 지방산 조성에 미치는 영향)

  • Lim, Sun-Young
    • Journal of Life Science / v.19 no.10 / pp.1417-1423 / 2009
  • Previous studies have suggested that docosahexaenoic acid (DHA) supplementation of an n-3 fatty acid deficient diet improved spatial learning performance, but there was no significant difference in brain-related function when DHA was added to an n-3 fatty acid adequate diet. Here, we investigated the effect of adding DHA to an n-3 fatty acid deficient or adequate diet on brain and liver fatty acid composition. On the second day after conception, Sprague Dawley dams were divided into four groups as follows: n-3 fatty acid deficient (Def), n-3 fatty acid deficient plus DHA (Def+DHA, 10.2% DHA), n-3 fatty acid adequate (Adq, 3.4% linolenic acid), and n-3 fatty acid adequate plus DHA (Adq+DHA, 3.31% linolenic acid plus 9.65% DHA). After weaning, male pups were fed the same diets as their respective dams until adulthood. In brain fatty acid composition, the Def group showed lower brain DHA (a 64% decrease), which was largely compensated for by an increase in docosapentaenoic acid (22:5n-6). Brain DHA in the Def+DHA group increased to almost the same extent as in the Adq and Adq+DHA groups, with no significant differences among them. Liver fatty acid composition showed a pattern similar to that of the brain, but liver DHA in the Def+DHA group was the highest percentage among the diet groups. In conclusion, n-3 fatty acid deficiency from gestation to adulthood leads to decreased brain DHA, which has been shown to be highly associated with poor spatial learning performance. Thus, adequate brain DHA levels are required for optimal nervous function.