• Title/Summary/Keyword: Computer-Aided Engineering (컴퓨터 이용 공학)


An Experiment for Surface Soil Moisture Mapping Using Sentinel-1 and Sentinel-2 Image on Google Earth Engine (Google Earth Engine 제공 Sentinel-1과 Sentinel-2 영상을 이용한 지표 토양수분도 제작 실험)

  • Jihyun Lee; Kwangseob Kim; Kiwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.599-608 / 2023
  • The increasing interest in satellite-based soil moisture data for applications in hydrology, meteorology, and agriculture has led to the development of methods for generating soil moisture maps at various resolutions. This study demonstrated the capability of generating soil moisture maps using Sentinel-1 and Sentinel-2 data provided by Google Earth Engine (GEE). The soil moisture map was derived from synthetic aperture radar (SAR) and optical images: Sentinel-1 analysis-ready SAR data in GEE were combined with a Sentinel-2-based normalized difference vegetation index (NDVI) and the Environmental Systems Research Institute (ESRI) land cover map. A soil moisture map was produced for a study area in Victoria, Australia and compared with field measurements obtained from a previous study. In the validation of the applied method, the comparative experiments showed a meaningful level of consistency, within 4-10%p, between the values obtained with the algorithm applied in this study and the field measurements, and a very high consistency, within 0.5-2%p, with satellite-based soil moisture data. Therefore, the public open data provided by GEE and the algorithm applied in this study can be used for high-resolution soil moisture mapping that represents regional land surface characteristics.
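
For readers unfamiliar with the workflow described above, the following is a minimal Earth Engine Python sketch of a Sentinel-1/Sentinel-2 soil-moisture workflow of this kind. It uses a simple change-detection style index rather than the paper's exact algorithm, and the region of interest and date range are placeholders, not the study's actual test-site configuration.

```python
# Minimal sketch (not the paper's exact algorithm): a change-detection style
# soil-moisture index from Sentinel-1 VV backscatter, with Sentinel-2 NDVI
# used as a simple vegetation mask. Region and dates are placeholders.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([142.0, -37.0, 143.0, -36.0])  # placeholder AOI (Victoria, AU)
start, end = '2021-01-01', '2021-12-31'

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(region)
      .filterDate(start, end)
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .select('VV'))

# Dry/wet reference backscatter over the period and the most recent scene.
vv_min = s1.min()
vv_max = s1.max()
vv_now = ee.Image(s1.sort('system:time_start', False).first())

# Relative soil-moisture index in [0, 1]: (sigma0 - sigma0_dry) / (sigma0_wet - sigma0_dry)
smi = vv_now.subtract(vv_min).divide(vv_max.subtract(vv_min)).rename('SMI')

# Sentinel-2 NDVI, used here to mask out densely vegetated pixels.
ndvi = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
        .filterBounds(region)
        .filterDate(start, end)
        .map(lambda img: img.normalizedDifference(['B8', 'B4']).rename('NDVI'))
        .median())

smi_masked = smi.updateMask(ndvi.lt(0.6))  # keep sparsely vegetated pixels only

task = ee.batch.Export.image.toDrive(
    image=smi_masked, description='soil_moisture_index',
    region=region, scale=20, maxPixels=1e9)
task.start()
```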

A Study on Biometric Model for Information Security (정보보안을 위한 생체 인식 모델에 관한 연구)

  • Jun-Yeong Kim; Se-Hoon Jung; Chun-Bo Sim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.1 / pp.317-326 / 2024
  • Biometric recognition is a technology that identifies a person by extracting information on the person's biometric and behavioral characteristics with a specific device. Cyber threats such as forgery, duplication, and hacking of biometric traits are increasing in the field of biometrics. In response, security systems are being strengthened and have become more complex, making them difficult for individuals to use. To address this, multimodal biometric models are being studied. Existing studies have suggested feature fusion methods, but comparisons between these methods are insufficient. Therefore, in this paper, we compared and evaluated fusion methods for multimodal biometric models using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. In the comparative evaluation, the EfficientNet-B7 model showed 98.51% accuracy and high stability with the 'Feature-Level' fusion method. However, because the EfficientNet-B7 model is large, studies on model lightweighting are needed for biometric feature fusion.
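
A minimal PyTorch sketch of the 'Feature-Level' fusion idea evaluated above: per-modality backbones extract feature vectors that are concatenated before a shared classifier. It uses torchvision's EfficientNet-B0 for brevity (the paper's best configuration used EfficientNet-B7) and is not the authors' implementation.

```python
# Feature-level fusion sketch: one CNN backbone per modality, concatenate the
# pooled feature vectors, then classify. Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class FeatureLevelFusion(nn.Module):
    def __init__(self, num_classes, backbone_fn=models.efficientnet_b0):
        super().__init__()
        # One backbone per modality (fingerprint, face, iris); weights not shared.
        self.backbones = nn.ModuleList()
        feat_dims = []
        for _ in range(3):
            m = backbone_fn(weights=None)
            feat_dims.append(m.classifier[1].in_features)
            m.classifier = nn.Identity()          # keep pooled features only
            self.backbones.append(m)
        self.head = nn.Linear(sum(feat_dims), num_classes)

    def forward(self, fingerprint, face, iris):
        feats = [b(x) for b, x in zip(self.backbones, (fingerprint, face, iris))]
        return self.head(torch.cat(feats, dim=1))  # feature-level fusion: concatenation

# Example: batch of 4 identities, three 224x224 RGB modality images each.
model = FeatureLevelFusion(num_classes=10)
logits = model(torch.randn(4, 3, 224, 224),
               torch.randn(4, 3, 224, 224),
               torch.randn(4, 3, 224, 224))
```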

A Study on Wearable Emotion Monitoring System Under Natural Conditions Applying Noncontact Type Inductive Sensor (자연 상태에서의 인간감성 평가를 위한 비접촉식 인덕티브 센싱 기반의 착용형 센서 연구)

  • Hyun-Seung Cho; Jin-Hee Yang; Sang-Yeob Lee; Jeong-Whan Lee; Joo-Hyeon Lee; Hoon Kim
    • Science of Emotion and Sensibility / v.26 no.3 / pp.149-160 / 2023
  • This study develops a time-varying system-based noncontact fabric sensor that can measure cerebral blood-flow signals, to explore the possibility of blood-flow signal detection and emotion evaluation. The textile sensor was implemented as a coil-type sensor by combining 30 strands of 40-denier silver thread and embroidering them with a computerized embroidery machine. For the cerebral blood-flow measurement experiment, subjects were asked to attach the coil-type sensor over the carotid artery area and to wear electrocardiogram (ECG) electrodes and a respiration (RSP) measurement belt. In addition, Doppler ultrasonography was performed with an ultrasonic diagnostic device to measure blood-flow velocity. Each subject wore a Meta Quest 2 headset, the blood-flow change signal was measured while the subject viewed manipulated visual stimuli, and the subject filled out an emotion-evaluation questionnaire. The measurement results show that the signal measured by the textile sensor changes together with the blood-flow velocity signal measured by Doppler ultrasonography. These findings verify that the cerebral blood-flow signal can be measured with a coil-type textile sensor. In addition, HRV measures extracted from the ECG and PLL (textile sensor) signals were calculated and compared for emotion evaluation. The comparison shows that the change in the ratio reflecting activation of the sympathetic and parasympathetic nervous systems under visual stimulation follows a similar tendency whether calculated from the textile sensor or from the ECG signal. In conclusion, the proposed time-varying system-based coil-type textile sensor can be used to study changes in cerebral blood flow and to monitor emotions.
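
The emotion evaluation above compares sympathetic/parasympathetic balance derived from HRV of the ECG and textile-sensor (PLL) signals. Below is a minimal sketch of the standard LF/HF-ratio computation from a beat-containing signal, assuming clean, prominent peaks; it is not the authors' processing pipeline.

```python
# Minimal sketch of a standard HRV LF/HF ratio from a beat-containing signal
# (ECG or the coil-type textile sensor output); not the authors' exact pipeline.
import numpy as np
from scipy.signal import find_peaks, welch

def lf_hf_ratio(signal, fs):
    """Return the LF/HF power ratio of heart-rate variability.

    signal : 1-D array containing clear beat peaks (e.g. ECG R-waves)
    fs     : sampling rate in Hz
    """
    # 1) Beat detection (assumes prominent peaks at least 0.4 s apart).
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs),
                          height=np.mean(signal) + np.std(signal))
    beat_times = peaks / fs
    rr = np.diff(beat_times)                      # R-R intervals in seconds

    # 2) Resample the irregular R-R series onto an even 4 Hz grid.
    t_rr = beat_times[1:]
    t_even = np.arange(t_rr[0], t_rr[-1], 1 / 4.0)
    rr_even = np.interp(t_even, t_rr, rr)

    # 3) Power spectral density of the tachogram.
    f, pxx = welch(rr_even - rr_even.mean(), fs=4.0, nperseg=min(256, len(rr_even)))

    # 4) Band powers: LF 0.04-0.15 Hz (mixed), HF 0.15-0.40 Hz (parasympathetic).
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf
```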

Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun; Cha, Kyung-Ae; Koh, Jaepil; Hong, Won-Kee
    • Journal of Service Research and Studies / v.14 no.1 / pp.13-26 / 2024
  • Multi-modal generation is the process of generating results from a variety of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition of a person's description to generate a montage image. While existing montage generation technology is based on the appearance of Westerners, the montage generation system developed in this paper trains a model on Korean facial features. It can therefore create more accurate and effective montage images of Korean faces from multi-modal Korean voice and text input. Since the developed montage generation app can be used to produce draft montages, it can dramatically reduce the manual labor of montage production personnel. For this purpose, we used persona-based virtual person montage data provided by the AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform that aims to provide a one-stop service by building the artificial intelligence training data necessary for the development of AI technologies and services. The image generation system was implemented using VQGAN, a deep learning model used to generate high-resolution images, and KoDALLE, a Korean-based image generation model. We confirmed that the trained AI model creates a montage image of a face very similar to the one described with voice and text. To verify the practicality of the developed montage generation app, 10 testers used it, and more than 70% responded that they were satisfied. The montage generator can be used in various fields, such as criminal investigation, to describe and visualize facial features.
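
Schematically, the described system chains speech recognition and text input into a single Korean description that drives the image generator. The sketch below shows only that orchestration; `transcribe_korean_speech` and `kodalle_generate_face` are hypothetical placeholder functions, since the abstract does not specify the actual speech-to-text model or the KoDALLE/VQGAN loading code.

```python
# Schematic sketch of the described speech/text-to-montage flow.
# `transcribe_korean_speech` and `kodalle_generate_face` are hypothetical
# placeholders for the STT model and the KoDALLE/VQGAN generator; they are
# not real APIs from the paper or from any specific library.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MontageRequest:
    audio_path: Optional[str] = None   # optional spoken description of the face
    text: str = ""                     # optional typed description of the face

def transcribe_korean_speech(audio_path: str) -> str:
    raise NotImplementedError("plug in a Korean speech-to-text model here")

def kodalle_generate_face(description: str):
    raise NotImplementedError("plug in the KoDALLE/VQGAN text-to-image model here")

def generate_montage(request: MontageRequest):
    # 1) Merge the two input modalities into one Korean textual description.
    parts = []
    if request.audio_path:
        parts.append(transcribe_korean_speech(request.audio_path))
    if request.text:
        parts.append(request.text)
    description = " ".join(parts)

    # 2) Generate a draft montage image from the merged description.
    return kodalle_generate_face(description)
```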

A Study on the Telemetry System for the Inhabitant Environment and Distribution of Fish-III -Oxygen, pH, Turbidity and Distribution of Fishes- (어류의 서식환경과 분포생태의 원격계측에 관한 연구-III -용존산소·pH 및 탁도와 어류의 분포생태-)

  • 신형일; 안영화; 신현옥
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.35 no.2 / pp.136-146 / 1999
  • A telemetry system for measuring oxygen, pH, and turbidity and for observing the distribution ecology of fish was constructed by the authors in order to support effective production and management in shallow-sea aquaculture and set-net fisheries. Experiments with the telemetry system were carried out at a culturing fishing ground on the coast of Sanyang-Myon, Kyoungsangnam-Do, and at a set-net fishing ground in Nungpo Bay, Kojedo, from October 1997 to June 1998. As a result, the techniques suggested in the telemetry system for finding the relationship between the physical and chemical environment of the sea and the distribution ecology of fish performed as intended, and the system could be operated in real time. This research can also provide baseline data for developing a hybrid system that unifies marine environment information and fisheries resource information in order to manage coastal fishing grounds effectively.


A Comparative Analysis of Korean and Chinese Medicinal Plant Resources and Traditional Knowledge Using Data Analysis (데이터 분석기법을 이용한 한국과 중국의 약용식물자원과 전통지식 정보 비교분석)

  • Na, Minho; Hong, Seong-Eun; Kim, Ki-Yoon; Cheong, Eun Ju
    • Journal of Korean Society of Forest Science / v.107 no.4 / pp.456-477 / 2018
  • We analyzed data on medicinal plants and the related traditional knowledge of Korea and China using data analysis methods. A total of 108 families, 214 genera, and 542 species were found in Korea, and 202 families, 660 genera, and 1,261 species in China; 86 families (79.6%) and 130 genera (60.7%) were common to both countries. Many species had more than one recorded use, but some had only a single record (32.7% of genera in Korea and 58.8% of genera in China). The most frequently appearing plant family in both countries was Compositae (Asteraceae) (8.4% in Korea and 10.7% in China), followed by Rosaceae and Leguminosae (Fabaceae). Plant parts were classified into 11 categories; roots were used most in Korea and whole plants in China. Usages were described with different terms for ailments or symptoms: 120 usages in Korea and 230 in China. In Korea, plants were mostly used for pain, digestive system disorders, and colds, whereas in China they were mostly used for clearing heat, digestive system disorders, and coughs. The relation between plant and ailment (symptom) for the top 10 plants differed between Korea and China even within the same plant family. We also analyzed the relations between plant species and parts used, and between plant parts and ailments (symptoms). With these data analysis methods, we were able to compile the medicinal plant resources data and identify the differences in plant resources, usage, and plant parts used. The results provide important information on the plant resources and related traditional knowledge of Korea for industrial use of plant resources, and facilitate planning a strategy to cope with the Nagoya Protocol in the future.
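
A small pandas sketch of the kind of frequency and cross-tabulation analysis described above, run on a hypothetical traditional-knowledge table with illustrative columns (species, family, part, ailment); it is not the authors' dataset or code.

```python
# Sketch of the frequency / cross-tabulation analysis described above, on a
# hypothetical traditional-knowledge table (rows and columns are illustrative).
import pandas as pd

records = pd.DataFrame([
    # species,                family,       part,          ailment
    ("Artemisia princeps",    "Compositae", "leaf",        "digestive system disorder"),
    ("Taraxacum platycarpum", "Compositae", "whole plant", "pain"),
    ("Rosa multiflora",       "Rosaceae",   "root",        "cold"),
], columns=["species", "family", "part", "ailment"])

# Share of records per family (e.g. Compositae was most frequent in both countries).
family_share = records["family"].value_counts(normalize=True) * 100

# Relation between plant part used and ailment treated.
part_vs_ailment = pd.crosstab(records["part"], records["ailment"])

print(family_share.round(1))
print(part_vs_ailment)
```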

Trends in LCD Research and Development (LCD 연구 개발 동향)

  • 이종천
    • The Magazine of the IEIE / v.29 no.6 / pp.76-80 / 2002
  • The phase transition and optical anisotropy of liquid crystals were first reported by F. Reinitzer and O. Lehmann in 1888 and 1889 in Monatsch. Chem. and Z. Physikal. Chem., respectively, but until the 1950s, after the end of World War II, liquid crystals were treated only as a subject of basic laboratory research. In 1963, Williams filed the first patent for a liquid crystal device, and in 1968 Heilmeier and colleagues at RCA invented the first DSM (Dynamic Scattering Mode) LCD (Liquid Crystal Display), exploiting the dynamic scattering phenomenon in which a transparent nematic liquid crystal becomes turbid when a low-frequency voltage is applied. Although commercialization failed because of the high drive voltage of over 150 V and excessive power consumption, the Guest-Host effect and the memory effect were discovered in the process. In the 1970s, liquid crystal materials stable at room temperature were synthesized (MBBA by H. Kelker, cyano-biphenyl liquid crystals by G. Gray), and together with peripheral technologies such as the CMOS transistor, transparent conductive films (ITO), and mercury batteries, LCD commercialization began in earnest. In 1971, M. Schadt, W. Helfrich, and J.L. Fergason invented the TN (Twisted Nematic) LCD, which was applied to electronic calculators and wristwatches, and in the late 1970s Sharp released a dot-matrix portable computer. Because such simple-drive TN LCDs had quality limitations for displaying graphic information, research on a-Si TFT (amorphous silicon thin film transistor) LCDs was begun in 1979 by Le Comber in the UK; the STN (Super Twisted Nematic) LCD was devised in 1983 by T.J. Scheffer, J. Nehring, and G. Waters; and ferroelectric LCDs, introduced by N. Clark and S. Lagerwall in 1980 and by K. Yossino in 1983, contributed greatly to increasing the information content of LCDs. Colorization was commercialized through A.G. Fischer's 1972 approach of attaching RGB (red, green, blue) filters outside the cell and T. Uchida's 1981 method of attaching RGB filters inside the cell. In 1985, J.L. Fergason invented the polymer-dispersed LCD; in the mid-1980s, prototype a-Si TFT LCDs capable of displaying moving images were developed; and from 1990 full-scale mass production began. In the early 1990s, driven by the colorization, enlargement, and quality improvement of STN LCDs, LCDs were widely adopted in notebook PCs, and in the late 1990s TFT LCDs came to dominate the notebook PC market thanks to their price competitiveness relative to display quality. Since then, enlarging TFT LCDs has become a key issue; in 1995 Samsung Electronics developed what was then the world's largest 22-inch TFT LCD. Poly-Si TFT LCDs were developed for higher resolution, digitizer-integrated LCDs broadened the range of applications, and for larger sizes Canon developed 14.8-inch and 21-inch FLCDs in 1994. Tiled LCD technology has also been developed for enlargement: in 1995 Sharp exhibited a 28-inch TFT LCD made by joining two 21-inch panels, in 1996 development of a 40-inch-class display joining four 21-inch panels was attempted, and recently Samsung Electronics developed a single-panel 40-inch TFT LCD based on improved device characteristics, better production equipment, and stable process control. For projection displays, rear-projection and front-projection systems from 25 to 100 inches using poly-Si TFT LCDs have been developed and are leading the large-screen TV market. With the arrival of the digital broadcasting era in the 21st century, home-appliance makers worldwide are concentrating on mass production to capture the so-called "wall-hanging TV" market, the next-generation ultra-thin TV market comprising plasma display panel (PDP) TVs, LCD TVs, and ferroelectric LCD (FLCD) TVs, which is expected to reach about 15 million units in 2005. Even when the wall-hanging TV market takes off in earnest, PDP TVs and LCD TVs are not expected to compete directly, because the prevailing view is that, once the digital TV market opens up, LCD TVs will lead the mid-to-large segment below 40 inches while PDP TVs will lead the large-screen segment above 40 inches. However, such direct-view large displays are still so expensive that replacing current CRT TVs is expected to take considerable time. A promising alternative is the high-resolution projection TV, which can deliver high-quality digital images at relatively low cost; DMD (Digital Micro-mirror Display), poly-Si TFT LCD, and LCOS (Liquid Crystal on Silicon) products are being commercialized for this purpose. With the development of the Internet and telecommunication technology, the market for portable displays is growing unexpectedly fast, and the required display quality is expanding from simple character display to high-resolution graphic moving images, color, and even three-dimensional images. As shown in Table 1, the LCD market is expected to keep growing in each application field, and the growth of new application markets can also be predicted to some extent. LCD R&D can therefore be divided into two broad directions: first, reducing cost and improving display quality to strengthen the competitiveness of LCD products currently in mass production; and second, developing new types of LCDs that replace existing products or create new markets. From this viewpoint, current LCD technology development can be classified as follows: 1) cost reduction, 2) performance improvement, and 3) development of new types of LCDs.


Utility-Based Video Adaptation in MPEG-21 for Universal Multimedia Access (UMA를 위한 유틸리티 기반 MPEG-21 비디오 적응)

  • 김재곤; 김형명; 강경옥; 김진웅
    • Journal of Broadcast Engineering / v.8 no.4 / pp.325-338 / 2003
  • Video adaptation in response to dynamic resource conditions and user preferences is required as a key technology to enable universal multimedia access (UMA) through heterogeneous networks by a multitude of devices in a seamless way. Although many adaptation techniques exist, the selection of appropriate adaptations among multiple choices that satisfy given constraints is often ad hoc. To provide a systematic solution, we present a general conceptual framework that models video entities, adaptations, resources, utilities, and the relations among them. It allows various adaptation problems to be formulated as resource-constrained utility maximization. We apply the framework to a practical case of dynamic bit-rate adaptation of MPEG-4 video streams by employing a combination of frame dropping and DCT coefficient dropping. Furthermore, we present a descriptor, which has been accepted as part of MPEG-21 Digital Item Adaptation (DIA), for supporting terminal and network quality of service (QoS) in an interoperable manner. Experiments demonstrate the feasibility of the presented framework using the descriptor.
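
The core formulation, choosing the adaptation operation that maximizes utility subject to a bit-rate constraint, can be sketched as a small search over operating points; the operating points and numbers below are illustrative, not values from the paper.

```python
# Sketch of resource-constrained utility maximization over adaptation operating
# points (frame dropping x DCT coefficient dropping). Numbers are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingPoint:
    frames_dropped: int      # e.g. level of B-frame dropping
    coeffs_dropped: int      # percentage of high-frequency DCT coefficients removed
    bitrate_kbps: float      # resource this point consumes
    utility: float           # e.g. predicted PSNR or subjective quality

def best_adaptation(points, bitrate_budget_kbps):
    """Return the feasible operating point with maximum utility."""
    feasible = [p for p in points if p.bitrate_kbps <= bitrate_budget_kbps]
    return max(feasible, key=lambda p: p.utility) if feasible else None

points = [
    OperatingPoint(0, 0, 1500.0, 38.0),
    OperatingPoint(1, 0, 1100.0, 35.5),
    OperatingPoint(0, 30, 1000.0, 34.0),
    OperatingPoint(1, 30,  750.0, 32.0),
]
print(best_adaptation(points, bitrate_budget_kbps=1200.0))
```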

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. IT facilities in particular behave irregularly because of their interdependence, which makes root causes difficult to identify. Previous studies on failure prediction in data centers treated each server as an isolated unit and predicted failures from its state alone, without assuming interaction among devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring inside servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring inside servers are difficult to determine, and adequate prevention has not yet been achieved, mainly because server failures do not occur in isolation: a failure on one server can trigger failures on other servers or be triggered by them. In other words, while existing studies analyzed failures under the assumption of a single server that neither affects nor is affected by other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on a specific piece of equipment, any failure occurring on another piece of equipment within 5 minutes was defined as a simultaneous failure. After constructing sequences of simultaneously failing devices, five devices that frequently failed together within the sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used to reflect the fact that each server contributes differently to a complex failure; this method improves prediction accuracy by giving larger weights to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were treated once as a single-server state and once as a multi-server state, and the results were compared. The second experiment improved prediction accuracy in the multi-server case by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted to have no failure even though failures actually occurred, whereas under the multi-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and prediction performance was superior overall when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
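
A minimal PyTorch sketch of the idea of weighting servers by their contribution to a failure: each server's metric time series is encoded with an LSTM, an attention layer weights the server representations, and a classifier predicts failure. This is an illustrative sketch under those assumptions, not the authors' exact Hierarchical Attention Network.

```python
# Minimal sketch: per-server LSTM encodings, attention over servers, and a
# binary failure classifier. Illustrative only, not the authors' exact model.
import torch
import torch.nn as nn

class ServerAttentionFailureModel(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores how much each server matters
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, n_servers, seq_len, n_features) of server resource metrics
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        server_repr = h[-1].reshape(b, s, -1)               # (batch, n_servers, hidden)
        weights = torch.softmax(self.attn(server_repr), 1)  # attention over servers
        context = (weights * server_repr).sum(dim=1)        # weighted combination
        return torch.sigmoid(self.classifier(context)).squeeze(-1)

# Example: 8 samples, 5 servers, 60 time steps, 12 resource metrics each.
model = ServerAttentionFailureModel(n_features=12)
prob_failure = model(torch.randn(8, 5, 60, 12))
```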

The preliminary study of developing computational thinking practice analysis tool and its implementation (컴퓨팅 사고 실천 분석도구 개발 및 이의 활용에 대한 기초연구)

  • Park, Young-Shin; Hwang, Jin-Kyung
    • Journal of the Korean Society of Earth Science Education / v.10 no.2 / pp.140-160 / 2017
  • The purpose of this study was to develop a computational thinking (CT) practice analysis tool that can be used to analyze CT practices, first by defining what CT practices are and then by identifying which components of CT are reflected in STEAM classes. Exploring the various kinds of CT practices that can be identified while applying the proposed tool to exemplary STEAM classes is another goal of this study. First, to answer the question "What is CT in science education?" and thereby develop the proposed CT practice analysis tool, three types of published documents on the definition of CT were considered as the main data in this study. In the first, "analysis tool development" part of this study, the following five elements were identified as the main components of the CT analysis tool: (1) connecting open problems with computing, (2) using tools or computers to develop a computing artifact, (3) the abstraction process, (4) analyzing and evaluating the computing process and artifact, and (5) communicating and cooperating. Based on the understanding that there is a consistent flow among the five components due to their interactions, a flow chart of CT practice was also developed. In the second, implementation part of this study, the proposed CT practice analysis tool was applied to one exemplary STEAM program. To select the candidate STEAM program, four selection criteria were identified. The proposed CT practice analysis tool was then applied to the selected STEAM program to determine the degree of CT practice reflected in the program and, furthermore, to suggest ways of improving the tool where it shows weak points. Based on the findings of this study, we suggest that an explicit definition of computational thinking will be helpful for integrating Technology and Engineering into STEAM education and will serve as a strong complement to reinforce STEAM education.