• Title/Summary/Keyword: Korea Image


Can We Hear the Shape of a Noise Source? (소음원의 모양을 들어서 상상할 수 있을까?)

  • Kim, Yang-Hann
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.14 no.7
    • /
    • pp.586-603
    • /
    • 2004
  • One of the subtle problems that make noise control difficult for engineers is "the invisibility of noise or sound." The visual image of noise often helps to determine an appropriate means for noise control. There have been many attempts to fulfill this rather challenging objective. Theoretical and numerical means of visualizing the sound field have been attempted, and as a result a great deal of progress has been accomplished, for example in the visualization of turbulent noise. However, most of the numerical methods are not quite ready to be applied practically to noise control issues. In the meantime, rapid progress has made visualization possible instrumentally, using multiple microphones and fast signal processing systems; although these systems are not perfect, they are useful. State-of-the-art systems are now available but still have many problematic issues: for example, how to interpret the visualized noise field. The constructed noise or sound picture always contains bias and random errors, and consequently it is often difficult to determine the origin of the noise and the spatial shape of the noise, as highlighted in the title. The first part of this paper introduces a brief history associated with "sound visualization," from Leonardo da Vinci's famous drawing of a vortex street (Fig. 1) to modern acoustic holography and what has been accomplished by a line or surface array. The second part introduces the difficulties and the recent studies. These include de-Dopplerization and de-reverberation methods. The former is essential for visualizing a moving noise source, such as cars or trains. The latter relates to what produces noise in a room or closed space. Another major issue associated with this sound/noise visualization is whether or not we can distinguish the mutual dependence of noise in space: for example, we are asked to answer the question, "Can we see two birds singing, or one bird with two beaks?"

Phosphorus Phases in the Surface Sediment of the South Sea (남해 표층 퇴적물에서의 인의 존재상)

  • SON Jaekyung;LEE Tongsup;YANG Han Soeb
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.32 no.5
    • /
    • pp.680-687
    • /
    • 1999
  • To understand the role of shelf sediment in the phosphorus biogeochemical cycle, we carried out sequential sediment extraction (SEDEX) of P and porewater analysis on 14 core samples collected in the South Sea of Korea. SEDEX separates P into five phases, which are grouped here into two categories: reactive P (loosely sorbed P and Fe-bound P) and refractory P (detrital inorganic P, authigenic mineral P, and organic P). Total P concentrations decrease with sediment depth in all samples as a result of dissolution into porewater. Reactive P comprises about 20-50% of total P, and iron-bound P is the major form, constituting 70-80% of the reactive P pool. Iron-bound P decreases sharply with depth. Depth profiles of dissolved P concentration in porewater are a mirror image of iron-bound P, revealing the role of FeOOH as a regulator of reactive P supply to the overlying water column. Authigenic mineral P constitutes less than 5% of total P; thus removal of reactive P by conversion into refractory P appears inefficient in shelf sediment. This implies that continental shelf sediment sequesters P temporarily rather than permanently. Results show local variation. The Nakdong estuary, which receives a large amount of terrigenous input, shows the highest concentrations of total P and reactive P. Here iron oxyhydroxides at the surface sediment control the water-column flux of P from the sediment. Although total P content at the surface is comparable (500-600 μg·g⁻¹) between the South Sea and the East China Sea, the former contains more iron-bound P and less detrital inorganic P than the latter. The difference seems due in part to particle texture and in part to biological productivity, which depends roughly on the distance from land.


Assembly and Testing of a Visible and Near-infrared Spectrometer with a Shack-Hartmann Wavefront Sensor (샤크-하트만 센서를 이용한 가시광 및 근적외선 분광기 조립 및 평가)

  • Hwang, Sung Lyoung;Lee, Jun Ho;Jeong, Do Hwan;Hong, Jin Suk;Kim, Young Soo;Kim, Yeon Soo;Kim, Hyun Sook
    • Korean Journal of Optics and Photonics
    • /
    • v.28 no.3
    • /
    • pp.108-115
    • /
    • 2017
  • We report the assembly procedure and performance evaluation of a visible and near-infrared spectrometer for the wavelength region of 400-900 nm, which is later to be combined with fore-optics (a telescope) to form an f/2.5 imaging spectrometer with a field of view of ±7.68°. The detector at the final image plane is a 640×480 charge-coupled device with a 24 μm pixel size. The spectrometer is in an Offner relay configuration consisting of two concentric, spherical mirrors, the secondary of which is replaced by a convex grating mirror. A double-pass test method with an interferometer is often applied in the assembly of precision optics, but it was excluded from our study due to a large residual wavefront error (WFE) in the optical design of 210 nm (0.35λ at 600 nm) root-mean-square (RMS). We therefore used a single-pass test method with a Shack-Hartmann sensor. The final assembly was tested to have an RMS WFE increase of less than 90 nm over the entire field of view, a keystone of 0.08 pixels, a smile of 1.13 pixels, and a spectral resolution of 4.32 nm. Through this procedure, we confirmed the validity of using a Shack-Hartmann wavefront sensor to monitor alignment in the assembly of an Offner-like spectrometer.

A Study on the Direction of Human Identity and Dignity Education in the AI Era. (AI시대, 인간의 정체성과 존엄성 교육의 방향)

  • Seo, Mikyoung
    • Journal of Christian Education in Korea
    • /
    • v.67
    • /
    • pp.157-194
    • /
    • 2021
  • The issue of AI's ethical consciousness has been constantly on the rise. Like a child, AI learns and imitates every behavior human beings exhibit. Therefore, the ethical consciousness we currently demand from AI is first the ethical consciousness required of humans, and at its center is human dignity. Thus, this study analyzed human identity and its problems in light of the development of AI technology, offered an apologetic for the theological premises and characteristics of human dignity, and sought the direction of human dignity education as follows. First, this study discussed the development of AI and its relation to human beings. The development of AI technology has led to the sharing of "reason or intelligence," long regarded as the exclusive property of mankind, with machines. This raised the question of what superior humanity would remain to distinguish humans from AI machines. Second, this study discussed transhumanism and human identity. Transhumanism has argued for the combination of AI machines and humans in order to improve inefficient human intelligence and human capabilities. However, the combination of AI machines with humans raises the issue of human identity. In the AI era, human identity means believing the thoughts God had when He created us. Third, this study offered an apologetic for the theological premise and characteristics of human dignity. Human dignity has become a key concept of constitutions and international human rights treaties around the world. Nonetheless, the declarative conviction that humans are dignified is difficult to understand apart from its Christian theological premise. The theological premise of human dignity lies in the fact that humans are dignified beings granted life by the Heavenly Father. This nature lies in a longing for "goodness" and "eternity," the pursuit of beauty, and being happy in relationship with others. Fourth, this study presented the direction of human dignity education. Human dignity education must awaken learners to what human identity is, how human beings were created, and how precious they are. Furthermore, it must lead them to consciously ponder and accept these highest values. That is education in human identity, and its core is that regardless of circumstances - the wealth gap, knowledge level, skin color, gender, age, disability, etc. - all people are made in God's image and for the glory of God, and are thereby very important to God.

Analysis of Polar Region-Related Topics in Domestic and Foreign Textbooks (국내외 교과서에 수록된 극지 관련 내용 분석)

  • Chung, Sueim;Choi, Haneul;Choi, Youngjin;Kang, Hyeonji;Jeon, Jooyoung;Shin, Donghee
    • Journal of the Korean earth science society
    • /
    • v.42 no.2
    • /
    • pp.201-220
    • /
    • 2021
  • The objective of this study is to increase awareness and interest regarding polar science and thereby aid in establishing the concept and future direction of polar literacy. To analyze the current status, textbooks based on the common school curriculum pertaining to polar topics were reviewed. Six countries that actively conduct polar science, namely Korea, France, Japan, Germany, the United States, and the United Kingdom, were chosen. Subsequently, 402 cases in 110 science and social studies (geography) textbooks of these countries were analyzed through both quantitative and qualitative methods. Based on the obtained results, the importance of polar research in geoscience education and the need for spreading awareness regarding polar research as an indicator of global environmental changes were examined. It was found that the primary polar topics described in the textbooks are polar glaciers, polar volcanism, solid geophysics, polar infrastructure, and preservation of geological resources and heritage. This demonstrates that the polar region is a field of research with important clues to Earth's past, present, and future environments and is also a good teaching subject for geological education. However, an educational approach is needed for systematically laying emphasis on polar research. The implications of this study are manifold, such as the establishment of a cooperative system between polar scientists and educators, extraction of core concepts for polar literacy and content reconstruction, discovery of new polar topics associated with the curriculum, diversification of forms of presentation in textbooks, and development of an affective image that is based on correct cognitive understanding. Furthermore, through the continuance of polar topics in textbooks, students can improve their awareness regarding polar literacy and polar science culture, which in turn will serve as the driving force for sustainable polar research in the future.

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.7
    • /
    • pp.271-278
    • /
    • 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems have difficulty detecting intelligent attacks that deviate from stored patterns. In an attempt to solve this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based systems depending on the installation location. Unlike network-based systems, host-based intrusion detection systems have the disadvantage of having to observe the system inside and out as a whole, but they have the advantage of detecting intrusions that a network-based system cannot. Therefore, in this study we investigated a host-based intrusion detection system. To evaluate and improve the performance of the model, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS) published in 2018. For the performance evaluation, the 1D vector data were converted into 3D image data so that the similarity of samples could be confirmed and each sample identified as normal or abnormal. Deep learning models also have the drawback of needing to be retrained every time a new cyber attack method appears, which is inefficient because learning from a large amount of data takes a long time. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) using the Few-Shot Learning method, which shows excellent performance by learning from a small amount of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images. Accuracy was calculated using the Few-Shot Learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) and Siamese-CNN were compared. Measurement of the Accuracy, Precision, Recall, and F1-Score indices confirmed that the recall of the proposed Siamese-CNN model was about 6% higher than that of the Vanilla-CNN model.
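The Siamese similarity idea described in the abstract above can be sketched in a few lines. This is a minimal illustration, not the paper's model: the shared embedding below is a fixed linear map standing in for the trained CNN, and the weights, threshold, and sample vectors are all invented for the example.

```python
# Siamese scoring sketch: both inputs pass through the SAME embedding,
# and a distance between the embeddings gives a similarity score.

def embed(vec, weights):
    """Shared embedding: dot product of the input with each weight row."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def similarity(a, b, weights):
    """Similarity score: negative L1 distance between shared embeddings."""
    ea, eb = embed(a, weights), embed(b, weights)
    return -sum(abs(x - y) for x, y in zip(ea, eb))

def same_attack_type(a, b, weights, threshold=-1.0):
    """Pairs scoring above the threshold are judged the same attack type."""
    return similarity(a, b, weights) >= threshold

# Toy vectors standing in for image-encoded LID-DS samples (invented).
WEIGHTS = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
normal_a = [1.0, 2.0, 0.0]
normal_b = [1.0, 2.0, 0.1]
attack = [5.0, 0.0, 3.0]

print(same_attack_type(normal_a, normal_b, WEIGHTS))  # similar pair -> True
print(same_attack_type(normal_a, attack, WEIGHTS))    # dissimilar pair -> False
```

In the paper's setting the embedding is learned so that same-class pairs score high, which is what allows classification from only a few examples per attack type.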

Real-time CRM Strategy of Big Data and Smart Offering System: KB Kookmin Card Case (KB국민카드의 빅데이터를 활용한 실시간 CRM 전략: 스마트 오퍼링 시스템)

  • Choi, Jaewon;Sohn, Bongjin;Lim, Hyuna
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.1-23
    • /
    • 2019
  • Big data refers to data that is difficult to store, manage, and analyze with existing software. As consumers' changing lifestyles increase the size and variety of their needs, companies invest a great deal of time and money in understanding those needs. Companies in various industries utilize big data to improve their products and services, analyze unstructured data, and respond to products and services in real time. The financial industry operates decision support systems that use financial data to develop financial products and manage customer risk. By using big data, financial institutions can effectively create added value along the value chain and develop more advanced customer relationship management (CRM) strategies. Financial institutions can utilize the purchase data and unstructured data generated by credit cards, making it possible to identify and satisfy customers' desires. CRM has become a granular process that can be measured in real time as it has grown together with information and knowledge systems. With the development of information services and CRM, platforms have changed, and it has become possible to meet consumer needs in various environments. Recently, as consumer needs have diversified, more companies provide systematic marketing services using data mining and advanced CRM techniques. KB Kookmin Card, which started as a credit card business in 1980, stabilized its processes and computer systems early and actively introduced new technologies and systems. In 2011, the bank and credit card businesses were separated, and the company led the 'Hye-dam Card' and 'One Card' markets, which deviated from existing concepts. In 2017, the total use of domestic credit cards and check cards grew by 5.6% year-on-year to 886 trillion won. In 2018, the company received a long-term credit rating of AA+, confirming through effective marketing strategies and services that its credit rating was at the top of the industry. At present, KB Kookmin Card emphasizes strategies to meet the individual needs of customers and to maximize the lifetime value of consumers by utilizing customers' payment data. KB Kookmin Card combines internal and external big data, conducts marketing in real time, and has built a monitoring system. Using customer card information, it has built a marketing system that detects real-time behavior from big data such as homepage visits and purchase history. The system captures customer action events in real time and executes marketing based on the stores, locations, amounts, and usage patterns of card transactions. The company has created more than 280 scenarios based on the customer life cycle and conducts marketing plans that accommodate various customer groups in real time. It operates the Smart Offering System, a highly efficient marketing management system that detects customers' card usage, behavior, and location information in real time and provides more refined services by combining with various apps. This study traces the change from traditional CRM to the current CRM strategy, examines the current strategy through KB Kookmin Card's big data utilization and marketing activities, and proposes a marketing plan for KB Kookmin Card's future CRM strategy. For the success and continuous growth of the Smart Offering System, KB Kookmin Card should invest in securing increasingly sophisticated ICT technology and human resources. It is necessary to establish a strategy for securing profit from a long-term perspective and to proceed systematically. In particular, given current concerns over privacy violations and personal information leakage, efforts should be made to win customers' acceptance of marketing that uses customer information and to build a corporate image that emphasizes security.
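The real-time scenario matching described in the abstract above can be illustrated with a toy sketch: transaction events are checked against predefined scenarios (the abstract mentions more than 280), and any matching scenario triggers an offer. The scenario names, fields, and thresholds below are invented for illustration and are not KB Kookmin Card's actual rules.

```python
# Toy rule set standing in for life-cycle marketing scenarios (invented).
SCENARIOS = [
    {"name": "coffee_lover", "category": "cafe", "min_amount": 5},
    {"name": "big_spender", "category": "any", "min_amount": 500},
]

def match_scenarios(event, scenarios=SCENARIOS):
    """Return the names of all scenarios triggered by one transaction event."""
    hits = []
    for s in scenarios:
        category_ok = s["category"] in ("any", event["category"])
        amount_ok = event["amount"] >= s["min_amount"]
        if category_ok and amount_ok:
            hits.append(s["name"])
    return hits

print(match_scenarios({"category": "cafe", "amount": 7}))      # ['coffee_lover']
print(match_scenarios({"category": "grocery", "amount": 600}))  # ['big_spender']
```

A production system would evaluate such rules against a live event stream and combine them with location and app data, but the match step itself is this simple filter.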

An essay on appraisal method over official administration records ill-balanced. -For development of appraisal process and method over chosun government-general office records- (불균형 잔존 행정기록의 평가방법 시론 - 조선총독부 공문서의 평가절차론 수립을 위하여 -)

  • Kim, Ik-Han
    • The Korean Journal of Archival Studies
    • /
    • no.13
    • /
    • pp.179-203
    • /
    • 2006
  • This study develops an appraisal process and method for official administrative documents that have survived unevenly, such as the official documents of the Government-General of Chosun (the Japanese colonial government, 1910-1945). First, the existing appraisal theories are recomposed. Schellenberg's appraisal theory focuses on evaluating the value of the records themselves, whereas functional appraisal theory attaches importance to the operational activities that bring the records into being. Given that a record is a re-presentation of operational activities, however, the two are the same at the level of philosophy. Therefore, if the process and method are properly designed, a composite of the two - operational activities and records - can be used. Likewise, curve-based methods have strengths in the macro and balanced aspects while absolute methods are strong in the micro aspect, so both methodologies can be applied alternately in this study. In terms of specific appraisal methodology, the existing appraisal theories are thus concluded to be mutually complementary, able to be combined in various forms according to the characteristics of the object and its situation. Especially in the case of this article, which deals with unevenly surviving official documents, it is more appropriate to combine the process with suitable methods than to establish a method and process from a single theory. To appraise the official documents of the Government-General of Chosun, a macro appraisal of value should be conducted by understanding the system, functions, and their historical-cultural evolution, after analyzing the disposal authority. From this, the records are mapped so that organization-function maps are constructed according to the value rank of functions and sub-functions. After this, an appraisal strategy is established that considers the internal environment of archival agencies: micro appraisal for the large quantity of surviving records, and the supply of other meaning - for example, production of oral resources - for the small quantity of surviving records. The study has not yet reached the following aspects: function analysis, historical decoding techniques, curve valuation of the records, the official gazette of the Government-General of Chosun, analysis methods and processes for other historical materials, and presentation of an appraisal output image. As it stands, this is simply a proposal, and the above-mentioned shortcomings should be filled through future studies.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field was dominated by methods that determined whether an abnormality existed based on statistics derived from specific data. This was possible because data dimensions were simple in the past, so classical statistical methods worked effectively. However, as the characteristics of data have become complex in the era of big data, it has become difficult to accurately analyze and predict industrial data in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore used. However, supervised models predict test data accurately only when the class distribution is balanced, and most data generated in industry have imbalanced classes, so their predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built from convolutional neural networks that performs anomaly detection on medical images. In contrast, research on anomaly detection for sequence data with generative adversarial networks is scarce compared to that for image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that much remains to be tried in the anomaly classification of sequence data with generative adversarial networks. To learn sequence data, the generative adversarial network is built from LSTMs: the generator's two stacked LSTMs have 32-dim and 64-dim hidden unit layers, and the discriminator's LSTM has a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from entropy values of the probability of the actual data, but in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variable was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments, and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it can learn the data distribution from real categorical sequence data, it is not swayed by a single normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%. Experiments were also conducted to show how much performance changes with differences in the latent-variable optimization structure; sensitivity improved by about 1%. These results offer a new perspective on optimizing latent variables, which had previously received relatively little attention.
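The feature-matching anomaly score mentioned in the abstract above can be sketched in miniature. This is an illustrative stand-in, not the paper's model: `features` below computes simple running statistics in place of a discriminator LSTM's intermediate layer, and all sample sequences are invented.

```python
# Feature-matching score sketch (after Salimans et al., 2016): compare the
# discriminator's intermediate features of a real sample with those of its
# generated reconstruction; a large gap means the sample looks anomalous.

def features(seq):
    """Stand-in for a discriminator's intermediate features:
    mean and variance of the sequence."""
    n = len(seq)
    mean = sum(seq) / n
    var = sum((x - mean) ** 2 for x in seq) / n
    return (mean, var)

def anomaly_score(real_seq, generated_seq):
    """L1 distance between feature vectors of a real sequence and a
    generated reconstruction."""
    fr, fg = features(real_seq), features(generated_seq)
    return sum(abs(a - b) for a, b in zip(fr, fg))

normal = [1, 2, 1, 2, 1, 2]
recon = [1, 2, 1, 2, 2, 1]   # generator reconstructs normal data well
anomalous = [9, 0, 9, 9, 0, 9]

# Normal data stays close to its reconstruction; anomalies do not.
print(anomaly_score(normal, recon) < anomaly_score(anomalous, recon))  # True
```

In the actual model, the reconstruction is found by optimizing the latent variable for each test sequence, which is the step the paper redesigns with an LSTM.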

The Influence of Number of Targets on Commonness Knowledge Generation and Brain Activity during the Life Science Commonness Discovery Task Performance (생명과학 공통성 발견 과제 수행에서 대상의 수가 공통성 지식 생성과 뇌 활성에 미치는 영향)

  • Kim, Yong-Seong;Jeong, Jin-Su
    • Journal of Science Education
    • /
    • v.43 no.1
    • /
    • pp.157-172
    • /
    • 2019
  • The purpose of this study is to analyze the influence of the number of targets on commonness knowledge generation and brain activity during the performance of life science commonness discovery tasks. Thirty-five pre-service life science teachers participated. The study used a block design for EEG recording, and EEGs were collected while the subjects performed the commonness discovery tasks. The sLORETA method and the relative power spectrum analysis method were used to analyze differences in brain activity and the roles of activated cortical and subcortical regions according to the difficulty of the commonness discovery task. In the case of the theta band, theta activity decreased significantly in the frontal lobe and increased in the occipital lobe when the difficult task was performed compared with the easy task. Alpha activity decreased significantly in the frontal lobe when performing the difficult task. Beta activity decreased significantly in the frontal, parietal, and occipital lobes when performing the difficult task. Finally, gamma activity decreased in the frontal lobe and increased in the parietal and temporal lobes when performing the difficult task compared with the easy task. The difficulty of the commonness discovery task was shown to affect the cingulate gyrus, cuneus, lingual gyrus, posterior cingulate, precuneus, and sub-gyral regions. Therefore, the difficulty of the commonness discovery task appears to affect the processes of integrating visual and location information extracted from images, comparing the attributes of objects, selecting necessary information, holding the selected information in visual working memory, and perception.