• Title/Summary/Keyword: image generation system (이미지 생성 시스템)


Improved Anatomical Landmark Detection Using Attention Modules and Geometric Data Augmentation in X-ray Images (어텐션 모듈과 기하학적 데이터 증강을 통한 X-ray 영상 내 해부학적 랜드마크 검출 성능 향상)

  • Lee, Hyo-Jeong;Ma, Se-Rie;Choi, Jang-Hwan
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.55-65 / 2022
  • Recently, deep learning-based automated systems for identifying and detecting landmarks have been proposed. Training such a deep learning-based model without overfitting requires a large amount of image and labeling data. Conventionally, an experienced reader manually identifies and labels the landmarks in a patient's image; however, such measurement is not only expensive but also poorly reproducible, so the need for an automated labeling method has been raised. In addition, because an X-ray image superimposes the various human tissues along the path of the photons, landmarks are harder to identify than in a natural image or a 3D imaging modality. In this study, we propose a geometric data augmentation technique that enables the generation of a large amount of labeled data from X-ray images. We also implemented and applied various attention techniques to improve the detection performance for 16 major landmarks in the skull, and from these results we present the optimal attention mechanism for landmark detection. Finally, among the major cranial landmarks, we derive the markers that can be detected stably; these markers are expected to have high potential for clinical application.
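A minimal sketch of the label-preserving geometric augmentation idea described above, assuming landmarks are stored as (x, y) pixel coordinates; the function name, parameter ranges, and sample arrays are illustrative, since the abstract does not specify the paper's exact transforms:

```python
# Geometric augmentation sketch: apply one random affine transform to an
# X-ray image and move the landmark coordinates with the same matrix,
# so every generated sample stays correctly labeled.
import cv2
import numpy as np

def augment(image: np.ndarray, landmarks: np.ndarray, rng: np.random.Generator):
    h, w = image.shape[:2]
    angle = rng.uniform(-10, 10)      # degrees (assumed range)
    scale = rng.uniform(0.9, 1.1)     # assumed range
    tx, ty = rng.uniform(-0.05, 0.05, size=2) * (w, h)

    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)  # 2x3 affine
    M[:, 2] += (tx, ty)

    aug_img = cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    # Landmarks as homogeneous coordinates: (N, 3) @ (3, 2) -> (N, 2)
    pts = np.hstack([landmarks, np.ones((len(landmarks), 1))])
    return aug_img, pts @ M.T

rng = np.random.default_rng(0)
img = np.zeros((512, 512), dtype=np.uint8)          # placeholder X-ray
lm = np.array([[100.0, 200.0], [300.0, 250.0]])     # two example landmarks
aug_img, aug_lm = augment(img, lm, rng)
```

Because the image and the coordinates share one affine matrix, arbitrarily many labeled variants can be produced from each manually annotated image.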

Study on the Effect of Emissivity for Estimation of the Surface Temperature from Drone-based Thermal Images (드론 열화상 화소값의 타겟 온도변환을 위한 방사율 영향 분석)

  • Jo, Hyeon Jeong;Lee, Jae Wang;Jung, Na Young;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.1 / pp.41-49 / 2022
  • Recently, interest in the application of thermal cameras has increased with the advance of image analysis technology. Beyond simple image acquisition, applications such as digital twins and thermal image management systems have gained popularity. To this end, we studied the effect of emissivity on the DN (Digital Number) value in the process of deriving a relational expression for converting DN to the actual surface temperature. The DN value is a number representing the spectral band value of the thermal image and is an essential element of the thermal image data. However, the DN value is not a temperature value indicating the actual surface temperature but a brightness value expressing higher and lower heat as brightness, and it has a non-linear relationship with the actual surface temperature. A reliable relationship between DN and the actual surface temperature is therefore critical for thermal image processing. We examined the relationship between the actual surface temperature and the DN value of the thermal image, and then adjusted the emissivity to better estimate the actual surface temperature. As a result, the graph relating the actual surface temperature to the DN value showed a linear pattern similar to that of the graph relating the emissivity-adjusted non-contact thermometer readings to the DN value, and the non-contact temperature measured after adjusting the emissivity was closer to the actual surface temperature than before the adjustment.
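For illustration, a minimal sketch of deriving such a DN-to-temperature relation from paired measurements is shown below; the sample arrays and the first-order (linear) model are assumptions for demonstration, not values from the study:

```python
# Fit a relational expression between thermal-image DN values and
# contact-measured surface temperatures, then use it for conversion.
import numpy as np

dn = np.array([7200, 7450, 7700, 7980, 8240], dtype=float)   # hypothetical DN values
temp_c = np.array([18.1, 21.0, 24.2, 27.5, 30.9])            # hypothetical contact °C

slope, intercept = np.polyfit(dn, temp_c, 1)   # least-squares linear fit

def dn_to_temperature(dn_value: float) -> float:
    """Estimate surface temperature (°C) from a DN value using the
    fitted relation; valid only near the calibration range."""
    return slope * dn_value + intercept

print(f"T(7800) ≈ {dn_to_temperature(7800):.1f} °C")
```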

Simultaneous Removal of NO and SO2 using Microbubble and Reducing Agent (마이크로버블과 환원제를 이용한 습식 NO 및 SO2의 동시제거)

  • Song, Dong Hun;Kang, Jo Hong;Park, Hyun Sic;Song, Hojun;Chung, Yongchul G.
    • Clean Technology / v.27 no.4 / pp.341-349 / 2021
  • In combustion facilities, the nitrogen and sulfur in fossil fuels react with oxygen to generate air pollutants such as nitrogen oxides (NOX) and sulfur oxides (SOX), which are harmful to the human body and cause environmental pollution. Regulations to reduce NOX and SOX exist worldwide, and various technologies are being applied to meet them. Commercialized methods for reducing NOX and SOX emissions include selective catalytic reduction (SCR), selective non-catalytic reduction (SNCR), and wet flue gas desulfurization (WFGD), but because of the disadvantages of these methods, many studies have pursued the simultaneous removal of NOX and SOX. Even the simultaneous removal methods, however, suffer from wastewater generated by the oxidants and absorbents, from the costs of the catalysts and electrolysis required to activate specific oxidants, and from the harmfulness of the gaseous oxidants themselves. Therefore, in this research, microbubbles generated in a high-pressure disperser were combined with reducing agents to reduce costs and facilitate wastewater treatment, compensating for the shortcomings of existing simultaneous NOX and SOX treatment methods. It was confirmed through image processing and ESR (electron spin resonance) analysis that the disperser actually generates microbubbles, and NOX and SOX removal tests at various temperatures were conducted using microbubbles alone. In addition, using a reducing agent together with microbubbles to reduce wastewater, removal efficiencies of about 75% for NOX and 99% for SOX were achieved. When a small amount of oxidizing agent was added to this microbubble system, both the NOX and SOX removal rates reached 99% or more. Based on these findings, the suggested method is expected to contribute to solving the cost and environmental problems associated with wet oxidation removal methods.
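As a hedged sketch of the image-processing verification mentioned above, the snippet below estimates bubble diameters from a backlit micrograph to check whether they fall in the microbubble range (commonly taken as under roughly 50 µm); the file name, calibration scale, and noise threshold are hypothetical:

```python
# Estimate equivalent-circle bubble diameters from a grayscale image
# via Otsu thresholding and contour detection.
import cv2
import numpy as np

UM_PER_PX = 1.6  # assumed microscope calibration (µm per pixel)

img = cv2.imread("bubbles.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

diameters_um = []
for c in contours:
    area = cv2.contourArea(c)
    if area < 5:                        # drop single-pixel noise
        continue
    d_px = 2.0 * np.sqrt(area / np.pi)  # diameter of a circle of equal area
    diameters_um.append(d_px * UM_PER_PX)

print(f"mean diameter: {np.mean(diameters_um):.1f} µm, "
      f"{sum(d < 50 for d in diameters_um)}/{len(diameters_um)} below 50 µm")
```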

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies / no.79 / pp.5-32 / 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been rated among the top countries, can clearly be called a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of both and a resource for knowledge. As administrative actions through electronic systems have become widespread, the production of data-based records and the technology behind them have naturally expanded and evolved. Technology may seem value-neutral, but in fact it reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, has a profound influence not only on traditional power structures but also on the existing media through which information and knowledge are transmitted. Moreover, new technologies and media, including data-based generative artificial intelligence, are by far the hottest topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This brings a variety of problems, ranging from deep fakes and other fabricated images, automated profiling, and AI hallucinations that present falsehoods as if they were real, to the copyright infringement of machine-learning data. Radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and its online networks, built on immaterial distribution and logical existence, is that access and contact are possible only through physical tools: digital information is a logical object, but digital resources cannot be read or used without some device to relay them. In that respect, machines in today's technological society have gone beyond simple assistance, and it is difficult to describe their entry into human society as merely a natural pattern of change driven by advanced technology, because perspectives on machines will themselves change over time. What matters are the social and cultural implications of the changes in how records are produced when communication and action take place through machines. The archive field, too, must ask what problems a data-based archive society will face as technology moves toward a hyper-intelligent, hyper-connected society, who will attest to the continuing activity of records and data, and what the main drivers of media change will be. This study began from the need to recognize that archives are not only records, the results of actions, but also data as strategic assets. On this basis, the author considered how traditional boundaries can be expanded to achieve reterritorialization in a data-driven society.

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already demonstrated abilities equal to or better than those of people in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, service, and education, many efforts have been made to identify current technology trends and analyze their development directions. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects, and the technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing OSS projects associated with AI that have been developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects created on Github from 2000 to July 2018, and confirmed the development trends of major technologies in detail by applying a text mining technique to the topic information that characterizes the collected projects and their technical fields. The analysis showed that fewer than 100 software development projects were created per year until 2013, rising to 229 projects in 2014 and 597 in 2015. The number of AI-related open source projects then increased rapidly in 2016 (2,559 projects), and the 14,213 projects initiated in 2017 were almost four times the 3,555 projects created in total from 2009 to 2016. From January to July 2018, 8,737 projects were initiated. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of the AI-related OSS projects. Natural language processing remained at the top in all years, implying that its OSS had been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten, replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging appeared at the top of the centrality list, although they had not been at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten of the appearance frequency list from 2013 to 2015, it was not in the top ten by degree centrality. Overall, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. Examining the trend of technology development through appearance frequency and degree centrality showed that machine learning had the highest frequency and the highest degree centrality in all years. It is noteworthy that although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to reach the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can be used as a baseline dataset for more empirical analyses of future technology trends.
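As a rough sketch of this kind of analysis, the snippet below counts topic appearance frequency and computes degree centrality on a topic co-occurrence network; the project topic lists and the library choice (networkx) are hypothetical stand-ins for the paper's data and tooling:

```python
# Build a co-occurrence graph of Github project topics and rank topics
# by appearance frequency and degree centrality.
from itertools import combinations
from collections import Counter
import networkx as nx

projects = [  # hypothetical topic lists, one per project
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["deep-learning", "reinforcement-learning", "tensorflow"],
]

freq = Counter(t for topics in projects for t in topics)  # appearance frequency

G = nx.Graph()
for topics in projects:
    for a, b in combinations(sorted(set(topics)), 2):     # co-occurring pairs
        w = G.get_edge_data(a, b, {}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)

centrality = nx.degree_centrality(G)
for topic, c in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{topic:25s} freq={freq[topic]} degree_centrality={c:.2f}")
```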

Characterization of SID2, which is required for the production of salicylic acid, using a β-GLUCURONIDASE and LUCIFERASE reporter system in Arabidopsis (리포트 시스템을 이용한 살리실산 생합성 유전자 SID2의 발현 해석)

  • Hong, Mi-Ju;Cheong, Mi-Sun;Lee, Ji-Young;Kim, Hun;Jeong, Jae-Cheol;Shen, Mingzhe;Ali, Zahir;Park, Bo-Kyung;Choi, Won-Kyun;Yun, Dae-Jin
    • Journal of Plant Biotechnology / v.35 no.3 / pp.169-176 / 2008
  • Salicylic acid (SA) is a phytohormone involved in the plant defense mechanism. SA accumulation is triggered by abiotic and biotic stresses, and SA acts as a signaling molecule mediating systemic acquired resistance and the hypersensitive response in plants. Although the role of SA has been studied extensively, an understanding of the SA regulatory mechanism in plants is still lacking. To probe this mechanism, we transformed a SID2 promoter:GUS::LUC fusion construct into the siz1-2 mutant and wild-type plants (Col-0). SIZ1 encodes a SUMO E3 ligase and negatively regulates SA accumulation in plants. SID2 (SALICYLIC ACID INDUCTION DEFICIENT2) encodes a crucial enzyme of SA biosynthesis: the Arabidopsis SID2 gene encodes isochorismate synthase (ICS), which controls the SA level by converting chorismate to isochorismate. We compared the regulation of SID2 in wild-type and siz1-2 transgenic plants, each expressing the SID2 promoter:GUS::LUC construct. The expression of β-GLUCURONIDASE and LUCIFERASE was higher in the siz1-2 transgenic plant even without any stress treatment. The SID2 promoter:GUS::LUC/siz1-2 transgenic plant will be used as starting material for the isolation of siz1-2 suppressor mutants and of genes involved in the SA-mediated stress signaling pathway.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements to predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset; each graph is drawn as a 40 × 40 pixel image, with each independent variable drawn in a different color. In step 3, the model converts each image into three matrices representing its R (red), G (green), and B (blue) color values. In the next step, it splits the graph-image dataset into training and validation sets; we used 80% of the total dataset for training and the remaining 20% for validation. In the final step, CNN classifiers are trained on the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layers and a 2 × 2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation functions for the convolution layers and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling, building the training dataset from 80% of the total (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset were twelve technical indicators widely used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
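As a concrete illustration, the following sketch assembles a CNN with the layer sizes reported in the abstract (5 × 5 convolutions with 6 and 9 filters, 2 × 2 max pooling, hidden layers of 900 and 32 nodes, and a 2-node softmax output); the layer ordering, optimizer, and loss are assumptions, not the authors' implementation:

```python
# A CNN-FG-like binary classifier over 40x40 RGB graph images,
# following the layer sizes stated in the abstract.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(6, (5, 5), activation="relu", input_shape=(40, 40, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(9, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),     # upward / downward
])
model.compile(optimizer="adam",                # assumed training settings
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

With 40 × 40 × 3 inputs, this stack reduces each graph image to a 7 × 7 × 9 feature map before the dense layers, matching the small image size the paper works with.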