• Title/Summary/Keyword: 데이터 개발 (Data Development)


Application of Amplitude Demodulation to Acquire High-sampling Data of Total Flux Leakage for Tendon Nondestructive Estimation (텐던 비파괴평가를 위한 Total Flux Leakage에서 높은 측정빈도의 데이터를 획득하기 위한 진폭복조의 응용)

  • Joo-Hyung Lee; Imjong Kwahk; Changbin Joh; Ji-Young Choi; Kwang-Yeun Park
    • Journal of the Korea Institute for Structural Maintenance and Inspection, v.27 no.2, pp.17-24, 2023
  • A post-processing technique for the measurement signal of a solenoid-type sensor is introduced. The solenoid-type sensor nondestructively evaluates an external tendon of prestressed concrete using the total flux leakage (TFL) method. The TFL solenoid sensor consists of primary and secondary coils. A sinusoidal AC current is input to the primary coil, and a signal proportional to the derivative of the input is induced in the secondary coil. Because the amplitude of the induced signal is proportional to the cross-sectional area of the tendon, sectional loss of the tendon caused by rupture or corrosion can be identified from the induced signal. It is therefore important to extract amplitude information from the measurement signal of the TFL sensor. Previously, the amplitude was extracted using local maxima, the simplest way to obtain amplitude information. However, because amplitude extraction using local maxima drastically reduces the sampling rate, the previous method places many restrictions on the direction of TFL sensor development, such as applying additional signal processing and/or artificial intelligence. The proposed method instead uses amplitude demodulation to obtain the signal amplitude, so the sampling rate of the amplitude information is the same as that of the raw TFL sensor data. Amplitude demodulation thus provides ample freedom for development by eliminating restrictions on the primary coil input frequency and on the speed at which the sensor is applied to the external tendon, and it maintains a high measurement sampling rate, which is advantageous for additional signal processing or artificial intelligence. The proposed method was validated through experiments, and its advantages were verified through comparison with the previous method: in this study, the amplitudes extracted by amplitude demodulation provided a sampling rate 100 times greater than that of the previous method. Results may differ depending on the situation and equipment settings, but in most cases, extracting amplitude information using amplitude demodulation yields more satisfactory results than the previous method.
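
A common way to perform amplitude demodulation at the full sampling rate is the Hilbert-transform envelope. The following is a minimal sketch of that general technique on a synthetic signal; the sampling rate, carrier frequency, and amplitude step are illustrative assumptions, not the paper's sensor settings.

```python
# Minimal sketch: amplitude demodulation via the Hilbert-transform envelope.
# All signal parameters below are assumed for illustration.
import numpy as np
from scipy.signal import hilbert

fs = 10_000                       # sampling rate [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)
carrier_hz = 100                  # primary-coil excitation frequency (assumed)

# Slowly varying amplitude standing in for a change in tendon cross-section
amplitude = 1.0 - 0.3 * (t > 0.5)
signal = amplitude * np.sin(2 * np.pi * carrier_hz * t)

# Envelope = |analytic signal|: one amplitude value per raw sample, whereas
# local-maxima extraction yields only about two values per carrier cycle.
envelope = np.abs(hilbert(signal))
```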

Estimation of Representative Wave Period and Optimal Probability Density Function Using Wave Observed Data around Korean Western Coast (국내 서해안 파랑 관측자료를 이용한 대표주기 산정 및 최적 확률밀도함수 추정)

  • Uk-Jae Lee; Hong-Yeon Cho; Jin Ho Park; Dong-Hui Ko
    • Journal of Korean Society of Coastal and Ocean Engineers, v.35 no.6, pp.146-154, 2023
  • In this study, the peak wave period (Tp) and the mean wave periods (T02 and Tm-1,0), which are major parameters for classifying ocean characteristics, were calculated using water surface elevation data observed from the second west coast oceanographic and meteorological observation tower. In addition, the proportion of abnormal data was computed, the relationships between the periods were analyzed, and the optimal probability density function of each period was estimated. Among the calculated representative periods, the proportion of abnormal data at the two observation points was 5.73% and 0.67% for Tp, 4.35% and 0.01% for T02, and 2.82% and 0.03% for Tm-1,0. The relationship between T02 and Tp was calculated as 0.53 and 0.63 at the two points, that between Tm-1,0 and Tp as 1.15 and 1.32, and that between T02 and Tm-1,0 as 1.18 and 1.22. Estimating the optimal probability density function of each representative period showed that Tp followed log-normal and normal distributions at the two points, T02 followed gamma and normal distributions, and Tm-1,0 was dominated by log-normal and normal distributions, respectively. These results can serve as basic data for wave analyses conducted on the west coast.
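
Estimating an optimal probability density function for a sample of wave periods is typically done by fitting several candidate distributions and comparing goodness of fit. The sketch below uses synthetic data and the Akaike information criterion; both are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch: fit candidate PDFs to wave-period data and pick the best.
# The synthetic sample and the AIC criterion are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
periods = rng.lognormal(mean=1.5, sigma=0.2, size=1000)  # stand-in for Tp [s]

candidates = {"normal": stats.norm, "log-normal": stats.lognorm, "gamma": stats.gamma}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(periods)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(periods, *params))
    aic[name] = 2 * len(params) - 2 * loglik         # Akaike information criterion

best = min(aic, key=aic.get)
print(f"best-fitting distribution: {best}")
```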

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo; Hongki Kang; Wansang Yoon; Pyung-Chae Lim; Sooahm Rhee; Taejung Kim
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1211-1224, 2023
  • Clouds cause many difficult problems in observing land surface phenomena with optical satellites, such as national land observation, disaster response, and change detection. Moreover, the presence of clouds affects not only the image processing stage but also the final data quality, so clouds must be identified and removed. In this study, we therefore developed a new cloud detection technique that automatically searches for and extracts the pixels closest to the spectral pattern of clouds in a satellite image, selects an optimal threshold, and produces a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image is converted to top-of-atmosphere reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the top-of-atmosphere reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third, post-processing step, noise in the initial cloud mask is removed and the cloud boundaries and interiors are refined. As experimental data, CAS500-1 L2G images acquired over the Korean Peninsula from April to November were used, reflecting the diversity of the spatial and seasonal distribution of clouds. To verify the performance of the proposed method, its results were compared with those of a simple thresholding method. The experiments showed that, compared to the existing method, the proposed method detects clouds more accurately because the preprocessing accounts for the radiometric characteristics of each image. The results also showed that the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.) was minimized. The proposed method improved the F1-score by more than 30% over the existing method, but showed limitations in certain images containing snow.
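
The second step lends itself to a compact illustration. The sketch below shows HSV conversion followed by per-image triangle thresholding on a top-of-atmosphere reflectance image, assuming an RGB band order and [0, 1] scaling; the maximum likelihood classification and post-processing stages of the full pipeline are omitted.

```python
# Minimal sketch: initial cloud mask from HSV value + triangle thresholding.
# Band order and reflectance scaling are assumptions for illustration.
import cv2
import numpy as np

def initial_cloud_mask(toa_rgb: np.ndarray) -> np.ndarray:
    """toa_rgb: HxWx3 top-of-atmosphere reflectance in [0, 1] (assumed)."""
    img8 = (np.clip(toa_rgb, 0, 1) * 255).astype(np.uint8)
    hsv = cv2.cvtColor(img8, cv2.COLOR_RGB2HSV)
    value = hsv[:, :, 2]                      # clouds are bright: high V channel
    # The triangle method picks a per-image threshold from the V histogram,
    # adapting to each scene's radiometric characteristics.
    _, mask = cv2.threshold(value, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_TRIANGLE)
    return mask
```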

Use of ChatGPT in college mathematics education (대학수학교육에서의 챗GPT 활용과 사례)

  • Sang-Gu Lee; Doyoung Park; Jae Yoon Lee; Dong Sun Lim; Jae Hwa Lee
    • The Mathematical Education, v.63 no.2, pp.123-138, 2024
  • This study describes the use of ChatGPT in teaching and in students' learning processes for the course "Introductory Mathematics for Artificial Intelligence (Math4AI)" at 'S' University. We developed a customized ChatGPT and presented a learning model in which students supplement their knowledge of the topic at hand by utilizing it. More specifically, students first learn the concepts and questions of the course textbook by themselves. Then, for anything they are unsure of, they may submit questions (keywords or open problem numbers from the textbook) to our own ChatGPT at https://math4ai.solgitmath.com/ to get help. Notably, we optimized ChatGPT and minimized inaccurate information by fully utilizing various types of data related to the subject, such as textbooks, labs, discussion records, and code at http://matrix.skku.ac.kr/Math4AI-ChatGPT/. In this model, when students have questions while studying the textbook by themselves, they can ask about mathematical concepts, keywords, theorems, examples, and problems in natural language through the ChatGPT interface. Our customized ChatGPT then provides the relevant terms, concepts, and sample answers based on previous students' discussions and/or samples of Python or R code used in those discussions. Furthermore, by providing students with real-time, optimized advice based on their level, we can offer personalized education not only for the Math4AI course but also for other courses in college mathematics education. The present study, which incorporates our ChatGPT model into the teaching and learning process of the course, shows the promising applicability of AI technology to other college math courses (for instance, calculus, linear algebra, discrete mathematics, engineering mathematics, and basic statistics), to K-12 math education, and to lifelong and continuing education.
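
Course-grounded assistants of this kind are commonly built by retrieving the most relevant course material and constraining the model's answer to it. The sketch below shows one plausible wiring using the OpenAI Python SDK; the model names, corpus snippets, and retrieval scheme are hypothetical illustrations, not the Math4AI implementation.

```python
# Hypothetical sketch of a course-grounded assistant: retrieve relevant course
# snippets by embedding similarity, then answer from that context only.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

corpus = [  # placeholder course snippets, not the real Math4AI data
    "Textbook 3.2: definition of eigenvalues and eigenvectors ...",
    "Lab 5: solving Ax = b with numpy.linalg.solve ...",
    "Discussion: why the determinant test fails for singular matrices ...",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(corpus)

def answer(question: str, k: int = 2) -> str:
    q = embed([question])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = "\n".join(corpus[i] for i in np.argsort(sims)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the course material below.\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content
```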

Analysis of the Impact of Generative AI based on Crunchbase: Before and After the Emergence of ChatGPT (Crunchbase를 바탕으로 한 Generative AI 영향 분석: ChatGPT 등장 전·후를 중심으로)

  • Nayun Kim; Youngjung Geum
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.19 no.3, pp.53-68, 2024
  • Generative AI is receiving a great deal of attention around the world, and ways to use it effectively in the business environment are being explored. In particular, since the public release of the ChatGPT service, based on GPT-3.5, a large language model developed by OpenAI, it has attracted even more attention and has had a significant impact on the entire industry. This study focuses on the emergence of generative AI, especially ChatGPT, to investigate its impact on the startup industry and to compare the changes before and after its appearance. It aims to shed light on the actual application and impact of generative AI in the business environment by examining in detail how generative AI is used in the startup industry and analyzing the effect of ChatGPT's emergence on that industry. To this end, we collected company information on generative AI-related startups that appeared before and after the ChatGPT announcement and analyzed changes in industry, business content, and investment information. Through keyword analysis, topic modeling, and network analysis, we identified trends in the startup industry and how the introduction of generative AI has transformed it. We found that the number of generative AI-related startups has increased since the emergence of ChatGPT and, in particular, that both the total and the average amount of funding for such startups have increased significantly. We also found that various industries are attempting to apply generative AI technology, and that the development of services and products using generative AI, such as enterprise applications and SaaS, has been actively pursued, influencing the emergence of new business models. These findings confirm the impact of generative AI on the startup industry and contribute to our understanding of how the emergence of this innovative technology can change the business ecosystem.
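
Topic modeling over startup descriptions, one of the analyses mentioned above, can be sketched with scikit-learn's LDA implementation. The example descriptions and the topic count below are illustrative placeholders, not the Crunchbase data used in the study.

```python
# Minimal sketch: LDA topic modeling over startup descriptions.
# The tiny corpus and topic count are assumed for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

descriptions = [
    "enterprise SaaS copilot built on large language models",
    "generative AI image platform for marketing teams",
    "LLM-powered customer support automation for e-commerce",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(descriptions)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[::-1][:5]]
    print(f"topic {i}: {', '.join(top)}")   # most heavily weighted terms
```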


Exploring Pre-Service Earth Science Teachers' Understandings of Computational Thinking (지구과학 예비교사들의 컴퓨팅 사고에 대한 인식 탐색)

  • Young Shin Park; Ki Rak Park
    • Journal of the Korean Earth Science Society, v.45 no.3, pp.260-276, 2024
  • The purpose of this study is to explore whether pre-service teachers majoring in earth science improve their perception of computational thinking through STEAM classes focused on engineering-based wave power plants. The STEAM class involved designing the most efficient wave power plant model. A survey on computational thinking practices, developed from previous research, was administered to 15 earth science pre-service teachers to gauge their understanding of computational thinking. Each group developed an efficient wave power plant model based on the scientific principle of turbine operation using waves. The activities included problem recognition (problem solving), coding (coding and programming), creating a wave power plant model with a 3D printer (designing and creating a model), and evaluating the output to correct errors (debugging). The pre-service teachers showed a high level of recognition of computational thinking practices, particularly "logical thinking," with the top five of the 14 practices averaging five points each. However, participants lacked a clear understanding of certain computational thinking practices such as abstraction, problem decomposition, and using big data, and their comprehension of these decreased after the STEAM lesson. Although there was a significant reduction in the misconception that computational thinking is "playing online games" (from 4.06 to 0.86), some participants still equated it with "thinking like a computer" and "using a computer to do calculations." The study found slight improvements in "problem solving" (3.73 to 4.33), "pattern recognition" (3.53 to 3.66), and "best tool selection" (4.26 to 4.66). To enhance computational thinking skills, a practice-oriented curriculum should be offered, and additional STEAM classes on diverse topics could lead to significant improvement in computational thinking practices. Establishing an educational curriculum for multi-situational learning is therefore essential.

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui; Kim, Uihyun; Cho, Sinhee; Kim, Sansung; Yi, Mun Yong; Shin, Donghoon
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.109-131, 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technology is increasing dramatically, preadjudication (prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, and developing an expert takes a long time. Because human experts must manually evaluate all documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the reliance on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built on case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). Keyword extraction is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method used TF-IDF, the widely used de facto standard for representative keyword extraction in text mining: TF (Term Frequency) counts the frequency of a term within a document, showing how important the term is to that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, based on collaboration between machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework for document analysis. The proposed algorithm considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) to derive a final score (γ) for deciding whether the presented case concerns strategic material. The final score (γ) represents the document similarity between past cases and the new case; it is induced not only by conventional TF-IDF but also by a nuclear system similarity score that takes the context of the nuclear system domain into account. Finally, the system retrieves the top 3 documents in the case base that are most similar to the new case and provides them together with a degree of credibility. With the final score and the credibility score, a user can more easily see which documents in the case base are worth looking up, and can thus make a proper decision at relatively low cost. The system was evaluated by developing a prototype and testing it with field data, and its workflows and outcomes were verified by field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of a knowledge service application.
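
The retrieval core of such a case-based reasoning system can be sketched with TF-IDF cosine similarity. In the sketch below, the document-to-nuclear-system scores (β) and the mixing weight are hypothetical placeholders; the paper's actual combination rule for γ is not reproduced.

```python
# Minimal sketch: score a new export case against the case base with TF-IDF
# cosine similarity (alpha), blend with hypothetical system-similarity scores
# (beta), and return the top-3 cases by the combined score (gamma).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_base = [  # placeholder case descriptions
    "zirconium alloy cladding tubes for fuel assemblies ...",
    "industrial centrifuge pumps, non-nuclear grade ...",
    "neutron flux detectors for reactor instrumentation ...",
]
new_case = "pressure tubes of zirconium alloy for reactor core"

tfidf = TfidfVectorizer().fit(case_base + [new_case])
alpha = cosine_similarity(tfidf.transform([new_case]),
                          tfidf.transform(case_base))[0]   # doc-to-doc

beta = np.array([0.9, 0.1, 0.7])  # hypothetical doc-to-nuclear-system scores
a = 0.5                           # assumed mixing weight, not the paper's rule
gamma = a * alpha + (1 - a) * beta

for rank, i in enumerate(np.argsort(gamma)[::-1][:3], 1):
    print(f"{rank}. gamma={gamma[i]:.2f}  {case_base[i][:50]}")
```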

The difference of image quality using other radioactive isotope in uniformity correction map of myocardial perfusion SPECT (심근 관류 SPECT에서 핵종에 따른 Uniformity correction map 설정을 통한 영상의 질 비교)

  • Song, Jae hyuk; Kim, Kyeong Sik; Lee, Dong Hoon; Kim, Sung Hwan; Park, Jang Won
    • The Korean Journal of Nuclear Medicine Technology, v.19 no.2, pp.87-92, 2015
  • Purpose: When a patient undergoes myocardial perfusion SPECT with 201Tl, the operator injects 201Tl, but the uniformity correction map used in SPECT is acquired with 99mTc. We therefore compared image quality between a 99mTc uniformity correction map and a 201Tl uniformity correction map. Materials and Methods: A phantom study was performed. Data were acquired under the Asan Medical Center daily QC conditions with a flood phantom containing 21.3 kBq/mL of 201Tl; after post-processing, we analyzed the CFOV integral uniformity (I.U.) and differential uniformity (D.U.). Data were also acquired with a Jaszczak ECT phantom containing 33.4 kBq/mL of 201Tl, following the American College of Radiology accreditation program instructions; after post-processing, we analyzed spatial resolution, integral uniformity (I.U.), coefficient of variation (C.V.), and contrast with an Interactive Data Language (IDL) program. Results: In the flood phantom test, with the 99mTc uniformity correction map, flood I.U. was 3.6% and D.U. was 3.0%; with the 201Tl map, flood I.U. was 3.8% and D.U. was 2.1%. The flood I.U. worsened by about 5%, while the D.U. improved by about 30%. In the Jaszczak ECT phantom test, with the 99mTc map, SPECT I.U., C.V., and contrast were 13.99%, 4.89%, and 0.69; with the 201Tl map, they were 11.37%, 4.79%, and 0.78, improvements of about 18%, 2%, and 13%, respectively. Spatial resolution showed no significant change. Conclusion: In the flood phantom test, flood I.U. worsened while flood D.U. improved, so it is unclear whether image quality improved there. In the Jaszczak ECT phantom test, however, SPECT I.U., C.V., and contrast improved by about 18%, 2%, and 13%. This study is limited in that not all variables could be taken into account and only two phantoms were used. The findings may nonetheless help physicians reading nuclear medicine images, and they suggest that image quality in other nuclear medicine examinations may improve when the uniformity correction map is acquired with the radionuclide actually used, rather than defaulting to 99mTc or 201Tl.
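
The uniformity figures quoted above follow the usual NEMA-style definitions, which can be computed directly from a (pre-masked) central field of view of a flood image. The sketch below assumes the conventional 5-pixel window for differential uniformity; the phantom data itself is not reproduced.

```python
# Minimal sketch: NEMA-style uniformity metrics over a flood-image CFOV.
# `cfov` is assumed to be a 2-D count array with the CFOV already extracted.
import numpy as np

def integral_uniformity(cfov: np.ndarray) -> float:
    """I.U. = (max - min) / (max + min) * 100 over the whole CFOV."""
    return 100.0 * (cfov.max() - cfov.min()) / (cfov.max() + cfov.min())

def differential_uniformity(cfov: np.ndarray, window: int = 5) -> float:
    """D.U. = worst (max - min) / (max + min) over 5-pixel runs in rows/columns."""
    worst = 0.0
    for image in (cfov, cfov.T):              # scan rows, then columns
        for row in image:
            for s in range(len(row) - window + 1):
                w = row[s:s + window]
                worst = max(worst, (w.max() - w.min()) / (w.max() + w.min()))
    return 100.0 * worst
```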


A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon; Choi, HeungSik; Kim, SunWoong
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.135-149, 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are actively being developed, compensating for the weaknesses of traditional asset allocation methods and taking over the parts that are difficult for them. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on asset volatility; because it avoids investment risk structurally, it is stable for managing large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also trains much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict asset risk and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period, thereby reducing the estimation errors of the optimized asset allocation model; in doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test, we used Korean stock market price data covering 17 years from 2003 to 2019, composed of the energy, finance, IT, industrial, materials, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error: the total cumulative return was 45.748%, about 5% higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. Reducing the estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in changing financial markets. This study, however, not only retains the advantages of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. Various studies exist on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new method that uses machine learning to reduce the estimation errors of an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
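
The core idea, predicting next-period volatility with XGBoost and feeding it into a risk parity optimization, can be sketched as follows. The synthetic returns, the single lagged-volatility feature, and the correlation shortcut for the covariance are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch: XGBoost volatility forecasts plugged into risk parity.
# Synthetic data and features are assumed for illustration.
import numpy as np
import xgboost as xgb
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_assets, n_obs, win = 4, 500, 20
returns = rng.normal(0, 0.01, size=(n_obs, n_assets))

# One model per asset: lagged realized volatility -> next-period volatility
pred_vol = []
for j in range(n_assets):
    r = returns[:, j]
    rv = np.array([r[i - win:i].std() for i in range(win, n_obs)])
    X, y = rv[:-1].reshape(-1, 1), rv[1:]
    model = xgb.XGBRegressor(n_estimators=100, max_depth=3)
    model.fit(X, y)
    pred_vol.append(model.predict(np.array([[rv[-1]]]))[0])

# Covariance estimate from predicted vols and historical correlations
corr = np.corrcoef(returns.T)
vol = np.array(pred_vol)
cov = corr * np.outer(vol, vol)

def risk_parity_objective(w):
    rc = w * (cov @ w)                       # per-asset risk contributions
    return np.sum((rc - rc.mean()) ** 2)     # equalize the contributions

w0 = np.ones(n_assets) / n_assets
res = minimize(risk_parity_objective, w0,
               bounds=[(0.0, 1.0)] * n_assets,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("risk parity weights:", np.round(res.x, 3))
```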

Study on 3D Printer Suitable for Character Merchandise Production Training (캐릭터 상품 제작 교육에 적합한 3D프린터 연구)

  • Kwon, Dong-Hyun
    • Cartoon and Animation Studies, s.41, pp.455-486, 2015
  • 3D printing, which began with a patent registration in 1986, was a technology that attracted little attention outside a few companies, owing to the lack of awareness at the time. Today, however, as patents expire after their 20-year term, the price of 3D printers has fallen to a level that individuals can afford, and the technology is drawing attention from industry as well as the general public, helped by the spread of online information exchange, improved computer performance, and a growing familiarity with 3D data. Because 3D printing is based on digital data, which can be transmitted, revised, and supplemented digitally and turned into products without molding, it may bring a groundbreaking change to manufacturing, and it may have the same effect in the character merchandise sector. 3D printers are becoming a necessity in the production of the various figure products at the forefront of the recently popular "kidult" culture. Considering the expected demand from industrial sites related to character merchandise, and the lower prices brought by patent expiration and technology sharing, introducing courses that train students to use 3D printers seems essential for expanding employment opportunities and cultivating people capable of further creative work. However, the information available when introducing 3D printers into school education is limited. The press and information media mention only general facts, such as the growth of the industry or the promising future value of 3D printers, and academic research likewise remains at an introductory level, analyzing industry size and applicable scope or introducing the printing technology. This lack of information causes problems at education sites: introducing the technology without examining practical details, such as comparative strengths and weaknesses, inevitably incurs time and opportunity costs through trial and error, and if an expensive piece of equipment does not suit the features of school education, the losses can be significant. This research targeted general users without a technical background rather than specialists. Instead of merely introducing 3D printer technology as previous work has done, it compared strengths and weaknesses and analyzed the problems and caveats in use for each representative technology, explained what features a 3D printer should have when used in education on figure merchandise development as optional cultural content in cartoon-related departments, and sought to provide practically helpful information for future education using 3D printers. In the main body, the technologies are classified from a new perspective, by support (buttress) method, types of materials, two-dimensional printing method, and three-dimensional printing method; this classification was chosen to make the practical problems in use easy to compare. In conclusion, the FDM printer was selected as the most suitable: although its output quality is somewhat lower, it is comparatively cheap, and its repair, maintenance, and material costs are low. It is additionally recommended to choose a vendor that provides good technical support.