Journal of the Korea Society of Computer and Information / v.28 no.4 / pp.65-73 / 2023
In this paper, we propose a data augmentation method based on CNN (Convolutional Neural Network) learning for efficiently obtaining concrete crack image datasets. Real concrete crack images are not only difficult to obtain because of their unstructured shapes and complex patterns, but data acquisition may also expose workers to dangerous situations. In this paper, we address the problem of collecting such datasets efficiently in terms of cost and time by using vector- and thickness-based data augmentation techniques. To demonstrate the effectiveness of the proposed method, experiments were conducted on various scenes using U-Net-based crack detection, and performance improved in all scenes when measured by IoU accuracy. When the concrete crack data was not augmented, the percentage of incorrect predictions was about 25%, but when the data was augmented by our method, it was reduced to 3%.
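As a rough illustration of the kind of vector- and thickness-based augmentation described above, the following is a minimal sketch (the function name, jitter amount, and thickness range are hypothetical choices, not the authors' implementation): a crack is represented as a polyline, its control points are perturbed, and it is redrawn with a different stroke width.

import numpy as np
import cv2

def augment_crack(points, size=(256, 256), jitter=4, thickness_range=(1, 5), rng=None):
    # Hypothetical vector/thickness augmentation: perturb the crack polyline
    # (vector step) and redraw it with a random stroke width (thickness step).
    rng = np.random.default_rng() if rng is None else rng
    pts = points + rng.integers(-jitter, jitter + 1, size=points.shape)
    pts = np.clip(pts, 0, [size[1] - 1, size[0] - 1]).astype(np.int32)
    t = int(rng.integers(thickness_range[0], thickness_range[1] + 1))
    mask = np.zeros(size, dtype=np.uint8)
    cv2.polylines(mask, [pts.reshape(-1, 1, 2)], isClosed=False, color=255, thickness=t)
    return mask

# Example: expand one annotated crack polyline into ten synthetic training masks.
base = np.array([[30, 200], [80, 150], [140, 160], [200, 60]])
masks = [augment_crack(base) for _ in range(10)]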
In this paper, we propose a novel method for cross-correlation based double-talk detection (DTD) that employs a Gaussian Mixture Model (GMM) in the frequency domain. The proposed algorithm transforms the cross-correlation coefficient used in the time domain into 16 channels in the frequency domain using the discrete Fourier transform (DFT). Seven feature vectors are then selected from these channels for the GMM, and we identify three different regions, namely far-end, double-talk, and near-end speech, by comparing likelihoods based on those feature vectors. The presented DTD algorithm efficiently detects double-talk regions without the Voice Activity Detector that has been used in conventional cross-correlation based double-talk detection. The performance of the proposed algorithm is evaluated under various conditions and yields better results than the conventional schemes. In particular, it shows robustness against detection errors resulting from background noise or echo path changes, which is one of the key issues in practical DTD.
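A minimal sketch of the decision stage of such a detector is given below. The 16 frequency-domain channels and 7-dimensional feature vector follow the abstract, but the channel-selection rule, GMM settings, and training data are placeholders, not the authors' exact procedure.

import numpy as np
from sklearn.mixture import GaussianMixture

N_FFT, N_CHANNELS, N_FEATURES = 256, 16, 7

def channel_cross_correlation(far_frame, mic_frame):
    """Per-channel coherence-like cross-correlation in the frequency domain."""
    X = np.fft.rfft(far_frame, N_FFT)
    Y = np.fft.rfft(mic_frame, N_FFT)
    edges = np.linspace(0, len(X), N_CHANNELS + 1, dtype=int)
    rho = np.empty(N_CHANNELS)
    for k in range(N_CHANNELS):
        x, y = X[edges[k]:edges[k + 1]], Y[edges[k]:edges[k + 1]]
        num = np.abs(np.sum(x * np.conj(y)))
        den = np.sqrt(np.sum(np.abs(x) ** 2) * np.sum(np.abs(y) ** 2)) + 1e-12
        rho[k] = num / den
    return rho

def select_features(rho):
    # Placeholder rule: keep the 7 lowest-frequency channels as the feature vector.
    return rho[:N_FEATURES]

# One GMM per region; here they are fit on random placeholder frames, whereas in
# practice they would be trained on labelled far-end, double-talk, and near-end frames.
models = {name: GaussianMixture(n_components=4, covariance_type="diag")
          for name in ("far_end", "double_talk", "near_end")}
rng = np.random.default_rng(0)
for name, m in models.items():
    m.fit(rng.random((200, N_FEATURES)))

def classify(feature_vec, models):
    """Pick the region whose GMM gives the highest log-likelihood."""
    scores = {name: m.score_samples(feature_vec[None, :])[0] for name, m in models.items()}
    return max(scores, key=scores.get)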
This study stems from a question: how should we understand the pattern of the Korean economy after the 1990s? Among the various applicable analytic methods, this study chooses a Structural Vector Autoregression (SVAR) with long-run restrictions, identifies the diverse shocks that gave rise to the current status of the Korean economy, and differentiates the relative contributions of those shocks. To that end, SVAR is applied to four economic models: Blanchard and Quah (1989)'s 2-variable model, its 3-variable extension, and two other New Keynesian type linear models modified from Stock and Watson (2002). In particular, the latter two models are devised to reflect the recent transitions in the determination of the foreign exchange rate (from a fixed-rate regime to a flexible one) as well as in the monetary policy rule (from aggregate targeting to inflation targeting). When the estimation results are organized in the form of impulse responses and forecast error variance decompositions, two common findings emerge. First, changes in the rate of economic growth are mainly attributable to productivity shocks, and this trend has grown stronger since the 2000s, which indicates that Korea's economic growth since the 2000s has been closely associated with its potential growth rate. Second, the magnitude or persistence of the impulse responses tends to have subsided since the 2000s. Given Korea's high dependence on trade, it is possible that low interest rates, low inflation, steady growth, and the economic emergence of China as a world player have helped secure capital and demand for exports and imports, which might therefore have reduced the impact of each shock on overall economic conditions. Although a diverse mixture of models and shocks has been used for the analysis, these two common findings are consistently observed. Therefore, it can be concluded that the decreased rate of economic growth of Korea since 2000 appears to be on the same track as the decrease in Korea's potential growth rate. The remainder of this paper is organized as follows. The second section observes the recent trend of Korea's economic development and reviews related Korean articles, which helps define the scope and analytic methodology of this study. The third section presents the analysis model used in this study, the Structural VAR mentioned above; the variables used, the estimation equations, and the identification conditions for the shocks are explained. The fourth section reports the estimation results derived from the model, and the fifth section concludes.
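For reference, the long-run identification used in the 2-variable Blanchard and Quah (1989) model can be stated as follows (a standard textbook summary, not the paper's exact specification). With $x_t=(\Delta y_t,\, u_t)'$ and orthonormal structural shocks $\varepsilon_t$,
$$
x_t=\sum_{j=0}^{\infty}C_j\,\varepsilon_{t-j},\qquad
\mathrm{E}[\varepsilon_t\varepsilon_t']=I,\qquad
C(1)=\sum_{j=0}^{\infty}C_j=\begin{pmatrix}c_{11}&0\\ c_{21}&c_{22}\end{pmatrix},
$$
where the zero restriction means that the demand shock has no long-run effect on output. Given the estimated VAR coefficient sum $A(1)=I-\sum_i\Phi_i$ and the residual covariance $\Sigma$, $C(1)$ is recovered as the lower-triangular Cholesky factor of $A(1)^{-1}\Sigma\,(A(1)^{-1})'$, and the structural shocks follow from the implied rotation of the reduced-form residuals.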
Journal of Korean Society of Coastal and Ocean Engineers / v.35 no.6 / pp.109-120 / 2023
The High-Frequency Radar (HFR) is equipment designed to measure real-time surface ocean currents over broad maritime areas. It emits radio waves at a specific frequency (HF) towards the sea surface and analyzes the backscattered waves to measure surface current vectors (Crombie, 1955; Barrick, 1972). The SeaSonde HF radar from CODAR, used in this study, determines the speed and location of radial currents by analyzing the Bragg peak intensity of the transmitted and received waves from an omnidirectional antenna and employing the Multiple Signal Classification (MUSIC) algorithm. The generated currents are initially treated as ideal patterns that do not take into account the characteristics of the local electromagnetic wave propagation environment. To correct this, Antenna Pattern Measurement (APM) is performed: the strength of signals received by the antenna is measured at various positions, and corrected radial current vectors are calculated from the measured pattern. The APM principle is to adjust the position and phase information of the currents based on the measured signal strength at each location. Typically, such experiments are conducted by installing an antenna on a ship (Kim et al., 2022). However, using a ship introduces various environmental constraints, such as weather conditions and maritime situations. To reduce dependence on maritime conditions and enhance economic efficiency, this study explores the possibility of using unmanned aerial vehicles (drones) for APM. The APM experiments were conducted using a high-frequency radar installed at Dangsa Lighthouse in Dangsa-ri, Wando County, Jeollanam-do. The study compared and analyzed the results of the ship-based and drone-based APM experiments, using the radial currents and surface current fields calculated from each experiment.
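As background, the standard first-order Bragg relations (textbook results, not part of the CODAR software or this study's processing) link the radar frequency to the Bragg peak position and the peak's Doppler shift to the radial current speed; the 13 MHz operating frequency in the example is an assumed value for illustration.

import math

def bragg_frequency(radar_freq_hz, g=9.81, c=3.0e8):
    """First-order Bragg frequency (Hz) for deep-water ocean waves."""
    radar_wavelength = c / radar_freq_hz
    return math.sqrt(g / (math.pi * radar_wavelength))

def radial_current(doppler_shift_hz, radar_freq_hz, c=3.0e8):
    """Radial current speed (m/s) from the Doppler shift of the Bragg peak."""
    return c * doppler_shift_hz / (2.0 * radar_freq_hz)

# e.g. at an assumed 13 MHz operating frequency, f_B is about 0.37 Hz, and an
# extra 0.01 Hz Doppler shift corresponds to roughly 0.12 m/s of radial current.
print(bragg_frequency(13e6), radial_current(0.01, 13e6))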
In this study, muscle activity measured using surface EMG (sEMG) during a voluntary maneuver (ankle dorsiflexion) in the supine position was compared before and after gait training. Nine patients with incomplete spinal cord injury participated in supported treadmill ambulation training (STAT), twenty minutes a day, five days a week, for three months. Two tests, a gait speed test and a voluntary maneuver test, were made on the same day, or at least in the same week, before and after gait training. Data recorded from ten healthy subjects performing the same voluntary maneuvers were used as the reference. sEMG measured from ten lower limb muscles was used to observe two features, the amplitude and the motor control distribution pattern, named the response vector. The results showed that the average gait speed of the patients increased significantly (p < 0.1) from 0.47 ± 0.35 m/s to 0.68 ± 0.52 m/s. In the sEMG analysis, six out of nine patients showed a tendency toward increased right tibialis anterior activity during right ankle dorsiflexion, from 109.7 ± 148.5 µV to 145.9 ± 180.7 µV, but the increase was not significant (p < 0.055). In addition, only two patients showed an increase in the correlation coefficient and total muscle activity on the left side during left dorsiflexion. Patients' muscle activity changes after gait training varied individually and generally depended on their muscle control abilities in the pre-STAT status. The response vector, introduced for quantitative analysis, showed good potential to anticipate, evaluate, and/or guide patients with SCI before and after gait training.
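One way such a response-vector comparison could be quantified is sketched below: the vector of sEMG amplitudes from the ten recorded muscles is summarized by its total activity and its Pearson correlation with the healthy-reference pattern. The muscle order and all numbers are purely illustrative assumptions, not the study's data.

import numpy as np

muscles = ["TA", "MG", "LG", "SOL", "VM", "VL", "RF", "BF", "GMAX", "GMED"]  # assumed order

def response_vector_summary(patient_uV, reference_uV):
    patient = np.asarray(patient_uV, dtype=float)
    reference = np.asarray(reference_uV, dtype=float)
    total_activity = patient.sum()              # overall activation level
    r = np.corrcoef(patient, reference)[0, 1]   # similarity of the distribution pattern
    return total_activity, r

# Illustrative pre/post comparison for one maneuver (values are made up):
pre = [110, 12, 10, 9, 20, 18, 15, 8, 5, 6]
post = [146, 10, 9, 8, 22, 19, 16, 7, 5, 6]
healthy = [250, 15, 12, 10, 18, 16, 14, 9, 6, 7]
print(response_vector_summary(pre, healthy), response_vector_summary(post, healthy))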
Principal component analysis (PCA) is a well-known data analysis method that is useful for linear feature extraction and data compression. PCA is a linear transformation that applies an orthogonal rotation to the original data so as to maximize the retained variance. It is a classical technique for obtaining an optimal overall mapping of linearly dependent patterns of correlation between variables (e.g. neurons). PCA provides, in the mean-squared error sense, an optimal linear mapping of the signals that are spread across a group of variables. These signals are concentrated into the first few components, while the noise, i.e. variance that is uncorrelated across variables, is sequestered in the remaining components. PCA has been used extensively to resolve temporal patterns in neurophysiological recordings. Because the retinal signal is a stochastic process, PCA can be used to identify retinal spikes. The retina was isolated from an excised rabbit eye, and a piece of retina was attached with the ganglion cell side to the surface of a microelectrode array (MEA). The MEA consisted of a glass plate with 60 substrate-integrated and insulated golden connection lanes terminating in an 8 × 8 array (spacing 200 µm, electrode diameter 30 µm) in the center of the plate. The MEA 60 system was used for recording retinal ganglion cell activity. The action potentials of each channel were sorted with an offline analysis tool. Spikes were detected with a threshold criterion and sorted according to their principal component composition. The first (PC1) and second (PC2) principal component values were calculated using all the waveforms of each channel and all n time points in each waveform, so that several clusters could be clearly separated in two dimensions. We verified that PCA-based waveform detection is effective as an initial approach to spike sorting.
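A minimal sketch of PCA-based spike sorting as described above is shown below, assuming threshold-detected spike waveforms are already extracted per channel; scikit-learn PCA and a simple k-means clustering are used here for illustration, not the offline analysis tool mentioned in the text.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(waveforms, n_units=2):
    """waveforms: (n_spikes, n_timepoints) array of detected spike waveforms."""
    # Project every waveform onto the first two principal components (PC1, PC2).
    scores = PCA(n_components=2).fit_transform(waveforms)
    # Clusters in the PC1-PC2 plane are taken as putative single units.
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(scores)
    return scores, labels

# Synthetic demo: two units with different spike shapes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 40)
unit_a = -np.exp(-((t - 0.3) ** 2) / 0.002)
unit_b = -0.5 * np.exp(-((t - 0.5) ** 2) / 0.008)
waveforms = np.vstack([unit_a + 0.05 * rng.standard_normal((100, 40)),
                       unit_b + 0.05 * rng.standard_normal((100, 40))])
scores, labels = sort_spikes(waveforms)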
Kim, Jeongha; Lee, Jipyeong; Jang, Seonghyun; Cho, Yoonho
Journal of Intelligence and Information Systems / v.29 no.1 / pp.249-263 / 2023
Collaborative Filtering, a representative recommendation system methodology, consists of two approaches: neighbor methods and latent factor models. Among these, the latent factor model using matrix factorization decomposes the user-item interaction matrix into two lower-dimensional rectangular matrices, predicting an item's rating through the product of these matrices. Because the factor vectors inferred from rating patterns capture user and item characteristics, this method is superior in scalability, accuracy, and flexibility to neighbor-based methods. However, it has a fundamental drawback: it cannot reflect the diversity of different individuals' preferences for items with no ratings. This limitation leads to repetitive and inaccurate recommendations. The Adaptive Deep Latent Factor Model (ADLFM) was developed to address this issue. This model adaptively learns the preferences for each item by using the item description, which provides a detailed summary and explanation of the item. ADLFM takes the item description as input, calculates latent vectors of the user and item, and presents a method that can reflect personal diversity using an attention score. However, because it requires a dataset that includes item descriptions, the domains to which ADLFM can be applied are limited, which restricts its generalization. This study proposes a Generalized Adaptive Deep Latent Factor Recommendation Model, G-ADLFRM, to overcome the limitations of ADLFM. First, we use the item ID, commonly used in recommendation systems, as input instead of the item description. Additionally, we apply improved deep learning model structures such as Self-Attention, Multi-head Attention, and Multi-Conv1D. We conducted experiments on various datasets with changes to the input and the model structure. The results showed that when only the input was changed, MAE increased slightly compared to ADLFM because of the accompanying information loss, resulting in decreased recommendation performance. However, the average learning speed per epoch improved significantly as the amount of information to be processed decreased. When both the input and the model structure were changed, the best-performing Multi-Conv1D structure showed performance similar to ADLFM, sufficiently counteracting the information loss caused by the input change. We conclude that G-ADLFRM is a new, lightweight, and generalizable model that maintains the performance of the existing ADLFM while enabling fast learning and inference.
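For readers unfamiliar with the baseline, a minimal sketch of the latent factor model referenced above (matrix factorization trained by stochastic gradient descent) is given below; this illustrates the general technique, not G-ADLFRM itself, and all hyperparameters and the toy ratings are placeholder values.

import numpy as np

def train_mf(ratings, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """ratings: list of (user_idx, item_idx, rating) triples."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factor vectors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factor vectors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                 # prediction error for this rating
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def predict(P, Q, u, i):
    # Predicted rating is the inner product of the user and item factor vectors.
    return P[u] @ Q[i]

# Toy usage with a few made-up ratings on a 0-5 scale.
data = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1), (2, 1, 2), (2, 2, 5)]
P, Q = train_mf(data, n_users=3, n_items=3)
print(round(predict(P, Q, 0, 2), 2))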
Journal of the Korea Society of Computer and Information / v.19 no.10 / pp.63-70 / 2014
The process of recognizing objects in binary images consists of image segmentation and pattern matching. If binary objects in the image are assumed to be separated, global features such as area, perimeter length, or the ratio of the two can be used to recognize the objects in the image. However, if such an assumption is not valid, global features cannot be used; instead, local features such as points or line segments should be used to recognize the objects. In this paper, points with large curvature along the perimeter are chosen as feature points, and pairs of points selected from them are used as local features. The similarity of two local features is defined using the elastic deformation energy required to make their lengths equal and to align the gradient vectors at their end points. A neighbour support value is defined and used for robust recognition of partially occluded binary objects. An experiment on the Kimia-25 data showed that the proposed algorithm runs 4.5 times faster than the maximum clique algorithm with the same recognition rate.
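The first step described above, selecting high-curvature perimeter points as feature points, could be sketched as follows; the contour extraction uses OpenCV, and the chord-based curvature estimate and threshold are illustrative choices rather than the paper's exact definitions.

import numpy as np
import cv2

def curvature_feature_points(binary_img, step=5, min_curvature=0.5):
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = contours[0][:, 0, :].astype(float)      # (N, 2) perimeter points
    prev = np.roll(pts, step, axis=0) - pts       # backward chord
    nxt = np.roll(pts, -step, axis=0) - pts       # forward chord
    # Turning angle between the two chords serves as a discrete curvature measure.
    cosang = np.sum(prev * nxt, axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1) + 1e-12)
    angle = np.pi - np.arccos(np.clip(cosang, -1.0, 1.0))
    return pts[angle > min_curvature]

# Usage: feature points of a filled triangular blob (its three corners dominate).
img = np.zeros((200, 200), np.uint8)
cv2.fillPoly(img, [np.array([[30, 160], [170, 160], [100, 30]], dtype=np.int32)], 255)
print(len(curvature_feature_points(img)))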
The purpose of this study is to segment recreationists into groups that are homogeneous with respect to their spending patterns and trip characteristics. Data were derived from a larger study aimed at developing nationally representative expenditure profiles for recreation visitors to Corps of Engineers projects. Segmentation of these data reduces variance and helps to identify distinctive final demand vectors for input-output application. A priori and cluster analysis approaches to identifying segments are compared. The a priori segmentation approach identified 12 segments and the cluster analysis approach identified 3 segments. The 3 nonresident clusters, labeled "day use", "overnight", and "overnight camping", show lower mean squares within groups than the a priori segments on almost all nonresident spending categories, with the exception of boating expenses. For the Corps of Engineers, the implications of these findings for the estimation of economic impacts are discussed.
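An illustrative sketch of the cluster-analysis segmentation approach (k-means on per-party spending profiles) is shown below; the three-cluster count follows the text, but the spending categories and the data are placeholders, not the Corps of Engineers survey data.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

categories = ["lodging", "food", "gas_auto", "boating", "recreation", "other"]  # assumed

def segment_visitors(spending, n_segments=3, seed=0):
    """spending: (n_parties, n_categories) expenditure matrix."""
    X = StandardScaler().fit_transform(spending)   # put categories on a common scale
    km = KMeans(n_clusters=n_segments, n_init=10, random_state=seed).fit(X)
    return km.labels_, km.cluster_centers_

rng = np.random.default_rng(0)
spending = rng.gamma(shape=2.0, scale=20.0, size=(500, len(categories)))  # placeholder data
labels, centers = segment_visitors(spending)
# Mean spending profile of each segment, back in original dollars:
for s in range(3):
    print(s, spending[labels == s].mean(axis=0).round(1))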
Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.17 no.2 / pp.177-187 / 1999
To date, in many application fields of GSIS, vector-based spatial overlay or grid-based spatial algebra has usually been used for the extraction and analysis of spatial data. However, because these methods are based on traditional crisp sets, they partition many kinds of spatial data with sharp boundaries, which does not agree with the spatial distribution pattern of data in the real world. As a result, they suffer from the error that a region or object is restricted to only one attribute (one entity, one value). In this study, to improve the previous methods that handle spatial data with crisp sets, we suggest applying to the spatial overlay process the concept of fuzzy sets, which is well suited to expressing the vagueness or ambiguity of spatial data boundaries. Two methods are given. The first is a fuzzy interval partition by fuzzy subsets for spatially continuous data, and the second is a fuzzy boundary set applied to categorical data. Through a case study producing a land suitability map for selecting a new town development site, we compared the results of the Boolean analysis method and the fuzzy spatial overlay method. As a result, we found that the suitability map produced with the fuzzy spatial overlay method provides more reasonable information about the development site of the new town and is more adequate in terms of presentation.
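To make the contrast concrete, a small sketch of Boolean versus fuzzy overlay for two continuous suitability criteria is given below; the linear membership function, the min-operator combination, and all thresholds and sample values are common illustrative choices, not the exact functions used in the study.

import numpy as np

def boolean_suitability(slope_deg, dist_road_m, max_slope=15, max_dist=2000):
    # Crisp overlay: a cell is suitable only if every criterion passes its threshold.
    return (slope_deg <= max_slope) & (dist_road_m <= max_dist)

def fuzzy_membership(x, ideal, limit):
    # Linear decrease from 1 at `ideal` to 0 at `limit` (a simple fuzzy boundary).
    return np.clip((limit - x) / (limit - ideal), 0.0, 1.0)

def fuzzy_suitability(slope_deg, dist_road_m):
    mu_slope = fuzzy_membership(slope_deg, ideal=5, limit=25)
    mu_dist = fuzzy_membership(dist_road_m, ideal=500, limit=3000)
    return np.minimum(mu_slope, mu_dist)   # fuzzy AND (min operator) overlay

slope = np.array([3.0, 14.0, 16.0, 24.0])
dist = np.array([400.0, 1800.0, 2100.0, 900.0])
print(boolean_suitability(slope, dist))          # hard 0/1 decisions per cell
print(fuzzy_suitability(slope, dist).round(2))   # graded suitability instead of 0/1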