• Title/Summary/Keyword: the types of errors


Comparison of Multi-Satellite Sea Surface Temperatures and In-situ Temperatures from Ieodo Ocean Research Station (이어도 해양과학기지 관측 수온과 위성 해수면온도 합성장 자료와의 비교)

  • Woo, Hye-Jin;Park, Kyung-Ae;Choi, Do-Young;Byun, Do-Seung;Jeong, Kwang-Yeong;Lee, Eun-Il
    • Journal of the Korean Earth Science Society / v.40 no.6 / pp.613-623 / 2019
  • Over the past decades, daily sea surface temperature (SST) composite data have been produced from periodic, wide-coverage satellite SST observations and used for a variety of purposes, including climate change monitoring and oceanic and atmospheric forecasting. In this study, we evaluated the accuracy and analyzed the error characteristics of SST composite data in the seas around the Korean Peninsula for optimal use in these regional seas. Four types of multi-satellite SST composite data, OSTIA (Operational Sea Surface Temperature and Sea Ice Analysis), OISST (Optimum Interpolation Sea Surface Temperature), CMC (Canadian Meteorological Centre) SST, and MURSST (Multi-scale Ultra-high Resolution Sea Surface Temperature), collected from January 2016 to December 2016, were evaluated against in-situ temperature data measured at the Ieodo Ocean Research Station (IORS). Against the IORS in-situ temperatures, the SST composite data showed biases ranging from a minimum of 0.12℃ (OISST) to a maximum of 0.55℃ (MURSST) and root mean square errors (RMSE) ranging from a minimum of 0.77℃ (CMC SST) to a maximum of 0.96℃ (MURSST). Inter-comparison between the SST composite fields exhibited biases of -0.38 to 0.38℃ and RMSE of 0.55 to 0.82℃. The OSTIA and CMC SST data showed the smallest errors, while the OISST and MURSST data showed the largest. Time series extracted at the grid point closest to the IORS showed an apparent seasonal variation in both the IORS in-situ temperature and all the SST composite data. In spring, however, the SST composite data tended to be overestimated relative to the in-situ temperature observed at the IORS.
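
The bias and RMSE statistics quoted above follow their standard definitions; a minimal sketch (with hypothetical matchup values, not the study's data) could look like this:

```python
import numpy as np

def bias_and_rmse(satellite_sst, insitu_sst):
    """Mean difference (bias) and root mean square error (RMSE) between
    collocated satellite SST and in-situ temperatures, both in degrees C."""
    diff = np.asarray(satellite_sst, dtype=float) - np.asarray(insitu_sst, dtype=float)
    bias = np.nanmean(diff)
    rmse = np.sqrt(np.nanmean(diff ** 2))
    return bias, rmse

# Hypothetical daily matchups (illustrative values only, not IORS data)
insitu = [14.2, 14.8, 15.1, 16.0, 18.3]
composite = [14.5, 15.0, 15.6, 16.4, 18.6]
print(bias_and_rmse(composite, insitu))
```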

Measurement and Quality Control of MIROS Wave Radar Data at Dokdo (독도 MIROS Wave Radar를 이용한 파랑관측 및 품질관리)

  • Jun, Hyunjung;Min, Yongchim;Jeong, Jin-Yong;Do, Kideok
    • Journal of Korean Society of Coastal and Ocean Engineers / v.32 no.2 / pp.135-145 / 2020
  • Waves are observed either by direct methods, which measure the water surface elevation with a wave buoy or pressure gauge, or by remote-sensing methods. Wave buoys and pressure gauges produce high-quality wave data but carry a high risk of instrument damage or loss and high maintenance costs offshore. Remote observation methods such as radar, on the other hand, are easy to maintain because the equipment is installed on land, but their accuracy is somewhat lower than that of direct observation. This study investigates the data quality of the MIROS Wave and Current Radar (MWR) installed at Dokdo and improves the quality of the remote wave observation data using observations from the wave buoy (CWB) operated by the Korea Meteorological Administration. We developed and applied three types of wave data quality control: 1) the combined use (Optimal Filter) of the filters designed by MIROS (Reduce Noise Frequency, Phillips Check, Energy Level Check), 2) the spike test algorithm (Spike Test) developed by the OOI (Ocean Observatories Initiative), and 3) a new filter (H-Ts QC) using the significant wave height-period relationship. With the three quality controls applied, the MWR data are reasonably reliable for significant wave height, whereas some errors remain in the significant wave period, so improvements are still required. Also, since the MWR data differ somewhat from the CWB data in high waves of over 3 m, further work such as the collection and analysis of long-term remote wave observation data and additional filter development is necessary.
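
As an illustration of the kind of spike check mentioned above, the sketch below flags points that deviate strongly from their neighbours; the window length and threshold are hypothetical choices, not the settings of the OOI algorithm or of the filters used in the study:

```python
import numpy as np

def spike_test(series, window=5, n_std=3.0):
    """Flag values that deviate from the local median by more than n_std
    local standard deviations (1 = pass, 4 = fail), loosely in the spirit
    of the range/spike checks used in operational ocean data QC."""
    x = np.asarray(series, dtype=float)
    flags = np.ones(len(x), dtype=int)
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        neighbours = np.delete(x[lo:hi], i - lo)
        if neighbours.size and abs(x[i] - np.median(neighbours)) > n_std * (np.std(neighbours) + 1e-6):
            flags[i] = 4
    return flags

hs = [1.25, 1.3, 1.25, 6.8, 1.3, 1.4]   # hypothetical significant wave heights (m)
print(spike_test(hs))                   # the 6.8 m spike should be flagged
```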

Assessment of Positioning Accuracy of UAV Photogrammetry based on RTK-GPS (RTK-GPS 무인항공사진측량의 위치결정 정확도 평가)

  • Lee, Jae-One;Sung, Sang-Min
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.4 / pp.63-68 / 2018
  • The establishment of Ground Control Points (GCPs) is the most time-consuming and expensive step in UAV photogrammetry. Recently, rapid developments in navigation sensors and communication technologies have enabled Unmanned Aerial Vehicles (UAVs) to perform photogrammetric mapping without GCPs thanks to new methods such as RTK (Real Time Kinematic) and PPK (Post Processed Kinematic) positioning. In this study, an experiment was conducted to evaluate the potential of RTK-UAV mapping without GCPs compared with non-RTK-UAV mapping. The positioning accuracies produced from images obtained simultaneously by the two types of UAVs were compared and analyzed: an RTK-UAV without GCPs and a non-RTK-UAV with different numbers of GCPs. The images were taken with a Canon IXUS 127 camera (focal length 4.3 mm, pixel size 1.3 μm) at a flying height of approximately 160 m, corresponding to a nominal ground sample distance (GSD) of approximately 4.7 cm. As a result, the RMSE (planimetric/vertical) of positional accuracy by the non-RTK method was 4.8 cm/8.2 cm with 5 GCPs, 5.4 cm/10.3 cm with 4 GCPs, and 6.2 cm/12.0 cm with 3 GCPs. For non-RTK-UAV photogrammetry with no GCPs, the positioning errors increased greatly, to approximately 112.9 cm in the horizontal and 204.6 cm in the vertical coordinates. In contrast, the RTK method with no ground control points reduced the planimetric and vertical position errors remarkably, to 13.1 cm and 15.7 cm, respectively, compared with the non-RTK method. Overall, UAV photogrammetry supported by RTK-GPS technology, which enables precise positioning without control points, is expected to be useful in the field of spatial information.
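
The planimetric/vertical RMSE values above are checkpoint statistics; a minimal sketch of how such figures are computed (the checkpoint coordinates below are hypothetical):

```python
import numpy as np

def checkpoint_rmse(measured_xyz, reference_xyz):
    """Planimetric (XY) and vertical (Z) RMSE of checkpoints, in the same
    units as the input coordinates."""
    d = np.asarray(measured_xyz, dtype=float) - np.asarray(reference_xyz, dtype=float)
    rmse_xy = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    rmse_z = np.sqrt(np.mean(d[:, 2] ** 2))
    return rmse_xy, rmse_z

# Hypothetical checkpoints in metres (illustrative values only)
measured  = [[10.03, 20.05, 5.07], [30.02, 40.01, 6.10], [50.04, 60.02, 7.05]]
reference = [[10.00, 20.00, 5.00], [30.00, 40.05, 6.00], [50.00, 60.00, 7.00]]
print(checkpoint_rmse(measured, reference))
```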

An Evaluation of the Use of Statistical Methods in the Journal of Tuberculosis and Respiratory Diseases ([결핵 및 호흡기질환] 게재 논문의 통계적 기법 활용에 대한 평가)

  • Koh, Won-Jung;Lee, Seung-Joon;Kang, Min Jong;Lee, Hun Jae
    • Tuberculosis and Respiratory Diseases / v.57 no.2 / pp.168-179 / 2004
  • Background: Statistical analysis is an essential procedure for ensuring that research results are based on evidence rather than opinion. The purpose of this study is to evaluate which statistical techniques are used in the journal Tuberculosis and Respiratory Diseases and whether these methods are used appropriately. Materials and Methods: We reviewed 185 articles published in Tuberculosis and Respiratory Diseases in 1999. We evaluated the validity of the statistical methods used against a checklist developed from the International Committee of Medical Journal Editors' guideline for statistical reporting in medical journal articles. Results: Of the 185 articles, 110 (59.5%) were original articles and 61 (33.0%) were case reports. Of the 112 articles remaining after excluding case reports and reviews, 107 (95.5%) used statistical techniques. Both descriptive and inferential methods were used in 94 articles (83.9%), while only descriptive methods were used in 13 (11.6%). Among the inferential techniques, comparison of means was most commonly used (64/94, 68.1%), followed by contingency tables (43/94, 45.7%) and correlation or regression (18/94, 19.1%). Among the articles using descriptive methods, 83.2% (89/107) presented measures of central tendency and dispersion inappropriately. Among the articles using inferential methods, improper methods were applied in 88.8% (79/89), and the most frequent misuse was the inappropriate use of parametric methods (35/89, 39.3%). Only 14 articles (13.1%) were satisfactory in their use of statistical methodology. Conclusion: Most of the statistical errors found in the journal were misuses of basic statistical methods. This study suggests that researchers should be more careful when describing and applying statistical methods and that a more extensive statistical refereeing system is needed.
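
The most frequent misuse reported, applying parametric methods where their assumptions do not hold, can be guarded against with a simple normality check before choosing a test. The sketch below uses hypothetical data and is only one of several reasonable decision rules, assuming SciPy is available:

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Choose a parametric or nonparametric two-group comparison based on a
    normality check of each group -- the kind of step whose omission leads
    to the inappropriate use of parametric methods described above."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Hypothetical measurements from two patient groups (illustrative only)
group1 = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]
group2 = [6.2, 5.9, 6.8, 6.1, 12.4, 6.0]   # skewed by an outlier
print(compare_two_groups(group1, group2))
```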

How does the introduction of smart technology change school science inquiry?: Perceptions of elementary school teachers (스마트 기기 도입이 과학탐구 활동을 어떻게 변화시킬 것인가? -교육대학원 초등과학 전공 교사의 인식 사례를 중심으로-)

  • Chang, Jina;Joung, Yong Jae
    • Journal of The Korean Association For Science Education / v.37 no.2 / pp.359-370 / 2017
  • The purpose of this study is to explore the changes caused by using smart technology in school science inquiry. We investigated 12 elementary school teachers' perceptions using an open-ended questionnaire, group discussions, classroom discussions, and participant interviews. The results indicate that introducing technology into classroom inquiry can open up various possibilities but can also impose additional burdens. First, teachers explained that smart technology can expand the opportunities to observe natural phenomena such as constellations and the changing phases of the moon. However, some teachers noted that learning how to use new devices sometimes disrupts students' concentration on the inquiry process itself. Second, teachers introduced digital measurement with smartphone sensors into inquiry activities. They said that digital measurement is useful because it reduces errors and simplifies measurement, while other teachers countered that using new devices in classroom inquiry can introduce additional variables and blur the students' focus of inquiry. Communication about the inquiry process can also be improved by using digital media; however, some teachers emphasized that they always discussed both the purpose of using SNS and online etiquette with their students beforehand. Based on these results, we discuss the need for further analysis of the various ways of using digital devices depending on teachers' perceptions, the types of digital competency required in science inquiry using smart technology, and the features of the norms shaped in inquiry activities using smart technology.

Methodological Comparison of the Quantification of Total Carbon and Organic Carbon in Marine Sediment (해양 퇴적물내 총탄소 및 유기탄소의 분석기법 고찰)

  • Kim, Kyeong-Hong;Son, Seung-Kyu;Son, Ju-Won;Ju, Se-Jong
    • Journal of the Korean Society for Marine Environment & Energy / v.9 no.4 / pp.235-242 / 2006
  • Precise estimation of total and organic carbon contents in sediments is fundamental to understanding the benthic environment. To test the precision and accuracy of the CHN analyzer and of the procedure for quantifying total and organic carbon contents (using in-situ acidification with sulfurous acid, H2SO3), reference materials such as acetanilide (C8H9NO), sulfanilamide (C6H8N2O2S), and BCSS-1 (a standard estuarine sediment) were used. The results indicate that the CHN analyzer quantifies carbon and nitrogen content with high accuracy (percent error = 3.29%) and precision (relative standard deviation = 1.26%). Additionally, we compared carbon values analyzed with the CHN analyzer and with a coulometric carbon analyzer. Total carbon contents measured by the two instruments were highly linearly correlated (R² = 0.9993, n = 84, p < 0.0001) and showed no significant differences (paired t-test, p = 0.0003). The organic carbon contents from the two instruments showed similar results, with a significant linear relationship (R² = 0.8867, n = 84, p < 0.0001) and no significant differences (paired t-test, p < 0.0001). Although organic carbon contents can be overestimated for some sediment types with high inorganic carbon contents (such as calcareous ooze) due to procedural and analytical errors, analysis of organic carbon in sediments using the CHN analyzer and the current procedure appears to provide the best estimates. Therefore, we recommend this method for measuring the carbon content of typical sediment samples and consider it one of the best procedures for routine analysis of total and organic carbon.
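
Percent error and relative standard deviation, the accuracy and precision measures cited above, are simple to compute from replicate analyses of a reference material; a minimal sketch with hypothetical replicate values:

```python
import numpy as np

def percent_error_and_rsd(measured, certified):
    """Percent error of the mean against a certified value (accuracy) and
    relative standard deviation of the replicates (precision), both in %."""
    x = np.asarray(measured, dtype=float)
    percent_error = abs(x.mean() - certified) / certified * 100.0
    rsd = x.std(ddof=1) / x.mean() * 100.0
    return percent_error, rsd

# Hypothetical replicate %C measurements of an acetanilide standard
replicates = [71.2, 70.8, 71.5, 70.9, 71.1]
certified_carbon = 71.09   # theoretical carbon content of acetanilide, %C
print(percent_error_and_rsd(replicates, certified_carbon))
```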


Repeatability and Reproducibility in Effective Porosity Measurements of Rock Samples (암석시험편 유효공극률 측정의 반복성과 재현성)

  • Lee, Tae Jong;Lee, Sang Kyu
    • Geophysics and Geophysical Exploration / v.15 no.4 / pp.209-218 / 2012
  • Repeatability and reproducibility in solid weight and effective porosity measurements were examined using 8 core samples with different diameters, lengths, rock types, and effective porosities. The effect of temperature on the effective porosity measurement is also discussed. The effective porosity of each sample was measured 7 times using the vacuum saturation method, with a vacuum pressure of 1 torr and a vacuum time of 80 minutes. First, the effective porosity of each sample was measured one at a time to provide a reference value. Then, as a reproducibility check, measurements were performed with 2, 4, and 8 samples saturated simultaneously. Finally, each sample was measured 3 times repeatedly as a repeatability check. The average deviation of solid weight from the reference set was 0.00 g/cm³, indicating perfect repeatability and reproducibility. For effective porosity, the average deviations were less than 0.07% and 0.05% in the repeatability and reproducibility test sets, respectively, also in good agreement. Most of the porosities measured in the reproducibility tests lie within the deviation range of the repeatability test sets. Thus, simultaneous vacuum saturation of several samples has little impact on the effective porosity measurement when a high vacuum of 1 torr is used. Air temperature can cause errors in the submerged weight reading, and hence in the effective porosity, because it is closely related to the temperature, density, and buoyancy of the water. Consequently, accurate laboratory measurement of effective porosity requires either keeping the air or water temperature constant during the experiment or applying a temperature correction based on other information.
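
In the vacuum saturation method the effective porosity follows from the dry, saturated, and submerged weights of the sample; a minimal sketch with hypothetical weights (the temperature/buoyancy correction discussed above is omitted):

```python
def effective_porosity(dry_g, saturated_g, submerged_g):
    """Effective porosity from the vacuum saturation (buoyancy) method:
    pore volume  ~ (saturated - dry) mass of absorbed water,
    bulk volume  ~ (saturated - submerged) mass of displaced water,
    so the water density cancels in the ratio."""
    return (saturated_g - dry_g) / (saturated_g - submerged_g)

# Hypothetical core sample weights in grams (illustrative values only)
print(effective_porosity(dry_g=520.4, saturated_g=538.9, submerged_g=330.1))  # ~0.089
```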

An Exploratory Study on the Competition Patterns Between Internet Sites in Korea (한국 인터넷사이트들의 산업별 경쟁유형에 대한 탐색적 연구)

  • Park, Yoonseo;Kim, Yongsik
    • Asia Marketing Journal / v.12 no.4 / pp.79-111 / 2011
  • The digital economy has grown rapidly, and the new business area called 'Internet business' has expanded dramatically over time. In Internet business, however, the market shares of individual companies fluctuate considerably. Marketing managers who operate Internet sites therefore closely observe the competition structure of the Internet business market and carefully analyze competitors' behavior in order to achieve their business goals. Newly created Internet businesses may differ from offline ones in management style because their business circumstances differ substantially from those of existing offline businesses. More research is therefore needed on what characterizes Internet business and how the management style of Internet business companies should change. Most marketing literature on Internet business has focused on individual business markets, in particular on Internet portal sites and Internet shopping mall sites, which are the most common forms of Internet business. This study, in contrast, looks at the entire Internet business industry to understand the competitive circumstances of the online market. This approach makes it possible not only to take a broader view of the overall e-business industry but also to understand the differences in competition structures among Internet business markets. We used time-series data on consumers' Internet connection rates as the basic data for identifying competition patterns in the Internet business markets. Specifically, the data were obtained from 'Fian', one of the Internet ranking sites. The ranking data are based on the web surfing records of a pre-selected sample group, where double counting of page views is controlled by a same-IP check. The ranking site offers several data sets that are useful for comparing and analyzing competing sites. The Fian site divides Internet business into 34 areas and reports the daily market shares of the top five sites in each category. We collected daily market share data for each area from April 22, 2008 to August 5, 2008; some data errors were found, and data for 30 business areas were finally used after data cleaning. This study performed several empirical analyses focusing on the market shares of each site to understand the competition among sites in Korean Internet business. To analyze business fields with similar competitive structures more precisely, we applied cluster analysis to the data. The results are as follows. First, the leading sites in each area were classified into three groups based on the means and standard deviations of their daily market shares. The first group comprises sites with the lowest market shares, which add convenience for consumers by offering Internet sites as complementary services to existing offline services. The second group comprises sites with a medium level of market share, whose users are limited to a specific small group. The third group comprises sites with the highest market shares, which usually require online registration in advance and make switching to another site difficult.
Second, we analyzed the second-place sites in each business area, as they indicate the competitive power of the strongest challenger to the leading site. The second-place sites were classified into four groups based on the means and standard deviations of their daily market shares: sites showing consistent inferiority to the leading sites; sites with relatively high volatility and a medium level of share; sites with relatively low volatility and a medium level of share; and sites with relatively low volatility and a high level of share whose gaps from the leading sites are not large. Except for the 'web agency' area, these second-place sites show relatively stable shares, with standard deviations below 0.1 points. Third, we classified the types of relative strength between the leading sites and the second-place sites by applying cluster analysis to the gaps in market share between the two. These were also classified into four groups: sites with the lowest gaps despite varying standard deviations; sites with below-average gaps; sites with above-average gaps; and sites with relatively high gaps and low volatility. We also found that while areas with relatively large gaps usually have small standard deviations, areas with very small differences between the first- and second-place sites show a wider range of standard deviations. The practical and theoretical implications of this study are as follows. First, the results may provide current market participants with useful information for understanding the competitive circumstances of the market and for building effective business strategies, and may help potential new entrants find a business area and set up successful competitive strategies. Second, the study may help Internet marketing researchers take a macro view of the overall Internet market, making it possible to begin new studies of the overall market beyond individual market studies.
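
The grouping described above rests on clustering sites by the mean and standard deviation of their daily market shares. The sketch below illustrates that idea with synthetic share series; k-means and the feature choice are assumptions for illustration, since the abstract does not state the exact clustering settings:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic daily market-share series for the leading site of four
# hypothetical business areas (values are illustrative only)
rng = np.random.default_rng(0)
area_params = [(0.20, 0.02), (0.50, 0.05), (0.80, 0.01), (0.45, 0.08)]
daily_shares = {f"area_{i}": rng.normal(mean, std, size=100)
                for i, (mean, std) in enumerate(area_params)}

# Features: mean and standard deviation of each area's daily shares
features = np.array([[series.mean(), series.std()] for series in daily_shares.values()])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for area, label in zip(daily_shares, labels):
    print(area, label)
```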


A Study on the Impact of Artificial Intelligence on Decision Making : Focusing on Human-AI Collaboration and Decision-Maker's Personality Trait (인공지능이 의사결정에 미치는 영향에 관한 연구 : 인간과 인공지능의 협업 및 의사결정자의 성격 특성을 중심으로)

  • Lee, JeongSeon;Suh, Bomil;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.231-252 / 2021
  • Artificial intelligence (AI) is the key technology expected to change the future the most. It affects industry as a whole and daily life in various ways. As data availability increases, artificial intelligence finds optimal solutions and makes inferences and predictions through self-learning. Research on, and investment in, automation that discovers and solves problems on its own continue. AI-based automation offers benefits such as cost reduction and the minimization of human intervention and of differences in human capability, but there are also side effects, such as limits on the artificial intelligence's autonomy and erroneous results due to algorithmic bias. In the labor market, it raises fears of job replacement. Prior studies on the use of artificial intelligence have shown that individuals do not necessarily use the information (or advice) it provides. People are more sensitive to algorithm errors than to human errors, so they avoid algorithms after seeing them err, a phenomenon called 'algorithm aversion'. Recently, artificial intelligence has begun to be understood from the perspective of augmenting human intelligence, and interest has shifted toward human-AI collaboration rather than AI operating alone without humans. A study of 1,500 companies in various industries found that human-AI collaboration outperformed AI alone. In medicine, pathologist-deep learning collaboration reduced the pathologists' cancer diagnosis error rate by 85%. Leading AI companies such as IBM and Microsoft are starting to position AI as augmented intelligence. Human-AI collaboration is emphasized in decision-making because artificial intelligence is superior in information-based analysis while intuition is a uniquely human capability, so collaboration can lead to optimal decisions. In an environment of ever faster change and increasing uncertainty, the need for artificial intelligence in decision-making will grow, and active discussion is expected on approaches that use artificial intelligence for rational decision-making. This study investigates the impact of artificial intelligence on decision-making, focusing on human-AI collaboration and the interaction between the decision-maker's personality traits and the advisor type. Advisors were classified into three types: human, artificial intelligence, and human-AI collaboration. We investigated the perceived usefulness of advice, the utilization of advice in decision-making, and whether the decision-maker's personality traits are influencing factors. Three hundred eleven adult male and female participants performed a task predicting the age of faces in photos. The results showed that the advisor type does not directly affect the utilization of advice: decision-makers utilized advice only when they believed it could improve prediction performance. In the case of human-AI collaboration, decision-makers rated the perceived usefulness of advice higher regardless of their personality traits, and the advice was utilized more actively. When the advisor was artificial intelligence alone, decision-makers who scored high in conscientiousness, high in extroversion, or low in neuroticism rated the perceived usefulness of the advice higher and therefore utilized it actively. This study has academic significance in that it focuses on human-AI collaboration, an area of rapidly growing interest regarding the roles of artificial intelligence.
It expands the relevant research area by considering the role of artificial intelligence as an advisor in decision-making and judgment research, and, in terms of practical significance, it suggests points that companies should consider in order to enhance their AI capabilities. To improve the effectiveness of AI-based systems, companies must not only introduce high-performance systems but also employ people who properly understand the digital information presented by AI and can add non-digital information when making decisions. Moreover, to increase the utilization of AI-based systems, task-oriented competencies such as analytical skills and information technology capabilities are important. In addition, greater performance is expected if employees' personality traits are taken into account.

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.107-118 / 2016
  • The advent of 5G mobile communications, expected around 2020, will provide many services such as the Internet of Things (IoT) and vehicle-to-infrastructure/vehicle/nomadic (V2X) communication. There are many requirements for realizing these services: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate is very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for cases such as sporting events or emergencies. The second scenario is support for e-Health, car reliability, and the like; the third scenario relates to VR games, with delay sensitivity and real-time requirements. Recently, these groups have been reaching agreement on the requirements for such scenarios and the target levels. Various techniques are being studied to satisfy such requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the control-plane signaling from the data-plane packets. One of the best examples of a service requiring low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very quickly, messages to be delivered in an emergency must be transported in a very short time. This is a typical example requiring high delay sensitivity, and 5G has to support the high reliability and delay-sensitivity requirements of V2X for traffic control. For these reasons, V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication methods applicable to roads and vehicles and refers to the connected or networked vehicle. V2X can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure; V2I), between a vehicle and another vehicle (vehicle-to-vehicle; V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further types will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant here. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because the central controller must communicate with many nodes and provide the processing power for them. Therefore, for emergency V2X communications, delay-related control functions require a supporting tree structure. In such a scenario, the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the required delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN domain for processing the information is needed. This study examined the SDN architecture in light of the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the information delivered to the car for neighboring-vehicle support was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and the maximum vehicle speed was taken as 30-200 km/h in order to examine the network architecture that minimizes the delay.
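
One quantity such a simulation depends on is how briefly a vehicle remains inside a single small cell, which bounds the time available for handover and message delivery. A minimal sketch using the radius and speed ranges stated above (the straight-line, full-diameter crossing is our simplifying assumption):

```python
def cell_dwell_time_s(cell_radius_m, speed_kmh):
    """Upper-bound time a vehicle stays inside one small cell, assuming it
    crosses the full cell diameter in a straight line at constant speed."""
    speed_ms = speed_kmh / 3.6
    return 2.0 * cell_radius_m / speed_ms

# Radius 50-100 m and speed 30-200 km/h, as assumed in the simulation
for radius_m in (50, 100):
    for speed_kmh in (30, 200):
        dwell = cell_dwell_time_s(radius_m, speed_kmh)
        print(f"radius={radius_m} m, speed={speed_kmh} km/h -> dwell ~ {dwell:.1f} s")
```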