• Title/Summary/Keyword: computation

Search Results: 8,007

Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arXiv (챗GPT 등장 이후 인공지능 환각 연구의 문헌 검토: 아카이브(arXiv)의 논문을 중심으로)

  • Park, Dae-Min;Lee, Han-Jong
    • Informatization Policy
    • /
    • v.31 no.2
    • /
    • pp.3-38
    • /
    • 2024
  • Hallucination is a significant barrier to the utilization of large-scale language models and multimodal models. In this study, we collected 654 computer science papers with "hallucination" in the abstract from arXiv, covering December 2022 to January 2024 following the advent of ChatGPT, and conducted frequency analysis, knowledge network analysis, and a literature review to explore the latest trends in hallucination research. The results showed that research was active in the fields of "Computation and Language," "Artificial Intelligence," "Computer Vision and Pattern Recognition," and "Machine Learning." We then analyzed the research trends in these four major fields, focusing on the main authors and dividing the work into data, hallucination detection, and hallucination mitigation. The main research trends included hallucination mitigation through supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), reasoning enhancement via chain-of-thought (CoT) prompting, and growing interest in hallucination mitigation within multimodal AI. This study provides insights into the latest developments in hallucination research through a technology-oriented literature review and is expected to support subsequent research in engineering as well as the humanities and social sciences by clarifying the latest trends in hallucination research.
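
As a rough illustration of the corpus-building step described above (querying arXiv for computer-science abstracts containing "hallucination" and running a simple frequency analysis), the following Python sketch uses the public arXiv API; the query string, category filter, and result cap are assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch (not the authors' pipeline): pull papers whose abstracts
# mention "hallucination" from the public arXiv API and count simple term
# frequencies, roughly mirroring the corpus-building and frequency-analysis step.
from collections import Counter
import re
import feedparser  # parses the Atom feed returned by the arXiv API

API = ("http://export.arxiv.org/api/query?"
       "search_query=abs:hallucination+AND+cat:cs.CL"   # assumed query
       "&start=0&max_results=100")

feed = feedparser.parse(API)
counts = Counter()
for entry in feed.entries:
    tokens = re.findall(r"[a-z]+", entry.summary.lower())
    counts.update(tokens)

print(counts.most_common(20))  # crude frequency analysis of abstract terms
```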

Theoretical analysis of erosion degradation and safety assessment of submarine shield tunnel segment based on ion erosion

  • Xiaohan Zhou;Yangyang Yang;Zhongping Yang;Sijin Liu;Hao Wang;Weifeng Zhou
    • Geomechanics and Engineering
    • /
    • v.37 no.6
    • /
    • pp.599-614
    • /
    • 2024
  • To evaluate the safety status of deteriorated segments in a submarine shield tunnel during its service life, a seepage model was established based on a cross-sea shield tunnel project. This model was used to study the migration patterns of erosive ions within the shield segments. Based on these patterns, the degree of deterioration of the segments was determined. Using the derived analytical solution, the internal forces within the segments were calculated. Lastly, by applying the formula for calculating safety factors, the variation trends in the safety factors of segments with different degrees of deterioration were obtained. The findings demonstrate that corrosive seawater seeps continuously from the outside to the inside of the tunnel. When there is leakage at a joint, the nearby seepage field becomes locally concentrated, and its depth and scope increase significantly. Chloride ion content decreases gradually with increasing distance from the outer surface of the tunnel, and the penetration of erosive ions into the segment is facilitated by the presence of water pressure. Across the full ring of the segment lining, ion content follows the order vault < haunch < springing. The difference in the segments' rate of increase in chloride ion content decreases as service time increases. Based on the analytical solution, the segment's safety factor drops more when the joint leaks than when it is intact, and the change rate between the two states exhibits a general downward trend. The safety factor shows a similar pattern at different water depths and, at the same segment position, decreases continuously as the water depth increases. The safety factor's change curve is "spoon-shaped", passing through three phases: sudden drop, rise, and stabilization. The analytical solution resolves the poor applicability of indicators used in earlier studies, since it only requires the loss of effective bearing thickness of the segment lining to calculate the safety factor of any cross-section of the shield tunnel. The analytical solution's results are, however, conservative and include a safety margin. The process of establishing the evaluation model indicates that the safety status of a secondary lining made of molded concrete can also be assessed using the analytical solution. The method is important for the safe operation of the tunnel and the protection of people and property, and has a wide range of applications.
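
For readers unfamiliar with chloride ingress modelling, the sketch below uses the textbook error-function solution of Fick's second law to show how chloride content falls with depth from the outer surface; the surface concentration and diffusion coefficient are assumed values, and this is only an illustration, not the analytical solution derived in the paper.

```python
# Minimal illustration of chloride ingress with the textbook Fick's second-law
# solution, NOT the paper's analytical solution: it only shows how chloride
# content decreases with distance from the outer segment surface over time.
import numpy as np
from scipy.special import erf

C_s = 0.60          # assumed surface chloride content (% of binder mass)
D = 5e-12           # assumed apparent diffusion coefficient (m^2/s)
years = 50
t = years * 365.25 * 24 * 3600   # service time in seconds

x = np.linspace(0.0, 0.10, 6)    # depth from the outer surface (m)
C = C_s * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

for depth, c in zip(x, C):
    print(f"depth {depth*100:4.1f} cm -> chloride {c:.3f} %")
```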

Histological Validation of Cardiovascular Magnetic Resonance T1 Mapping for Assessing the Evolution of Myocardial Injury in Myocardial Infarction: An Experimental Study

  • Lu Zhang;Zhi-gang Yang;Huayan Xu;Meng-xi Yang;Rong Xu;Lin Chen;Ran Sun;Tianyu Miao;Jichun Zhao;Xiaoyue Zhou;Chuan Fu;Yingkun Guo
    • Korean Journal of Radiology
    • /
    • v.21 no.12
    • /
    • pp.1294-1304
    • /
    • 2020
  • Objective: To determine whether T1 mapping can monitor the dynamic changes of injury in myocardial infarction (MI) and be histologically validated. Materials and Methods: In 22 pigs, MI was induced by ligating the left anterior descending artery, and the animals underwent serial cardiovascular magnetic resonance examinations with modified Look-Locker inversion T1 mapping and extracellular volume (ECV) computation in the acute (within 24 hours, n = 22), subacute (7 days, n = 13), and chronic (3 months, n = 7) phases of MI. Masson's trichrome staining was performed for histological ECV calculation. Myocardial native T1 and ECV were obtained by region-of-interest measurement in infarcted, peri-infarct, and remote myocardium. Results: Native T1 and ECV in peri-infarct myocardium differed from remote myocardium in the acute (1181 ± 62 ms vs. 1113 ± 64 ms, p = 0.002; 24 ± 4% vs. 19 ± 4%, p = 0.031) and subacute phases (1264 ± 41 ms vs. 1171 ± 56 ms, p < 0.001; 27 ± 4% vs. 22 ± 2%, p = 0.009) but not in the chronic phase (1157 ± 57 ms vs. 1120 ± 54 ms, p = 0.934; 23 ± 2% vs. 20 ± 1%, p = 0.109). From acute to chronic MI, infarcted native T1 peaked in the subacute phase (1275 ± 63 ms vs. 1637 ± 123 ms vs. 1471 ± 98 ms, p < 0.001), while ECV progressively increased with time (35 ± 7% vs. 46 ± 6% vs. 52 ± 4%, p < 0.001). Native T1 correlated well with histological findings (R2 = 0.65 to 0.89, all p < 0.001), as did ECV (R2 = 0.73 to 0.94, all p < 0.001). Conclusion: T1 mapping allows quantitative assessment of injury in MI and noninvasive monitoring of the evolution of tissue injury, which correlates well with histological findings.
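
ECV values of the kind reported above are typically computed from native and post-contrast T1 together with hematocrit using the standard formula ECV = (1 − Hct) · ΔR1_myocardium / ΔR1_blood, where ΔR1 = 1/T1_post − 1/T1_pre. The sketch below applies that standard formula to hypothetical T1 values; it is not taken from the study's data or its exact processing chain.

```python
# Standard contrast-enhanced ECV formula (illustrative values, not the study's
# measurements): ECV = (1 - Hct) * (dR1_myocardium / dR1_blood),
# where dR1 = 1/T1_post - 1/T1_pre.
def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hematocrit):
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return (1.0 - hematocrit) * d_r1_myo / d_r1_blood

# Hypothetical native/post-contrast T1 values in milliseconds and Hct = 0.40.
print(f"ECV = {ecv(1275, 420, 1650, 300, 0.40):.1%}")
```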

Usability Evaluation Criteria Development and Application for Map-Based Data Visualization (지도 기반 데이터 시각화 플랫폼 사용성 평가 기준 개발 및 적용 연구)

  • Sungha Moon;Hyunsoo Yoon;Seungwon Yang;Sanghee Oh
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.58 no.2
    • /
    • pp.225-249
    • /
    • 2024
  • The purpose of this study is to develop an evaluation tool for map-based data visualization platforms and to conduct heuristic usability evaluations of existing platforms representing inter-regional information. We compared and analyzed the usability evaluation criteria for map-based platforms from previous studies along with Nielsen's (1994) ten usability evaluation principles. We proposed nine evaluation criteria: (1) visibility, (2) representation of the real world, (3) consistency and standards, (4) user control and friendliness, (5) flexibility, (6) design, (7) compatibility, (8) error prevention and handling, and (9) help provision and documentation. Additionally, to confirm the effectiveness of the proposed criteria, four experts were invited to evaluate five domestic and international map-based data visualization platforms. As a result, the experts were able to rank the usability of the five platforms using the proposed criteria, producing quantified scores along with subjective opinions. The results of this study are expected to serve as foundational material for the future development and evaluation of map-based visualization platforms.
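
As a purely hypothetical illustration of how quantified expert scores under the nine criteria could be aggregated into a platform ranking, the sketch below averages invented ratings; the platforms, scores, and equal weighting are assumptions, not the study's data or scoring scheme.

```python
# Hypothetical aggregation of expert ratings (1-5) under the nine criteria;
# names and numbers are made up for illustration only.
import statistics

criteria = ["visibility", "real-world match", "consistency", "user control",
            "flexibility", "design", "compatibility", "error handling", "help"]

# scores[platform][criterion] = ratings from the four experts
scores = {
    "Platform A": {c: [4, 5, 4, 4] for c in criteria},
    "Platform B": {c: [3, 3, 4, 3] for c in criteria},
}

totals = {p: sum(statistics.mean(v) for v in per_c.values())
          for p, per_c in scores.items()}
for rank, (platform, total) in enumerate(
        sorted(totals.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {platform}: {total:.1f} / {5 * len(criteria)}")
```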

Effects of Environmental Conditions on Vegetation Indices from Multispectral Images: A Review

  • Md Asrakul Haque;Md Nasim Reza;Mohammod Ali;Md Rejaul Karim;Shahriar Ahmed;Kyung-Do Lee;Young Ho Khang;Sun-Ok Chung
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.319-341
    • /
    • 2024
  • The utilization of multispectral imaging systems (MIS) in remote sensing has become crucial for large-scale agricultural operations, particularly for diagnosing plant health, monitoring crop growth, and estimating plant phenotypic traits through vegetation indices (VIs). However, environmental factors can significantly affect the accuracy of multispectral reflectance data, leading to potential errors in VIs and crop status assessments. This paper reviewed the complex interactions between environmental conditions and multispectral sensors, emphasizing the importance of accounting for these factors to enhance the reliability of reflectance data in agricultural applications. An overview of the fundamentals of multispectral sensors and the operational principles behind vegetation index (VI) computation is provided. The review highlights the impact of environmental conditions, particularly solar zenith angle (SZA), on reflectance data quality. Higher SZA values increase cloud optical thickness and droplet concentration by 40-70%, affecting reflectance in the red (-0.01 to 0.02) and near-infrared (NIR) bands (-0.03 to 0.06), which is crucial for VI accuracy. An SZA of 45° is optimal for data collection, while atmospheric conditions, such as water vapor and aerosols, greatly influence reflectance data, affecting forest biomass estimates and agricultural assessments. During the COVID-19 lockdown, reduced atmospheric interference improved the consistency of satellite image reflectance. The NIR/red-edge ratio and the water index emerged as the most stable indices, providing consistent measurements across different lighting conditions. Additionally, a simulated environment demonstrated that MIS surface reflectance can vary by 10-20% with changes in aerosol optical thickness, by 15-30% with water vapor levels, and by up to 25% in NIR reflectance due to high wind speeds. Seasonal factors such as temperature and humidity can cause up to a 15% change, highlighting the complexity of environmental impacts on remote sensing data. This review underscores the importance of carefully managing environmental factors to maintain the integrity of VI calculations. Clarifying the relationship between environmental variables and multispectral sensors offers valuable insights for optimizing the accuracy and reliability of remote sensing data in various agricultural applications.
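
The review centers on vegetation indices computed from band reflectances; the short sketch below evaluates the standard NDVI and a NIR/red-edge ratio on synthetic reflectance values to make the formulas concrete. The numbers are invented and the band set is an assumption, not tied to any sensor discussed in the paper.

```python
# Minimal vegetation-index sketch over synthetic band reflectances; the
# formulas (NDVI, NIR/red-edge ratio) are standard, the values are made up.
import numpy as np

red      = np.array([0.05, 0.08, 0.12])   # red band reflectance
red_edge = np.array([0.20, 0.25, 0.30])   # red-edge band reflectance
nir      = np.array([0.45, 0.50, 0.40])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red)          # NDVI = (NIR - Red) / (NIR + Red)
nir_red_edge_ratio = nir / red_edge       # ratio reported as relatively stable

print("NDVI:", np.round(ndvi, 3))
print("NIR/red-edge:", np.round(nir_red_edge_ratio, 3))
```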

Object Detection Performance Analysis between On-GPU and On-Board Analysis for Military Domain Images

  • Du-Hwan Hur;Dae-Hyeon Park;Deok-Woong Kim;Jae-Yong Baek;Jun-Hyeong Bak;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.8
    • /
    • pp.157-164
    • /
    • 2024
  • In this paper, we discuss the feasibility of deploying a deep learning-based detector on a resource-limited board. Although many studies evaluate detectors on machines with high-performance GPUs, evaluation on boards with limited computation resources is still insufficient. Therefore, in this work, we implement deep learning detectors and deploy them on a compact board by parsing and optimizing the detector. To assess the performance of deep learning-based detectors under limited resources, we monitor several detectors with different H/W resources. On the COCO detection dataset, we compare and analyze the evaluation results of the on-board and on-GPU detection models in terms of several metrics, including mAP, power consumption, and execution speed (FPS). To demonstrate the effect of applying our detector in the military domain, we evaluate the models on our own dataset of thermal images reflecting flight battle scenarios. As a result, we investigate the strengths of deep learning-based on-board detectors and show that deep learning-based vision models can contribute to flight battle scenarios.
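
A comparison like the one described usually relies on a timing harness of roughly the following shape; the tiny stand-in network, warm-up count, and input size below are assumptions for illustration, not the detectors or measurement protocol used in the paper.

```python
# Rough FPS-measurement harness for comparing on-GPU vs. CPU/on-board
# inference; the placeholder network below is NOT a real detector.
import time
import torch

detector = torch.nn.Sequential(               # stand-in model for illustration
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 16, 3, padding=1),
)

def measure_fps(model, device, n_frames=50, size=(1, 3, 512, 512)):
    model = model.to(device).eval()
    frame = torch.randn(size, device=device)  # synthetic input frame
    with torch.no_grad():
        for _ in range(5):                    # warm-up iterations
            model(frame)
        if device == "cuda":
            torch.cuda.synchronize()          # flush queued warm-up kernels
        start = time.perf_counter()
        for _ in range(n_frames):
            model(frame)
        if device == "cuda":
            torch.cuda.synchronize()          # wait for timed kernels to finish
    return n_frames / (time.perf_counter() - start)

print("CPU FPS :", round(measure_fps(detector, "cpu"), 1))
if torch.cuda.is_available():
    print("GPU FPS :", round(measure_fps(detector, "cuda"), 1))
```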

The Comparison of Basic Science Research Capacity of OECD Countries

  • Lim, Yang-Taek;Song, Choong-Han
    • Journal of Technology Innovation
    • /
    • v.11 no.1
    • /
    • pp.147-176
    • /
    • 2003
  • This paper presents a new measurement technique to derive the level of a BSRC (Basic Science and Research Capacity) index by use of factor analysis, extended with the assumption of a standard normal probability distribution of the selected explanatory variables. The new measurement method is used to forecast the gap between Korea's BSRC level and those of major OECD countries in terms of time lag and to make an international comparison over the period 1981∼1999, based on the assumption that each country's BSRC progress function takes the form of a logistic curve. The US BSRC index is estimated to be 0.9878 in 1981, 0.9996 in 1990, and 0.99991 in 1999, taking first place. The US BSRC level has been consistently the highest among the 16 selected countries, followed by Japan, Germany, France, and the United Kingdom, in that order. Korea's BSRC is estimated to be 0.2293 in 1981, the lowest among the 16 OECD countries. However, Korea's BSRC indices are estimated to have increased to 0.3216 (in 1990) and 0.44652 (in 1999), respectively, taking 10th place. Meanwhile, Korea's BSRC level in 1999 (0.44652) is estimated to reach those of the US and Japan in 2233 and 2101, respectively. This means that Korea falls 234 years behind the US and 102 years behind Japan. Korea is also estimated to lag 34 years behind Germany, 16 years behind France and the UK, 15 years behind Sweden, 11 years behind Canada, 7 years behind Finland, and 5 years behind the Netherlands. For the period 1981∼1999, the BSRC development speed of the US is estimated to be 0.29700, the highest among the selected OECD countries, followed by Japan (0.12800), Korea (0.04443), and Germany (0.04029). The US BSRC development speed (0.2970) is estimated to be 2.3 times higher than that of Japan (0.1280) and 6.7 times higher than that of Korea. Germany's BSRC development speed (0.04029) is estimated to be the fastest in Europe, but it is 7.4 times slower than that of the US. The estimated BSRC development speeds of Belgium, Finland, Italy, Denmark, and the UK lie between 0.01 and 0.02, which is very slow. In particular, the BSRC development speed of Spain is estimated to be minus 0.0065, staying at almost the same BSRC level over time (1981∼1999). Since Korea shows a BSRC development speed much slower than those of the US and Japan but relatively faster than those of other countries, the gaps in BSRC level between Korea and the other countries may narrow considerably, and Korea may even surpass several countries in BSRC level as time goes by. Korea's BSRC level had ranked 10th until 1993. However, it is estimated to reach 6th place in 2010 by catching up with the UK, Sweden, Finland, and the Netherlands, and 4th place in 2020 by catching up with France and Canada. The empirical results are consistent with OECD (2001a)'s computation that Korea had the highest R&D expenditure growth during 1991∼1999 among all OECD countries, and that the value added of ICT industries as a share of total business-sector value added is 12% in Korea but only 8% in Japan. OECD (2001b) also observed that Korea, together with the US, Sweden, and Finland, is already one of the four most knowledge-based countries. Here, the rank of a knowledge-based country was measured by investment in knowledge, defined as public and private spending on higher education, expenditure on R&D, and investment in software.
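
To make the logistic-progress and time-lag idea concrete, the sketch below fits a logistic curve to the three Korean BSRC index values quoted above and inverts it at the US 1999 level. The functional form, starting values, and resulting year are illustrative assumptions and will not reproduce the paper's estimates, which rest on the extended factor-analysis procedure.

```python
# Illustrative sketch (made-up coefficients, not the paper's estimates) of the
# logistic-progress idea: fit BSRC(t) = 1 / (1 + exp(-k (t - t0))) and read off
# when one country reaches another's current level (the "time lag").
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0):
    return 1.0 / (1.0 + np.exp(-k * (t - t0)))

years = np.array([1981.0, 1990.0, 1999.0])
korea = np.array([0.2293, 0.3216, 0.4465])       # BSRC index values from the abstract

(k, t0), _ = curve_fit(logistic, years, korea, p0=[0.05, 2000.0])

us_1999 = 0.99991                                 # US level in 1999 (from the abstract)
t_catch_up = t0 - np.log(1.0 / us_1999 - 1.0) / k # invert the fitted logistic
print(f"k={k:.4f}, t0={t0:.1f}, Korea reaches the 1999 US level around {t_catch_up:.0f}")
```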


Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.14 no.1
    • /
    • pp.1-12
    • /
    • 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images with different colors. We then combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms that consist of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation functionality of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, where 1,024 images were used for each color. The average time for generating a feature vector is 0.509 ms for a 150 × 113 resolution image. After the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than when the algorithm is executed on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to input images of general objects.
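
A hue-saturation histogram feature of the kind described can be sketched as follows with OpenCV on a synthetic frame; the bin counts and normalization are assumptions rather than the paper's exact configuration, and the saturation weighting and GPU comparison steps are omitted.

```python
# Sketch of a hue-saturation histogram feature vector built on a synthetic
# image; bin counts and normalization are assumptions, not the paper's setup.
import cv2
import numpy as np

bgr = np.random.randint(0, 256, (113, 150, 3), dtype=np.uint8)  # stand-in frame
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# 2D histogram over hue (18 bins, 0-180 in OpenCV) and saturation (16 bins).
hist = cv2.calcHist([hsv], [0, 1], None, [18, 16], [0, 180, 0, 256])
feature = cv2.normalize(hist, None).flatten()   # feature vector for matching

print(feature.shape)   # (288,) -> compared against per-color reference vectors
```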

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.159-185
    • /
    • 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends for suggestions, search for information on the Web, or seek advice from travel agents; however, those options have their limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet may be vast, but filtering and reading it can consume a great deal of time. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet, exploring the wisdom of the crowd through the vast number of images shared by people on social media sites. Furthermore, trip planning problems are usually formulated as "Tourist Trip Design Problems" and solved using various search algorithms with heuristics. Recommendation systems based on various techniques have been built to cope with the overwhelming tourism information available on the internet. Prediction models of recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace. In the shared workspace, the crowd can recommend as many POIs to as many requesters as they can, and they can also vote on POIs recommended by other people when they find them interesting. In CYTRIP, anyone can make a contribution by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input and builds a multi-day trip itinerary that takes into account the user's preferences, time constraints, and locations. The input then becomes a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3). A sequence of actions formulated in a domain file is used to achieve the goals of the planning problem, namely the recommended POIs to be visited. The multi-day trip planning problem is highly constrained: sometimes it is not feasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that could leave the other goals without a solution, CYTRIP selects a set of feasible POIs prior to the planning process. The planning problem is created for the selected POIs and fed into the planner, and the solution returned by the planner is parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built using PHP on the CodeIgniter Web Framework. To evaluate the proposed system, an online experiment was conducted. The results show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary tailored to users' preferences and bound by their constraints, such as location or time constraints. The contributors also found CYTRIP to be a useful tool for collecting POIs from the crowd and planning a multi-day trip.
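
As a toy illustration of the pre-planning step that keeps only a feasible subset of recommended POIs, the sketch below greedily selects the highest-voted POIs that fit a time budget. CYTRIP itself formulates the problem in PDDL3 and solves it with a planner; the POIs, votes, and durations here are invented.

```python
# Toy pre-selection in the spirit of "pick a feasible POI subset before
# planning": greedily keep the highest-voted POIs that fit the available time.
pois = [            # (name, crowd votes, visit duration in hours) - invented
    ("Museum", 12, 2.0),
    ("Old Town", 9, 3.0),
    ("Harbor", 7, 1.5),
    ("Tower", 5, 1.0),
    ("Park", 3, 2.5),
]
time_budget = 6.0   # hours available in the day

selected, used = [], 0.0
for name, votes, hours in sorted(pois, key=lambda p: p[1], reverse=True):
    if used + hours <= time_budget:
        selected.append(name)
        used += hours

print(selected, f"({used:.1f} h of {time_budget:.1f} h)")
```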

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching of nanoscale devices has been achieving ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisting. To this end, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges, such as tight limits on sidewall bowing and control of randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. For this work, SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a Quadrupole Mass Spectrometer (QMS). The surface chemistries of the etched samples were measured with an X-ray photoelectron spectrometer. To measure plasma parameters, a self-cleaned RF Langmuir probe was used to cope with polymer deposition on the probe tip, and the measurements were double-checked with the cutoff probe, which is known to be a precise diagnostic for electron density. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. Building on this study of plasma-surface reactions, we developed a 3D topography simulator using a multi-layer level-set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was considered by deterministic and Monte Carlo methods, respectively. In ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required to treat these ballistic transports realistically. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time was improved by more than a few hundred times compared with the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
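
To give a feel for the ballistic-transport step that dominates the computation and motivates the GPU parallelization, the sketch below runs a toy Monte Carlo estimate of the line-of-sight flux reaching the bottom of a high-aspect-ratio hole for near-normal ions versus cosine-distributed neutrals; the distributions and geometry are assumptions, not the simulator's actual model.

```python
# Toy Monte Carlo estimate of the direct (line-of-sight) flux reaching the
# bottom of a high-aspect-ratio hole; distributions and geometry are assumed.
import numpy as np

rng = np.random.default_rng(0)
aspect_ratio = 20.0                    # hole depth / diameter
n_particles = 200_000

# Ions: near-normal incidence (narrow Gaussian in polar angle).
ion_theta = np.abs(rng.normal(0.0, np.radians(3.0), n_particles))
# Neutrals: diffuse, cosine-distributed incidence (theta = arcsin(sqrt(u))).
neutral_theta = np.arcsin(np.sqrt(rng.random(n_particles)))

def direct_fraction(theta, ar):
    # A particle entering at the hole mouth center reaches the bottom without
    # hitting the sidewall if tan(theta) < radius / depth = 1 / (2 * ar).
    return np.mean(np.tan(theta) < 1.0 / (2.0 * ar))

print("ion fraction reaching bottom    :", direct_fraction(ion_theta, aspect_ratio))
print("neutral fraction reaching bottom:", direct_fraction(neutral_theta, aspect_ratio))
```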
