• Title/Summary/Keyword: Computation


A study on the design of an efficient hardware and software mixed-mode image processing system for detecting patient movement (환자움직임 감지를 위한 효율적인 하드웨어 및 소프트웨어 혼성 모드 영상처리시스템설계에 관한 연구)

  • Seungmin Jung;Euisung Jung;Myeonghwan Kim
    • Journal of Internet Computing and Services / v.25 no.1 / pp.29-37 / 2024
  • In this paper, we propose an efficient image processing system to detect and track the movement of specific objects such as patients. The proposed system extracts the outline area of an object from a binarized difference image by applying a thinning algorithm that enables more precise detection than previous algorithms and is well suited to mixed-mode design. The binarization and thinning steps, which require a large amount of computation, are designed at the RTL (Register Transfer Level) and replaced with optimized hardware blocks through logic circuit synthesis. The designed binarization and thinning blocks were synthesized into a logic circuit using a standard 180 nm CMOS library, and their operation was verified through simulation. For comparison with software-based performance, the binarization and thinning operations were also profiled on sample images with 640 × 360 resolution in a 32-bit FPGA embedded system environment. The verification confirmed that the mixed-mode design improves the processing speed of the binarization and thinning stages by 93.8% compared with the previous software-only implementation. The proposed mixed-mode system for object recognition is expected to monitor patient movements efficiently even in edge computing environments where artificial intelligence networks are not deployed.
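
As a rough software reference for the steps the paper moves into hardware, the sketch below chains frame differencing, binarization, and thinning in Python. The threshold value and the use of scikit-image's thinning routine are assumptions for illustration; the paper's RTL blocks and its specific thinning algorithm are not reproduced here.

```python
# Minimal software baseline for the binarization + thinning front end
# (illustrative only; threshold and thinning routine are assumptions).
import numpy as np
from skimage.morphology import thin  # stand-in for the paper's thinning algorithm

def extract_motion_outline(prev_frame: np.ndarray, curr_frame: np.ndarray,
                           threshold: int = 30) -> np.ndarray:
    """Binarize the difference of two grayscale frames and thin the result."""
    # Absolute difference between consecutive frames (e.g., 640 x 360 uint8 images)
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    # Binarization with a hypothetical fixed threshold
    binary = diff > threshold
    # Thinning reduces the moving region to a one-pixel-wide outline
    return thin(binary)
```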

Usability Evaluation Criteria Development and Application for Map-Based Data Visualization (지도 기반 데이터 시각화 플랫폼 사용성 평가 기준 개발 및 적용 연구)

  • Sungha Moon;Hyunsoo Yoon;Seungwon Yang;Sanghee Oh
    • Journal of the Korean Society for Library and Information Science / v.58 no.2 / pp.225-249 / 2024
  • The purpose of this study is to develop an evaluation tool for map-based data visualization platforms and to conduct heuristic usability evaluations on existing platforms representing inter-regional information. We compared and analyzed the usability evaluation criteria of map-based platforms from previous studies along with Nielsen's (1994) 10 usability evaluation principles. We proposed nine evaluation criteria: (1) visibility, (2) representation of the real world, (3) consistency and standards, (4) user control and friendliness, (5) flexibility, (6) design, (7) compatibility, (8) error prevention and handling, and (9) help provision and documentation. Additionally, to confirm the effectiveness of the proposed criteria, four experts were invited to evaluate five domestic and international map-based data visualization platforms. The experts were able to rank the usability of the five platforms using the proposed criteria, producing both quantified scores and subjective opinions. The results of this study are expected to serve as foundational material for the future development and evaluation of map-based visualization platforms.

Literature Review of AI Hallucination Research Since the Advent of ChatGPT: Focusing on Papers from arXiv (챗GPT 등장 이후 인공지능 환각 연구의 문헌 검토: 아카이브(arXiv)의 논문을 중심으로)

  • Park, Dae-Min;Lee, Han-Jong
    • Informatization Policy / v.31 no.2 / pp.3-38 / 2024
  • Hallucination is a significant barrier to the utilization of large-scale language models and multimodal models. In this study, we collected 654 computer science papers with "hallucination" in the abstract from arXiv between December 2022 and January 2024, following the advent of ChatGPT, and conducted frequency analysis, knowledge network analysis, and a literature review to explore the latest trends in hallucination research. The results showed that research was active in the fields of "Computation and Language," "Artificial Intelligence," "Computer Vision and Pattern Recognition," and "Machine Learning." We then analyzed the research trends in the four major fields by focusing on the main authors and dividing the work into data, hallucination detection, and hallucination mitigation. The main research trends included hallucination mitigation through supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), inference enhancement via "chain of thought" (CoT), and growing interest in hallucination mitigation within the domain of multimodal AI. This study provides insights into the latest developments in hallucination research through a technology-oriented literature review and is expected to help subsequent research in both engineering and the humanities and social sciences by clarifying the latest trends in hallucination research.

Theoretical analysis of erosion degradation and safety assessment of submarine shield tunnel segment based on ion erosion

  • Xiaohan Zhou;Yangyang Yang;Zhongping Yang;Sijin Liu;Hao Wang;Weifeng Zhou
    • Geomechanics and Engineering / v.37 no.6 / pp.599-614 / 2024
  • To evaluate the safety status of deteriorated segments in a submarine shield tunnel during its service life, a seepage model was established based on a cross-sea shield tunnel project. This model was used to study the migration patterns of erosive ions within the shield segments, from which the degree of deterioration of the segments was determined. Using the derived analytical solution, the internal forces within the segments were calculated. Lastly, by applying the formula for calculating safety factors, the variation trends in the safety factors of segments with different degrees of deterioration were obtained. The findings demonstrate that corrosive seawater seeps continuously from the outside of the tunnel to the inside. When there is leakage at a joint, the nearby seepage field becomes locally concentrated, and its depth and scope increase significantly. The chloride ion content decreases gradually with increasing distance from the outer surface of the tunnel, and the penetration of erosive ions into the segment is facilitated by the presence of water pressure. The ion content over the ring of segment lining follows the order vault < haunch < springing. The difference in the segments' rate of increase in chloride ion content decreases as service time increases. According to the analytical solution, the segment's safety factor drops more when the joint leaks than when it is intact, and the change rate between the two states exhibits a general downward trend. The safety factor shows a similar pattern at different water depths and continuously decreases at the same segment position as the water depth increases. Its change curve follows a "spoon-shaped" rule with three phases of "sudden drop-rise-stability." The analytical solution resolves the poor applicability of indicators used in earlier studies: it only requires determining the loss of effective bearing thickness of the segment lining to calculate the safety factor of any cross-section of the shield tunnel. The results of the analytical solution are, however, conservative and carry some safety margin. The derivation of the evaluation model indicates that the safety status of a secondary lining made of molded concrete can also be assessed with the analytical solution. The method is important for the safe operation of the tunnel and the protection of people and property, and has a wide range of applications.
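
The abstract does not state the diffusion law used, so purely as an illustration of how chloride content can fall off with distance from the outer surface, the sketch below uses the textbook error-function solution of Fick's second law with hypothetical parameter values; the paper's seepage-coupled model is more elaborate and is not reproduced.

```python
# Illustrative only: error-function solution of Fick's second law for chloride
# ingress; the diffusion coefficient and surface content below are assumptions.
from math import erf, sqrt

def chloride_content(depth_m: float, time_s: float,
                     c_surface: float, d_eff: float) -> float:
    """Chloride content at a given depth (m) after a given time (s)."""
    return c_surface * (1.0 - erf(depth_m / (2.0 * sqrt(d_eff * time_s))))

# Example: cover depths after 50 years with an assumed D_eff of 5e-12 m^2/s
t = 50 * 365.25 * 86400
for depth in (0.01, 0.03, 0.05):            # content decreases with depth
    print(depth, round(chloride_content(depth, t, c_surface=0.5, d_eff=5e-12), 3))
```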

The Comparison of Basic Science Research Capacity of OECD Countries

  • Lim, Yang-Taek;Song, Choong-Han
    • Journal of Technology Innovation / v.11 no.1 / pp.147-176 / 2003
  • This paper presents a new measurement technique to derive the level of the BSRC (Basic Science and Research Capacity) index by means of factor analysis, extended with the assumption that the selected explanatory variables follow the standard normal probability distribution. The new measurement method is used to forecast the gap between Korea's BSRC level and those of major OECD countries in terms of time lag, and to compare them internationally over the period 1981∼1999, based on the assumption that each country's BSRC progress function takes the form of a logistic curve. The US BSRC index is estimated to be 0.9878 in 1981, 0.9996 in 1990 and 0.99991 in 1999, taking 1st place. The US BSRC level has consistently been the highest among the 16 selected OECD countries, followed by Japan, Germany, France and the United Kingdom, in that order. Korea's BSRC index is estimated to be 0.2293 in 1981, the lowest among the 16 OECD countries. However, Korea's BSRC indices are estimated to have increased to 0.3216 (in 1990) and 0.44652 (in 1999), respectively, taking 10th place. Meanwhile, Korea's BSRC level in 1999 (0.44652) is estimated to reach those of the US and Japan in 2233 and 2101, respectively, meaning that Korea falls 234 years behind the US and 102 years behind Japan. Korea is also estimated to lag 34 years behind Germany, 16 years behind France and the UK, 15 years behind Sweden, 11 years behind Canada, 7 years behind Finland, and 5 years behind the Netherlands. For the period 1981∼1999, the BSRC development speed of the US is estimated to be 0.29700, the highest among the selected OECD countries, followed by Japan (0.12800), Korea (0.04443), and Germany (0.04029). The US BSRC development speed (0.2970) is estimated to be 2.3 times that of Japan (0.1280) and 6.7 times that of Korea. Germany's BSRC development speed (0.04029) is estimated to be the fastest in Europe, but 7.4 times slower than that of the US. The estimated BSRC development speeds of Belgium, Finland, Italy, Denmark and the UK stand between 0.01 and 0.02, which is very slow. In particular, the BSRC development speed of Spain is estimated to be minus 0.0065, its BSRC staying at almost the same level over 1981∼1999. Since Korea's BSRC development speed is much slower than those of the US and Japan but relatively faster than those of other countries, the gaps in BSRC level between Korea and the other countries may narrow considerably, or Korea may even surpass several countries, as time goes by. Korea's BSRC level had been in 10th place until 1993; it is estimated to reach 6th place in 2010 by catching up with the UK, Sweden, Finland and the Netherlands, and 4th place in 2020 by catching up with France and Canada. The empirical results are consistent with OECD (2001a)'s computation that Korea had the highest R&D expenditure growth among all OECD countries during 1991∼1999, and that the value added of ICT industries as a share of total business-sector value added is 12% in Korea but only 8% in Japan. OECD (2001b) also observed that Korea, together with the US, Sweden, and Finland, is already one of the four most knowledge-based countries, where the rank of a knowledge-based country was measured by investment in knowledge, defined as public and private spending on higher education, expenditures on R&D, and investment in software.
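
To make the catch-up arithmetic concrete, the sketch below shows how a time lag can be read off a logistic BSRC progress curve. The rate constant and anchoring are hypothetical choices that roughly mimic Korea's reported figures and will not reproduce the paper's exact estimates.

```python
# Sketch only: reading a catch-up year off a logistic BSRC progress curve
# B(t) = 1 / (1 + exp(-k * (t - t0))).  k and the anchoring below are
# hypothetical, not the paper's fitted parameters.
from math import exp, log

def bsrc(t: float, k: float, t0: float) -> float:
    """Logistic BSRC index at year t."""
    return 1.0 / (1.0 + exp(-k * (t - t0)))

def year_reaching(level: float, k: float, t0: float) -> float:
    """Year at which the curve reaches a given index level (0 < level < 1)."""
    return t0 + log(level / (1.0 - level)) / k

k = 0.04443                                   # reported development speed for Korea
t0 = 1999 - log(0.44652 / (1 - 0.44652)) / k  # anchor so B(1999) = 0.44652
print(year_reaching(0.99991, k, t0))          # rough year to reach the US 1999 level
```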


Real-time Color Recognition Based on Graphic Hardware Acceleration (그래픽 하드웨어 가속을 이용한 실시간 색상 인식)

  • Kim, Ku-Jin;Yoon, Ji-Young;Choi, Yoo-Joo
    • Journal of KIISE:Computing Practices and Letters / v.14 no.1 / pp.1-12 / 2008
  • In this paper, we present a real-time algorithm for recognizing vehicle color from indoor and outdoor vehicle images based on GPU (Graphics Processing Unit) acceleration. In the preprocessing step, we construct feature vectors from sample vehicle images of different colors. We then combine the feature vectors for each color and store them as a reference texture to be used in the GPU. Given an input vehicle image, the CPU constructs its feature vector, and the GPU compares it with the sample feature vectors in the reference texture. The similarities between the input feature vector and the sample feature vectors for each color are measured, and the result is transferred back to the CPU to recognize the vehicle color. The output colors are categorized into seven colors: three achromatic colors (black, silver, and white) and four chromatic colors (red, yellow, blue, and green). We construct feature vectors using histograms of hue-saturation pairs and hue-intensity pairs, with a weight factor applied to the saturation values. Our algorithm achieves a successful color recognition rate of 94.67% by using a large number of sample images captured in various environments, by generating feature vectors that distinguish different colors, and by utilizing an appropriate likelihood function. We also accelerate color recognition by exploiting the parallel computation capability of the GPU. In the experiments, we constructed a reference texture from 7,168 sample images, with 1,024 images per color. The average time for generating a feature vector is 0.509 ms for a 150 × 113 resolution image. Once the feature vector is constructed, the execution time for GPU-based color recognition is 2.316 ms on average, which is 5.47 times faster than executing the algorithm on the CPU. Our experiments were limited to vehicle images, but the algorithm can be extended to images of general objects.
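
A CPU-only sketch of the feature-vector idea is shown below: a weighted hue-saturation histogram compared against a per-color reference by cosine similarity. The bin counts, saturation weighting, and similarity measure are assumptions; the paper's reference-texture layout on the GPU and its likelihood function are not reproduced.

```python
# Weighted hue-saturation histogram feature vector and a simple similarity
# score (CPU-only sketch; bins, weighting, and cosine similarity are assumed).
import numpy as np

def hs_feature(hsv: np.ndarray, h_bins: int = 18, s_bins: int = 8,
               s_weight: float = 2.0) -> np.ndarray:
    """Flattened, normalized hue-saturation histogram of an HSV image."""
    h = hsv[..., 0].ravel().astype(float)   # hue in [0, 180), OpenCV-style scaling
    s = hsv[..., 1].ravel().astype(float)   # saturation in [0, 256)
    hist, _, _ = np.histogram2d(h, s, bins=[h_bins, s_bins],
                                range=[[0, 180], [0, 256]],
                                weights=1.0 + s_weight * (s / 255.0))
    vec = hist.ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def color_score(query_vec: np.ndarray, reference_vec: np.ndarray) -> float:
    """Cosine similarity between an input feature vector and a color reference."""
    return float(np.dot(query_vec, reference_vec))
```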

Multi-day Trip Planning System with Collaborative Recommendation (협업적 추천 기반의 여행 계획 시스템)

  • Aprilia, Priska;Oh, Kyeong-Jin;Hong, Myung-Duk;Ga, Myeong-Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.159-185 / 2016
  • Planning a multi-day trip is a complex and time-consuming task. It usually starts with selecting a list of points of interest (POIs) worth visiting and then arranging them into an itinerary, taking into consideration various constraints and preferences. When choosing POIs to visit, one might ask friends for suggestions, search for information on the Web, or seek advice from travel agents; however, each of these options has its limitations. First, the knowledge of friends is limited to the places they have visited. Second, the tourism information on the internet is vast, and reading and filtering it takes a lot of time. Lastly, travel agents might be biased towards providers of certain travel products when suggesting itineraries. In recent years, many researchers have tried to deal with the huge amount of tourism information available on the internet, for example by exploring the wisdom of the crowd through the vast number of images shared on social media sites. Furthermore, trip planning problems are usually formulated as "Tourist Trip Design Problems" and solved using various search algorithms with heuristics, and many recommendation systems have been built to cope with the information overload. Prediction models of recommendation systems are typically built using a large dataset; however, such a dataset is not always available. For other models, especially those that require input from people, human computation has emerged as a powerful and inexpensive approach. This study proposes CYTRIP (Crowdsource Your TRIP), a multi-day trip itinerary planning system that draws on the collective intelligence of contributors in recommending POIs. In order to enable the crowd to collaboratively recommend POIs to users, CYTRIP provides a shared workspace in which the crowd can recommend POIs to any number of requesters and vote on POIs recommended by others when they find them interesting. In CYTRIP, anyone can contribute by recommending POIs to requesters based on the requesters' specified preferences. CYTRIP takes the recommended POIs as input and builds a multi-day trip itinerary taking into account the user's preferences, the various time constraints, and the locations. The input is turned into a multi-day trip planning problem formulated in Planning Domain Definition Language 3 (PDDL3), where a sequence of actions formulated in a domain file is used to achieve the goals of the planning problem, namely visiting the recommended POIs. The multi-day trip planning problem is highly constrained: it is sometimes infeasible to visit all the recommended POIs with the limited resources available, such as the time the user can spend. In order to cope with an unachievable goal that would leave the planner with no solution for the remaining goals, CYTRIP selects a set of feasible POIs prior to the planning process (see the sketch after this abstract). The planning problem is created for the selected POIs and fed into the planner. The solution returned by the planner is then parsed into a multi-day trip itinerary and displayed to the user on a map. The proposed system is implemented as a web-based application built using PHP on the CodeIgniter Web Framework. In order to evaluate the proposed system, an online experiment was conducted. The results show that, with the help of the contributors, CYTRIP can plan and generate a multi-day trip itinerary tailored to the users' preferences and bound by their constraints, such as location or time. The contributors also found CYTRIP a useful tool for collecting POIs from the crowd and planning a multi-day trip.
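
Purely as an illustration of the pre-planning step described above, the sketch below greedily selects a feasible subset of crowd-recommended POIs under a time budget and emits a minimal PDDL-style problem for them. The POI fields, scoring rule, and PDDL fragment are assumptions, not CYTRIP's actual domain or problem files.

```python
# Toy pre-planning step: greedy feasible-POI selection under a time budget,
# followed by a minimal PDDL-style problem (all names/fields are illustrative).
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    votes: int          # crowd votes from the shared workspace
    visit_hours: float  # estimated time needed to visit

def select_feasible(pois: list, hours_per_day: float, days: int) -> list:
    """Pick POIs by descending votes until the total time budget is used up."""
    budget, used, chosen = hours_per_day * days, 0.0, []
    for poi in sorted(pois, key=lambda p: p.votes, reverse=True):
        if used + poi.visit_hours <= budget:
            chosen.append(poi)
            used += poi.visit_hours
    return chosen

def to_pddl_problem(chosen: list) -> str:
    """Emit a minimal PDDL problem whose goals are the selected POIs."""
    objs = " ".join(p.name for p in chosen)
    goals = " ".join(f"(visited {p.name})" for p in chosen)
    return (f"(define (problem trip)\n  (:domain cytrip)\n"
            f"  (:objects {objs} - poi)\n  (:goal (and {goals})))")

pois = [POI("museum", 12, 2.5), POI("palace", 9, 3.0), POI("market", 4, 1.5)]
print(to_pddl_problem(select_feasible(pois, hours_per_day=8.0, days=2)))
```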

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference / 2012.08a / pp.80-81 / 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is achieving ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisting. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges, such as tight limits on sidewall bowing and control of randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies, including novel plasma chemistries and plasma sources, to overcome these process limitations. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for a silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistry of the etched samples was measured by X-ray photoelectron spectroscopy. To measure plasma parameters, a self-cleaning RF Langmuir probe was used to cope with polymer deposition on the probe tip, and the results were double-checked with the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we propose a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results show good agreement with the experimental data. Building on this plasma-surface reaction study, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that a huge computational burden is required for a realistic treatment of this ballistic transport. To address this issue, the relevant computational codes were parallelized for GPU (Graphic Processing Unit) computing, so that the total computation time was improved by more than a few hundred times compared with the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.
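
As a toy illustration of the Monte Carlo side of the ballistic-transport treatment, the sketch below launches ions into a rectangular contact-hole cross-section with a near-normal angular spread and records where each first strikes the sidewall or the bottom. The geometry, angular distribution, and perfect sticking are assumptions; the simulator's level-set coupling and GPU kernels are not shown.

```python
# Toy 2-D Monte Carlo of ballistic ion transport into a contact hole
# (assumed geometry, Gaussian angular spread, and perfect sticking).
import math
import random

def first_hit_depths(n_ions: int, width: float, depth: float,
                     sigma_deg: float = 3.0, seed: int = 0) -> list:
    rng = random.Random(seed)
    depths = []
    for _ in range(n_ions):
        x = rng.uniform(0.0, width)                      # entry point at the mask opening
        theta = rng.gauss(0.0, math.radians(sigma_deg))  # near-normal incidence angle
        dx_at_bottom = depth * math.tan(theta)           # lateral drift over the full depth
        if 0.0 <= x + dx_at_bottom <= width:
            depths.append(depth)                         # ion reaches the hole bottom
        else:                                            # otherwise it strikes a sidewall first
            wall = 0.0 if theta < 0.0 else width
            depths.append((wall - x) / math.tan(theta))
    return depths

hits = first_hit_depths(100_000, width=1.0, depth=50.0)  # aspect ratio 50 (assumed)
print("fraction reaching bottom:", sum(d >= 50.0 for d in hits) / len(hits))
```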


An Alternative Perspective to Resolve Modelling Uncertainty in Reliability Analysis for D/t Limitation Models of CFST (CFST의 D/t 제한모델들에 대한 신뢰성해석에서 모델링불확실성을 해결하는 선택적 방법)

  • Han, Taek Hee;Kim, Jung Joong
    • Journal of the Computational Structural Engineering Institute of Korea / v.28 no.4 / pp.409-415 / 2015
  • For the design of Concrete-Filled Steel Tube (CFST) columns, the ratio of the outside diameter D to the steel tube thickness t (D/t ratio) is limited to prevent local buckling of the steel tube. Each design code proposes its own model to compute the maximum D/t ratio using the yield strength of steel $f_y$, or $f_y$ together with the elastic modulus of steel E. Considering the uncertainty in $f_y$ and E, the reliability index $\beta$ for local buckling of a CFST section can be calculated by formulating a limit state function that includes the maximum D/t model. The resulting $\beta$ depends on which maximum D/t model is used for the reliability analysis. This variability is due to ambiguity in choosing a computational model and is called "modelling uncertainty." Such uncertainty can be regarded as the "non-specificity" type of epistemic uncertainty and modelled by constructing possibility distribution functions. In this study, three different computational models for the maximum D/t ratio are used to conduct reliability analyses for local buckling of a CFST section, and the reliability index $\beta$ is computed for each. The "non-specific" $\beta$ values are modelled by a possibility distribution function, from which a metric, the degree of confirmation, is measured. It is shown that the degree of confirmation increases as $\beta$ decreases. Conclusively, a new set of reliability indices associated with degrees of confirmation is determined, allowing the reliability index for local buckling of a CFST section to be decided at an acceptable confirmation level.
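
A minimal numerical sketch of the reliability-index computation is given below, assuming a hypothetical maximum-D/t model of the form c·E/$f_y$, normal distributions for $f_y$ and E, and an assumed section D/t; these stand in for the three code models compared in the paper and do not reproduce its results.

```python
# Monte Carlo / first-order sketch of the local-buckling reliability index
# for a CFST section (model form, distributions, and D/t value are assumed).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
fy = rng.normal(355.0, 0.07 * 355.0, n)   # steel yield strength [MPa], assumed COV 7%
E  = rng.normal(200e3, 0.03 * 200e3, n)   # elastic modulus [MPa], assumed COV 3%

dt_applied = 70.0                         # D/t of the section under study (assumed)
dt_limit = 0.15 * E / fy                  # hypothetical maximum-D/t model c*E/fy

g = dt_limit - dt_applied                 # limit state: g < 0 means local buckling
print("Pf   =", np.mean(g < 0.0))         # Monte Carlo failure probability
print("beta =", g.mean() / g.std())       # first-order (FOSM) reliability index
```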

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.3 / pp.153-163 / 2017
  • The objective of this study is to enhance the speed of the models estimating weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) that are applied in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on a high-performance multi-core CPU with 8 physical cores and 16 logical threads. Nonetheless, the server is not dedicated to handling a single county, which means very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. In order to reduce this overhead, several caching and parallelization techniques were used, their performance was measured, and their applicability was checked. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for computation, suggesting the need for techniques that reduce disk I/O bottlenecks; (2) when there are many I/O operations, it is advantageous to distribute them over several servers, but each server must have its own cache of input data so that the servers do not compete for the same resource; and (3) a GPU-based parallel processing method is most suitable for models with large computation loads, such as PRISM.
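
As a small sketch of findings (1) and (2), the code below caches daily grid reads in memory so repeated access does not hit the disk, and spreads the per-day Growing Degree Days computation across worker processes. The file layout, array shapes, and base temperature are hypothetical, not the system's actual implementation.

```python
# Caching + multiprocessing sketch for grid-based GDD accumulation
# (paths, array layout, and the 10 degC base temperature are assumptions).
from functools import lru_cache
from multiprocessing import Pool
import numpy as np

@lru_cache(maxsize=64)
def load_daily_grid(path: str) -> np.ndarray:
    """Cached read of one day's high-resolution grid (avoids repeated disk I/O)."""
    return np.load(path)

def daily_gdd(tmin_path: str, tmax_path: str, base_c: float = 10.0) -> np.ndarray:
    """Growing Degree Days for one day over the whole grid."""
    tmean = (load_daily_grid(tmin_path) + load_daily_grid(tmax_path)) / 2.0
    return np.maximum(tmean - base_c, 0.0)

def accumulate_gdd(day_paths: list) -> np.ndarray:
    """Sum daily GDD grids, computing each day in a separate worker process."""
    with Pool() as pool:
        grids = pool.starmap(daily_gdd, day_paths)
    return np.sum(grids, axis=0)

# Usage (hypothetical file names):
# season = accumulate_gdd([(f"tmin_{d:03d}.npy", f"tmax_{d:03d}.npy") for d in range(1, 121)])
```

Each worker process keeps its own in-memory cache, analogous to the per-server input cache in finding (2), so the workers do not contend for the same files.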