• Title/Summary/Keyword: Data-driven approach


On the Vorticity and Pressure Boundary Conditions for Viscous Incompressible Flows (비압축성 점성유동의 와도와 압력 경계조건)

  • Suh J.-C.
    • 한국전산유체공학회:학술대회논문집 / 1998.05a / pp.15-28 / 1998
  • As an alternative for solving the incompressible Navier-Stokes equations, we present a vorticity-based integro-differential formulation for the vorticity, velocity and pressure variables. One of the most difficult problems encountered in vorticity-based methods is the introduction of the proper value of the vorticity or vorticity flux at the solid surface. A practical computational technique for solving this problem is presented in connection with the coupling between the vorticity and the pressure boundary conditions. Numerical schemes based on an iterative procedure are employed to solve the governing equations with the boundary conditions for the three variables. A finite volume method is implemented to integrate the vorticity transport equation with the dynamic vorticity boundary condition. The velocity field is obtained by using the Biot-Savart integral derived from the mathematical vector identity. Green's scalar identity is used to solve for the total pressure in an integral approach similar to the surface panel methods that are well established for potential flow analysis. The results calculated with the present method for two test problems are compared with data from the literature for validation. The first test problem is the two-dimensional square cavity flow driven by shear on the top lid. Two cases are considered: (i) a flow driven both by a specified non-uniform shear on the top lid and by specified body forces acting throughout the cavity region, for which we find the exact solution, and (ii) the classical case driven only by uniform shear. Secondly, the present method is applied to the early development of the flow around an impulsively started circular cylinder.

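For reference, the Biot-Savart integral mentioned in the abstract above recovers the velocity field from the vorticity distribution. In two dimensions its textbook form is shown below; the boundary-integral contributions from the solid surface, which the paper couples to the pressure boundary condition, are omitted here for brevity.

```latex
\mathbf{u}(\mathbf{x}) \;=\; \mathbf{u}_{\infty}
\;+\; \frac{1}{2\pi} \int_{A}
\frac{\omega(\mathbf{x}')\,\hat{\mathbf{k}} \times (\mathbf{x} - \mathbf{x}')}
     {\lvert \mathbf{x} - \mathbf{x}' \rvert^{2}} \, dA'
```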

Target Birth Intensity Estimation Using Measurement-Driven PHD Filter

  • Zhang, Huanqing;Ge, Hongwei;Yang, Jinlong
    • ETRI Journal / v.38 no.5 / pp.1019-1029 / 2016
  • The probability hypothesis density (PHD) filter is an effective means of tracking multiple targets in that it avoids explicit data associations between measurements and targets. However, traditional target-tracking algorithms assume that the target birth intensity is known a priori; otherwise, the performance of a conventional PHD filter declines sharply. To address this problem, a novel target birth intensity scheme and an improved measurement-driven scheme are incorporated into the PHD filter. First, the target birth intensity estimation scheme, composed of a PHD pre-filter and a target velocity extent method, is introduced to recursively estimate the target birth intensity using the latest measurements at each time step. Second, based on the improved measurement-driven scheme, the measurement set at each time step is divided into a survival target measurement set, a birth target measurement set, and a clutter set, and the survival and birth target measurement sets are used to update the survival and birth targets, respectively. Lastly, a Gaussian mixture implementation of the PHD filter is presented under a linear Gaussian model assumption. The results of numerical experiments demonstrate that the proposed approach achieves better performance in tracking systems with an unknown newborn target intensity.
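As a rough sketch of the measurement-driven idea described in the abstract above, the code below partitions a measurement set into survival-target and birth-target subsets by gating against predicted target positions. The state layout, gating rule, and threshold are hypothetical simplifications for illustration; the paper's scheme additionally separates clutter and uses a PHD pre-filter with a velocity-extent method.

```python
import numpy as np

def partition_measurements(measurements, predicted_states, gate_radius):
    """Split measurements into survival-target and birth-target sets.

    measurements: list of 2-D position measurements (np.ndarray of shape (2,))
    predicted_states: list of predicted target states [x, y, vx, vy]
    gate_radius: distance threshold for associating a measurement with a
                 predicted (surviving) target
    """
    survival_set, birth_set = [], []
    for z in measurements:
        distances = [np.linalg.norm(z - x[:2]) for x in predicted_states]
        if distances and min(distances) <= gate_radius:
            survival_set.append(z)   # close to a predicted target -> survival update
        else:
            birth_set.append(z)      # far from all predictions -> candidate birth
    return survival_set, birth_set

# Example with two predicted targets and three measurements
preds = [np.array([0.0, 0.0, 1.0, 0.5]), np.array([10.0, 10.0, -0.5, 0.0])]
zs = [np.array([0.2, -0.1]), np.array([9.8, 10.3]), np.array([25.0, 4.0])]
survivors, births = partition_measurements(zs, preds, gate_radius=2.0)
print(len(survivors), "survival measurements,", len(births), "birth candidates")
```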

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea;CHEONG, Siew Ann
    • Asian Review of World Histories / v.4 no.1 / pp.3-34 / 2016
  • This introduction is both a statement of a research problem and an account of the first research results toward its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have so far prevented 'big' results from emerging. Firstly, historical data are seen by computer scientists as unstructured: historical records cannot easily be decomposed into unambiguous fields, as in population (birth and death) records and taxation data. Secondly, machine-learning tools developed for structured data cannot be applied as-is to historical research. We propose a complex-network, narrative-driven approach to mining historical databases. In such a time-integrated network obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study that we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods and key events, and hence key actors and key locations, through a time-resolved examination of the actions. This tool allows historians to deal with historical data issues (e.g., source provenance identification, event validation, and trade-conflict-diplomacy relationships). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history and to test these hypotheses against other actions or additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experiences (i.e., the heritage of the world), will become something that can be inherited by machine-learning algorithms and used in smart cities to highlight and explain present ties and to illustrate potential future scenarios and visions.
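To make the time-integrated network described above concrete, the toy sketch below aggregates action records (actors as nodes, actions as links) and counts actions per 50-year window to flag candidate key periods. The records, actor names, and window size are purely illustrative and are not drawn from the project's database or pipeline.

```python
from collections import Counter, defaultdict

# Illustrative records only: (year, actor_a, actor_b, action), with actions
# restricted to war / trade / treaty as in the Venice case study.
records = [
    (1204, "Venice", "Byzantium", "war"),
    (1265, "Venice", "Byzantium", "treaty"),
    (1381, "Venice", "Genoa", "treaty"),
    (1406, "Venice", "Padua", "war"),
]

# Time-integrated network: nodes are actors, links are the actions between them.
links = defaultdict(list)
for year, a, b, action in records:
    links[(a, b)].append((year, action))

# Candidate key periods: number of recorded actions per 50-year window.
activity = Counter((year // 50) * 50 for year, _, _, _ in records)
for window, count in sorted(activity.items()):
    print(f"{window}-{window + 49}: {count} recorded actions")
```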

A Data Mining Approach for Selecting Bitmap Join Indices

  • Bellatreche, Ladjel;Missaoui, Rokia;Necir, Hamid;Drias, Habiba
    • Journal of Computing Science and Engineering / v.1 no.2 / pp.177-194 / 2007
  • Index selection is one of the most important decisions to make in the physical design of relational data warehouses. Indices significantly reduce the cost of processing complex OLAP queries, but they incur storage costs and induce maintenance overhead. Two main types of indices are available: mono-attribute indices (e.g., B-tree, bitmap, hash, etc.) and multi-attribute indices (join indices, bitmap join indices). To optimize star join queries, which are characterized by joins between a large fact table and multiple dimension tables with selections on the dimension tables, bitmap join indices are well suited; they require less storage owing to their binary representation. However, selecting these indices is a difficult task because of the exponential number of candidate attributes to be indexed. Most approaches to index selection follow two main steps: (1) pruning the search space (i.e., reducing the number of candidate attributes) and (2) selecting indices using the pruned search space. In this paper, we first propose a data-mining-driven approach to prune the search space of the bitmap join index selection problem. Unlike existing techniques that use only the frequency of attributes in queries as a pruning metric, our technique also takes into account parameters such as the size of the dimension tables involved in the indexing process, the size of each dimension tuple, and the page size on disk. We then define a greedy algorithm to select bitmap join indices that minimize processing cost while satisfying the storage constraint. Finally, in order to evaluate the efficiency of our approach, we compare it with some existing techniques.
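To make the two-step structure above concrete, here is a minimal greedy-selection sketch that picks bitmap join index candidates under a storage budget. The benefit/cost ranking is a placeholder chosen for illustration; the paper's pruning metric and cost model (which weigh query frequency together with dimension-table size, tuple size, and disk page size) are not reproduced here.

```python
def greedy_index_selection(candidates, storage_budget):
    """Greedily pick index candidates until the storage budget is exhausted.

    candidates: list of dicts with keys 'attribute', 'query_frequency',
                'dim_table_rows', and 'storage_cost' (bytes).
    """
    # Rank candidates by an illustrative benefit-per-byte score.
    ranked = sorted(
        candidates,
        key=lambda c: c["query_frequency"] * c["dim_table_rows"] / c["storage_cost"],
        reverse=True,
    )
    selected, used = [], 0
    for c in ranked:
        if used + c["storage_cost"] <= storage_budget:
            selected.append(c["attribute"])
            used += c["storage_cost"]
    return selected, used

# Hypothetical candidates for a star schema
candidates = [
    {"attribute": "customer.city", "query_frequency": 40, "dim_table_rows": 50_000, "storage_cost": 4_000_000},
    {"attribute": "product.class", "query_frequency": 25, "dim_table_rows": 10_000, "storage_cost": 800_000},
    {"attribute": "time.month",    "query_frequency": 60, "dim_table_rows": 120,    "storage_cost": 50_000},
]
print(greedy_index_selection(candidates, storage_budget=5_000_000))
```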

A Macro Parametric Data Representation for CAD Model Exchange using XML (CAD 모델 교환을 위한 매크로 파라메트릭 정보의 XML 표현)

  • 양정삼;한순흥;김병철;박찬국
    • Transactions of the Korean Society of Mechanical Engineers A / v.27 no.12 / pp.2061-2071 / 2003
  • The macro-parametric approach, a method of CAD model exchange, has recently been proposed. CAD models can be exchanged in the form of a macro file, which is a sequence of modeling commands. As an event-driven command set, the standard macro file can transfer design intent such as parameters, features, and constraints. Moreover, it is well suited to the network environment because the standard macro commands are open and explicit, and the data size is small. This paper introduces the concept of the macro-parametric method and proposes a representation of it using XML technology. Representing the macro-parametric data in XML makes it possible to manage vast amounts of dynamic content, to support Web-enabled distributed applications, and to benefit from XML's inherent support for structure and validation.
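As an illustration of how a macro-parametric command sequence might be serialized to XML, the sketch below builds a tiny macro with Python's xml.etree.ElementTree. The element and attribute names (MacroFile, Command, Parameter, Constraint) are hypothetical and do not follow the standard macro schema proposed in the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical macro: sketch a rectangle, then extrude it.
macro = ET.Element("MacroFile", part="Bracket")

sketch = ET.SubElement(macro, "Command", name="CreateSketch", plane="XY")
ET.SubElement(sketch, "Parameter", name="width", value="40", unit="mm")
ET.SubElement(sketch, "Parameter", name="height", value="25", unit="mm")
ET.SubElement(sketch, "Constraint", type="horizontal", entities="L1")

extrude = ET.SubElement(macro, "Command", name="Extrude", sketch="Sketch1")
ET.SubElement(extrude, "Parameter", name="depth", value="10", unit="mm")

# Serialize the command sequence; parameters and constraints stay explicit,
# which is what makes design intent transferable between CAD systems.
print(ET.tostring(macro, encoding="unicode"))
```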

Structure and Physical Conditions in MHD Jets from Young Stars

  • SHANG HSIEN
    • Journal of The Korean Astronomical Society / v.34 no.4 / pp.297-299 / 2001
  • We have constructed the foundations of a series of theoretical diagnostic methods to probe the jet phenomenon in young stars as observed in various optical forbidden lines. We calculate and model in a self-consistent manner the physical and radiative processes that arise within an inner disk wind driven magnetocentrifugally from the circumstellar accretion disk of a young sun-like star. By comparison with real data taken at high angular resolution, our approach will provide the basis of systematic diagnostics for jets and their related young stellar objects, to test the emission mechanisms of such phenomena. This work can help bring first-principles theoretical predictions into confrontation with actual multi-wavelength observations, and will bridge the gap between many very sophisticated numerical simulations and observational data. The analysis methods discussed here are immediately applicable to new high-resolution data obtained with HST and Adaptive Optics.


Analysis on Differences in Muscle Activities Depending on Distance Changes and Success or Failure in Connection with Golf Approach Swings (골프 어프로치 스윙 시 거리변화와 성공·실패에 따른 EMG 차이 분석)

  • Lee, Kyung-Ill;You, Moon-Seok;Hong, Wan-Ki
    • Korean Journal of Applied Biomechanics / v.25 no.1 / pp.21-28 / 2015
  • Objectives: The purpose of this study was to compare differences in muscle activities according to distance changes and the success or failure of approach shots during a round of golf, in order to obtain basic data on golf swings. Methods: To achieve this goal, we asked eight professional golfers of the Korea Professional Golfers' Association (height: 1.76 ± 0.05 m, weight: 73.87 ± 9.21 kg, career duration: 12.87 ± 4.48 yr) to perform approach swings at distances of 30, 50, and 70 m. Results: No differences caused by the distance changes were observed in the muscle activity of the extensor carpi radialis; the wrist extensors appeared unaffected by the increase in approach distance. We also found that the power of the approach shots was driven by efficient movements rather than by the strength of the arms. We confirmed that as the approach distance increased, the golfers shifted their back-swing top and follow-through from the right to the left pelvic limb. To achieve successful approach swings despite distance changes, golfers should first increase the activity of the erector spinae to prepare for rotatory power in the P1 section, and should further increase the activity of the erector spinae on the left as the distance increases in the P2 and P3 sections. Conclusion: In light of the above, we infer that despite changes in approach distance during a round of golf, ideal swings can be realized through consistent activity of the wrist extensor muscles and improved performance of the pelvic limb muscles. Furthermore, this study suggests that golfers should improve the consistency of muscle activities in all other body parts to achieve the ideal swing.

Re-approach to the Concept of Data Literacy and Its Application to Library Information Services (데이터 리터러시 개념에 대한 재접근 및 도서관 정보서비스에의 적용)

  • Lee, Jeong-Mee
    • Journal of the Korean Society for Library and Information Science / v.53 no.1 / pp.159-179 / 2019
  • The purpose of this study is to re-approach the concept of data literacy and to describe, along with the redefined concept, how it differs from other literacies. It also examines why and how data literacy can be used in library and information services. The study shows that data literacy plays a central role in interaction with other literacy concepts and should be understood as a data-driven problem-solving ability that is essential for future human society. Based on this definition, we propose applying data literacy to library information services in terms of education services and research support services. In this study, data literacy is defined as the ability of users to utilize the data they need in a data-based society, and it is distinguished from other literacies to explain why it is the key data-use capability in modern society. The study concludes with a discussion of, and proposals for, the library information services that could be implemented.

Human Limbs Modeling from 3D Scan Data (3차원 스캔 데이터로부터의 인체 팔, 다리 형상 복원)

  • Hyeon, Dae-Eun;Yun, Seung-Hyeon;Kim, Myeong-Su
    • Journal of the Korea Computer Graphics Society / v.8 no.4 / pp.1-7 / 2002
  • This paper presents a new approach for modeling the shape of human limbs from 3D scan data. Based on the cylindrical structure of limbs, the overall shape is approximated with a set of ellipsoids through ellipsoid fitting and interpolation of the fitted ellipsoids. Then, the smooth domain surface representing the coarse shape is generated as the envelope surface of an ellipsoidal sweep, and the fine details are reconstructed by constructing a parametric displacement function on the domain surface. For fast calculation, the envelope surface is approximated with an ellipse sweep surface, and points on the reconstructed surface are mapped onto the corresponding ellipsoid. We demonstrate the effectiveness of our approach for skeleton-driven body deformation.

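As a minimal sketch of the ellipsoid-fitting step mentioned above, the code below performs a generic algebraic least-squares fit of a quadric surface to a point cloud. This is a textbook approach shown only for illustration; it is not the paper's specific fitting, interpolation, or sweep-surface construction.

```python
import numpy as np

def fit_quadric(points):
    """Least-squares fit of Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz = 1."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    design = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(design, np.ones(len(points)), rcond=None)
    return coeffs  # quadric coefficients A..I

# Example: noisy samples on an axis-aligned ellipsoid with semi-axes (3, 2, 1)
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 2.0 * np.pi, 500)
v = rng.uniform(0.0, np.pi, 500)
pts = np.column_stack([3 * np.cos(u) * np.sin(v),
                       2 * np.sin(u) * np.sin(v),
                       np.cos(v)])
pts += rng.normal(scale=0.02, size=pts.shape)
print(fit_quadric(pts))
```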

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems / v.29 no.1 / pp.181-193 / 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensuring data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing of one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image composed of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records from sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in the observed anomaly patterns is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled training data is available.
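A rough sketch of the 3-channel feature engineering described above is shown below: the time history, a spectrogram, and an empirical probability density of the signal are each rendered as a 2-D channel and stacked into an image. The window length, output size, and nearest-neighbour resizing are assumptions made for illustration and are not the paper's exact pre-processing.

```python
import numpy as np
from scipy import signal

def to_three_channel_image(acc, fs, out_size=(64, 64)):
    """Convert a 1-D acceleration record into an (H, W, 3) image whose channels
    are (i) the time history, (ii) a log spectrogram, and (iii) the empirical
    probability density of the signal amplitude."""
    H, W = out_size

    # Channel 1: time history resampled to W points, tiled to H rows.
    t_old = np.linspace(0.0, 1.0, len(acc))
    t_new = np.linspace(0.0, 1.0, W)
    ch_time = np.tile(np.interp(t_new, t_old, acc), (H, 1))

    # Channel 2: log spectrogram, nearest-neighbour resized to (H, W).
    _, _, Sxx = signal.spectrogram(acc, fs=fs, nperseg=min(256, len(acc)))
    Sxx = np.log1p(Sxx)
    rows = np.linspace(0, Sxx.shape[0] - 1, H).astype(int)
    cols = np.linspace(0, Sxx.shape[1] - 1, W).astype(int)
    ch_spec = Sxx[np.ix_(rows, cols)]

    # Channel 3: empirical probability density of the amplitude, tiled to H rows.
    pdf, _ = np.histogram(acc, bins=W, density=True)
    ch_pdf = np.tile(pdf, (H, 1))

    # Normalise each channel to [0, 1] and stack.
    channels = []
    for c in (ch_time, ch_spec, ch_pdf):
        span = c.max() - c.min()
        channels.append((c - c.min()) / span if span > 0 else np.zeros_like(c))
    return np.stack(channels, axis=-1)

# Example: 10 s of synthetic acceleration sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
acc = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
print(to_three_channel_image(acc, fs).shape)  # (64, 64, 3)
```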