• Title/Summary/Keyword: Generate Data


Generation of Dataset for Detection of Black Screen in Video Wall Controller (비디오 월 컨트롤러의 블랙 스크린 감지를 위한 데이터셋 생성)

  • Kim, Sung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.521-523
    • /
    • 2021
  • Data augmentation is a set of techniques for increasing the amount of data from a small amount of existing data. With the spread of the Internet, data can be obtained easily, yet there are still domains, such as medicine, where data are difficult to collect. The same is true for image data in which a black screen is displayed on a video wall controller: because a black screen rarely appears during operation, it is not easy to obtain images containing one. We propose a DCGAN-based architecture that generates a dataset from a small number of black screen images.

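A minimal DCGAN sketch for this kind of small-data image generation, written in PyTorch and sized for 64x64 RGB frames; the latent dimension, layer widths, and image size below are illustrative assumptions, not the paper's settings.

```python
# Minimal DCGAN generator/discriminator pair (PyTorch) for 64x64 RGB frames.
# All hyperparameters are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

LATENT_DIM = 100

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                      # 64x64
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),                                 # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),          # 16x16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),         # 8x8
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),         # 4x4
            nn.Conv2d(512, 1, 4, 1, 0), nn.Sigmoid(),                                           # 1x1 real/fake score
        )

    def forward(self, x):
        return self.net(x).view(-1)

# After adversarial training on the small set of black-screen frames,
# new synthetic frames are drawn from random latent vectors:
G = Generator()
fake_frames = G(torch.randn(8, LATENT_DIM, 1, 1))   # 8 synthetic 3x64x64 images
```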

Self-Photo Image Analysis and Reporting System Using ChatGPT4o (ChatGPT4o를 활용한 셀프포토 이미지 분석 및 리포팅 시스템)

  • Bong-Ki Son
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.5
    • /
    • pp.745-753
    • /
    • 2024
  • In this paper, we propose a system that extracts customer data from self-photos taken at a photo booth and automatically generates an operation report consisting of analysis results for each data item and marketing strategy suggestions. The customer data to be extracted were selected based on attributes that can be used to analyze event operation results or to plan next year's event and establish promotional strategies. ChatGPT4o is used for image analysis, customer data analysis, and the proposal of the next marketing strategy. When self-photos taken at a local festival were analyzed with the proposed system, customer data such as the number of people photographed, gender, age, relationship, and hairstyle were extracted with high accuracy. In addition, the proposed system was shown to automatically generate operational reports based on the customer data and marketing strategies extracted and analyzed by ChatGPT4o.
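
As a rough sketch of the extraction step described above, the following uses the OpenAI Python client to ask a GPT-4o-class vision model for the listed attributes of one photo and to return them as JSON; the prompt wording, JSON keys, and model name are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: ask a vision-capable model to describe one self-photo as JSON.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
import base64
import json
from openai import OpenAI

client = OpenAI()

def analyze_photo(path: str) -> dict:
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",                              # illustrative model name
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Analyze this photo-booth picture and return JSON with keys: "
                          "num_people, genders, age_groups, relationship, hairstyles.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return json.loads(response.choices[0].message.content)

# Per-photo records like this would then be aggregated into the operation report.
```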

Design of a Data Model for the Rainfall-Runoff Simulation Based on Spatial Database (공간DB 기반의 강우-유출 모의를 위한 데이터 모델 설계)

  • Kim, Ki-Uk;Kim, Chang-Soo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.13 no.4
    • /
    • pp.1-11
    • /
    • 2010
  • This study proposes a method for generating SWMM input data linked to a spatial database and designs a data model for displaying flooding information such as the runoff sewer system, flooded areas, and flood depth. A variety of data, including UIS data, disaster-related documents, and rainfall records, are used to generate the attributes of the flooding analysis areas. The spatial data are built with ArcSDE and an Oracle database. A prototype system that displays the runoff areas on a GIS was also developed using ArcGIS ArcObjects and the spatial DB. The results will be applied to SWMM-based flooding analysis.
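
A small illustration of the data flow the abstract describes: subcatchment attributes are read from a relational table and written out as a SWMM [SUBCATCHMENTS] input section. SQLite stands in for the ArcSDE/Oracle spatial database, the column set is simplified, and all table, column, and object names are hypothetical.

```python
# Toy stand-in for "spatial DB -> SWMM input" generation; columns are simplified.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE subcatchment
               (name TEXT, rain_gage TEXT, outlet TEXT,
                area_ha REAL, pct_imperv REAL, width_m REAL, slope_pct REAL)""")
con.executemany("INSERT INTO subcatchment VALUES (?,?,?,?,?,?,?)",
                [("S1", "RG1", "J1", 4.2, 62.0, 350.0, 0.8),
                 ("S2", "RG1", "J2", 2.7, 45.0, 210.0, 1.2)])

lines = ["[SUBCATCHMENTS]",
         ";;Name  RainGage  Outlet  Area  %Imperv  Width  Slope"]
for row in con.execute("SELECT * FROM subcatchment"):
    lines.append("{:<6} {:<9} {:<7} {:<5} {:<8} {:<6} {:<5}".format(*row))

with open("runoff_model.inp", "w") as f:   # partial .inp file, for illustration only
    f.write("\n".join(lines) + "\n")
```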

From proteomics toward systems biology: integration of different types of proteomics data into network models

  • Rho, Sang-Chul;You, Sung-Yong;Kim, Yong-Soo;Hwang, Dae-Hee
    • BMB Reports
    • /
    • v.41 no.3
    • /
    • pp.184-193
    • /
    • 2008
  • Living organisms are composed of various systems at different levels, i.e., organs, tissues, and cells. Each system carries out its diverse functions in response to environmental and genetic perturbations, by utilizing biological networks, in which nodal components, such as DNA, mRNAs, proteins, and metabolites, closely interact with each other. Systems biology investigates such systems by producing comprehensive global data that represent different levels of biological information, i.e., at the DNA, mRNA, protein, or metabolite levels, and by integrating these data into network models that generate coherent hypotheses for given biological situations. This review presents a systems biology framework, called the 'Integrative Proteomics Data Analysis Pipeline' (IPDAP), which generates mechanistic hypotheses from network models reconstructed by integrating diverse types of proteomic data generated by mass spectrometry-based proteomic analyses. The devised framework includes a serial set of computational and network analysis tools. Here, we demonstrate its functionalities by applying these tools to several conceptual examples.
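
The review describes a pipeline rather than a single algorithm, but its core integration step, overlaying quantitative proteomic measurements on an interaction network, can be illustrated in a few lines with networkx; the proteins, interactions, and ratio cutoff below are invented for the example and are not part of IPDAP.

```python
# Toy overlay of protein abundance ratios on a protein-protein interaction network.
import networkx as nx

interactions = [("P53", "MDM2"), ("MDM2", "UBC"), ("P53", "BAX")]      # hypothetical edges
log2_ratio = {"P53": 1.8, "MDM2": -0.4, "UBC": 0.1, "BAX": 2.3}        # hypothetical ratios

G = nx.Graph()
G.add_edges_from(interactions)
nx.set_node_attributes(G, log2_ratio, "log2_ratio")

# flag differentially abundant proteins and extract the subnetwork they induce
changed = [n for n, r in log2_ratio.items() if abs(r) >= 1.0]
subnet = G.subgraph(changed)
print(sorted(subnet.nodes()), list(subnet.edges()))
```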

Data Structure of a Program Generation and Managing Track Data for Smart Train Route Control (Smart열차진로제어를 위한 선로데이터 생성관리프로그램의 데이터 구조)

  • Yoon, Yong-Ki;Hwang, Jong-Gyu;Jo, Hyun-Jeong;Lee, Jae-Ho
    • Proceedings of the KIEE Conference
    • /
    • 2007.04c
    • /
    • pp.234-236
    • /
    • 2007
  • Although the existing train route control method based on track circuits supports a sufficient number of operations, it still has problems such as discordance between the train numbers planned for operation and the train numbers actually running on the track, and the restriction that only one train may enter a given route. To solve these problems, we are studying and developing a Smart train route control system that uses real-time train position information. This system improves the utilization of a given route control section and the safety level of route control, but it requires track data of sufficient reliability. In addition, accidents caused by erroneous train position information must be prevented by reflecting changes to the tracks, for example maintenance, improvement, and expansion. In this paper, we describe the data structure of a developed program that converts CAD files into wiring diagrams and generates track data from them, and we show that a simulator using this data structure controls train speed and routes without problems.

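A minimal sketch of the kind of track data structure such a program might emit: track sections form a connectivity graph that a route-control simulator can validate routes against. The fields and example values are assumptions, not the paper's actual structure.

```python
# Hypothetical track-topology data structure for route validation.
from dataclasses import dataclass, field

@dataclass
class TrackSection:
    section_id: str
    length_m: float
    max_speed_kmh: float
    next_sections: list[str] = field(default_factory=list)   # downstream connectivity

@dataclass
class TrackLayout:
    sections: dict[str, TrackSection] = field(default_factory=dict)

    def add(self, s: TrackSection) -> None:
        self.sections[s.section_id] = s

    def route_is_valid(self, ids: list[str]) -> bool:
        """Check that consecutive sections in a requested route are physically connected."""
        return all(b in self.sections[a].next_sections for a, b in zip(ids, ids[1:]))

layout = TrackLayout()
layout.add(TrackSection("T1", 250.0, 80.0, ["T2"]))
layout.add(TrackSection("T2", 400.0, 120.0, ["T3"]))
layout.add(TrackSection("T3", 300.0, 120.0, []))
print(layout.route_is_valid(["T1", "T2", "T3"]))   # True
```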

Performance Improvement of Deep Clustering Networks for Multi Dimensional Data (다차원 데이터에 대한 심층 군집 네트워크의 성능향상 방법)

  • Lee, Hyunjin
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.8
    • /
    • pp.952-959
    • /
    • 2018
  • Clustering is one of the most fundamental algorithms in machine learning. Its performance is affected by the distribution of the data and degrades as the data grow in volume or dimensionality. For this reason, we use a stacked autoencoder, one of the deep learning algorithms, to reduce the dimensionality of the data and generate a feature vector that best represents the input. We use k-means, a well-known algorithm, for clustering. Since the dimension-reduced feature vectors are still multidimensional, we use cosine similarity as well as Euclidean distance when computing the similarity between a cluster center and a data vector, which improves performance. The deep clustering network, which combines the stacked autoencoder and k-means, re-trains the network whenever the k-means result changes. When re-training, the loss function of the stacked autoencoder and the loss function of k-means are combined to improve the performance and stability of the network. Experiments on benchmark image and document datasets empirically validate the power of the proposed algorithm.
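
A condensed sketch of the training loop described above: a stacked autoencoder whose latent codes are clustered by k-means, followed by re-training with a combined reconstruction-plus-clustering loss. The layer sizes, number of clusters, and the 0.1 weighting are illustrative assumptions.

```python
# Autoencoder + k-means deep clustering loop (PyTorch / scikit-learn sketch).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# toy stand-in for image/document feature vectors: 1000 samples, 64 dimensions
X = torch.randn(1000, 64)

encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
decoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 64))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(5):
    # 1) cluster the current latent codes with k-means
    with torch.no_grad():
        z = encoder(X)
    km = KMeans(n_clusters=4, n_init=10).fit(z.numpy())
    centers = torch.tensor(km.cluster_centers_, dtype=torch.float32)
    labels = torch.tensor(km.labels_, dtype=torch.long)

    # 2) re-train the autoencoder with a combined loss: reconstruction error
    #    plus the distance of each latent code to its assigned cluster center
    z = encoder(X)
    recon = decoder(z)
    loss = nn.functional.mse_loss(recon, X) \
        + 0.1 * ((z - centers[labels]) ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```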

A Decision Support System for Product Design Common Attribute Selection under the Semantic Web and SWCL (시맨틱 웹과 SWCL하의 제품설계 최적 공통속성 선택을 위한 의사결정 지원 시스템)

  • Kim, Hak-Jin;Youn, Sohyun
    • Journal of Information Technology Services
    • /
    • v.13 no.2
    • /
    • pp.133-149
    • /
    • 2014
  • To survive the competition in the globalized market, firms must provide products that meet customers' needs and wants. This paper focuses on how to set the levels of the attributes that compose a product so that firms can offer the best products to customers. In particular, its main issues are how to determine the common attributes and the remaining attributes, with their appropriate levels, to maximize a firm's profit, and how to construct a decision support system that eases decision makers' choices about optimal common attribute selection using the Semantic Web and SWCL technologies. Problem parameters and the relationships among them are expressed as an ontology data model and a set of constraints using the Semantic Web and SWCL. These generate a quantitative decision-making model through the automatic process in the proposed system, which is fed into a solver based on the Logic-based Benders Decomposition method to obtain an optimal solution. The system finally presents the generated solution to the decision makers. This work suggests the opportunity to integrate the proposed system with broader structured data networks and other decision-making tools, thanks to the easy data sharing, standardized data structure, and ease of machine processing of Semantic Web technology.
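
A toy illustration of the ontology side of such a system: product attribute levels and their parameters encoded as RDF triples with rdflib and pulled back out with SPARQL, the step that would feed the quantitative model. The namespace, properties, and costs are invented, and the SWCL constraint layer and the Benders decomposition solver are not shown.

```python
# Hypothetical product-design ontology fragment queried for model parameters.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/product#")   # illustrative namespace
g = Graph()

for level, cost in [("Level15inch", 120.0), ("Level17inch", 150.0)]:
    g.add((EX[level], RDF.type, EX.Level))
    g.add((EX[level], EX.ofAttribute, EX.ScreenSize))
    g.add((EX[level], EX.unitCost, Literal(cost)))

query = """
SELECT ?level ?cost WHERE {
    ?level ex:ofAttribute ex:ScreenSize ;
           ex:unitCost ?cost .
}"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.level, row.cost.toPython())   # parameters handed to the optimization model
```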

Improving data reliability on oligonucleotide microarray

  • Yoon, Yeo-In;Lee, Young-Hak;Park, Jin-Hyun
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2004.11a
    • /
    • pp.107-116
    • /
    • 2004
  • The advent of microarray technologies provides an opportunity to monitor the expression of tens of thousands of genes simultaneously. Such microarray data can be deteriorated by experimental errors and image artifacts, which generate non-negligible outliers estimated at about 15% of typical microarray data. Detecting and correcting these faulty probes prior to high-level data analysis such as classification or clustering is therefore an important issue. In this paper, we propose a systematic procedure for detecting faulty probes and properly correcting them in GeneChip arrays based on multivariate statistical approaches. Principal component analysis (PCA), one of the most widely used multivariate statistical approaches, is applied to construct a statistical correlation model with 20 pairs of probes for each gene. The faulty probes are identified by inspecting the squared prediction error (SPE) of each probe with respect to the PCA model, and the outlying probes are then reconstructed by an iterative optimization that minimizes the SPE. We used public data from the GeneChip project on human fibroblast cells. In this application study, the proposed approach showed good performance in correcting probes without removing them, which is desirable from the viewpoint of making maximum use of the data.

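The detection idea, fitting a PCA model and flagging observations whose squared prediction error exceeds a control limit, can be sketched with scikit-learn as follows; the data are synthetic, and the 3-sigma limit and single-shot reconstruction are simplifications of the paper's control limit and iterative SPE-minimizing correction.

```python
# Synthetic sketch of PCA-based outlier detection via squared prediction error (SPE).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))          # 200 genes x 20 probe intensities (toy data)
X[10, 5] += 8.0                         # inject one faulty probe value

pca = PCA(n_components=3).fit(X)
X_hat = pca.inverse_transform(pca.transform(X))   # projection onto the PCA model
spe = ((X - X_hat) ** 2).sum(axis=1)              # SPE of each row from the model

limit = spe.mean() + 3 * spe.std()                # simplified control limit
faulty = np.where(spe > limit)[0]                 # row 10 should be among the flagged rows

# naive correction step: replace flagged rows with their model reconstruction
X_corrected = X.copy()
X_corrected[faulty] = X_hat[faulty]
```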

A Development of Data Structure and Mesh Generation Algorithm for Global Ship Analysis Modeling System (선박의 전선해석 모델링 시스템을 위한 자료구조와 요소생성 알고리즘 개발)

  • Kim I.I.;Choi J.H.;Jo H.J.;Suh H.W.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.10 no.1
    • /
    • pp.61-69
    • /
    • 2005
  • In global ship structure and vibration analysis, the FE (finite element) analysis model is required in the early design stage, before the 3D CAD model is defined, and generating the analysis model is a time-consuming job that takes much more time than the engineering work itself. In particular, a ship structure has many associated structural members, such as stringers, stiffeners, and girders, which must be satisfied as constraints in analysis modeling, so it is necessary to support analysis model generation that satisfies these constraints automatically. To support global ship analysis modeling effectively, a method is developed that generates the analysis model from the initial design information available in the ship design process, that is, hull form offset data and compartment data. A flexible data structure is proposed to handle the initial design information and the FE model information easily, and an automatic quadrilateral mesh generation algorithm that uses the initial design information and satisfies the constraints imposed on the ship structure is also proposed. The proposed data structure and mesh generation algorithm are applied to various types of vessels for a usability test, through which the stability and usefulness of the system, including the mesh generation algorithm, are verified.
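
A stripped-down illustration of the two ingredients named in the abstract, a mesh data structure and a quadrilateral element generator, here for a plain rectangular panel; the real system must additionally align element edges with stringers, stiffeners, and girders, which this sketch does not attempt.

```python
# Minimal quad-mesh data structure and structured quadrilateral mesh generator.
from dataclasses import dataclass

@dataclass
class QuadMesh:
    nodes: list       # (x, y) coordinates
    elements: list    # 4-tuples of node indices, counter-clockwise

def structured_quad_mesh(width: float, height: float, nx: int, ny: int) -> QuadMesh:
    """Generate an nx-by-ny structured quadrilateral mesh over a rectangle."""
    nodes = [(i * width / nx, j * height / ny)
             for j in range(ny + 1) for i in range(nx + 1)]
    elements = []
    for j in range(ny):
        for i in range(nx):
            n0 = j * (nx + 1) + i
            elements.append((n0, n0 + 1, n0 + nx + 2, n0 + nx + 1))
    return QuadMesh(nodes, elements)

mesh = structured_quad_mesh(4.0, 2.0, 4, 2)
print(len(mesh.nodes), len(mesh.elements))   # 15 nodes, 8 elements
```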

Bio-Medical Data Transmission System using Multi-level Visible Light based on Resistor Ladder Circuit (저항 사다리 회로 기반의 다중레벨 가시광을 이용하는 의료 데이터 전송 시스템)

  • An, Jinyoung;Chung, Wan-Young
    • Journal of Sensor Science and Technology
    • /
    • v.25 no.2
    • /
    • pp.131-137
    • /
    • 2016
  • In this study, a multilevel visible light communication (VLC) system based on a resistor ladder circuit is designed to transmit medical data. VLC is being considered as an alternative wireless communication technology because of advantages such as ubiquity, license-free operation, low energy consumption, and the absence of radio frequency (RF) radiation. With VLC, vital bio-medical signals, including electrocardiography (ECG) and photoplethysmography (PPG) data, can be transmitted even in places where traditional RF communication (e.g., Wi-Fi) is forbidden; this potential advantage, combined with a fast emergency response time, could help save more lives anywhere. A multilevel transmission scheme is adopted to improve the data capacity while keeping the system simple: the data rate increases by a factor of log2(m), where m is the number of voltage levels, compared with conventional VLC transmission based on on/off keying. To generate the multiple amplitudes, a resistor ladder circuit, the basic principle of a digital-to-analog converter, is employed, and information is transferred through an LED (light-emitting diode) driven at different voltage levels. On the receiver side, the multilevel signal is detected by an optical receiver including a photodiode, and the collected data are analyzed to provide the necessary medical care to the patient concerned.
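
The rate gain and the level mapping can be made concrete with a short sketch: each symbol carries log2(m) bits and is emitted as one of m equally spaced LED drive voltages, as a resistor-ladder DAC would produce. The 3.3 V full-scale value and the 4-level example are assumptions, not the paper's circuit parameters.

```python
# Map a bit stream onto m equally spaced drive voltages (resistor-ladder DAC idea).
import math

def bits_to_levels(bits: str, m: int, v_max: float = 3.3) -> list[float]:
    """Each symbol carries log2(m) bits, i.e. log2(m) times the rate of on/off keying."""
    k = int(math.log2(m))
    assert 2 ** k == m and len(bits) % k == 0
    step = v_max / (m - 1)
    return [int(bits[i:i + k], 2) * step for i in range(0, len(bits), k)]

# 4-level (2 bits/symbol) example: '10 01 11 00' -> approximately [2.2, 1.1, 3.3, 0.0] V
print(bits_to_levels("10011100", 4))
```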