• Title/Summary/Keyword: my data


The Optimization of Human Sperm Decondensation Procedure for Fluorescence in Situ Hybridization (Fluorescence in Situ Hybridization 시행을 위한 인간정자 탈응축법의 적정화)

  • Pang, Myung-Geol
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.24 no.3
    • /
    • pp.369-375
    • /
    • 1997
  • Studies were conducted to determine the efficiency of sperm decondensation protocols. Sperm obtained from seven normal donors were washed immediately after liquefaction and then decondensed using the method of West et al. (1989) and my original protocol. My optimized protocol entailed mixing 1 ml aliquots of semen with 4 ml phosphate-buffered saline (PBS). Following centrifugation, pellets were resuspended in 1 ml PBS containing 6 mM EDTA. After centrifugation, pellets were resuspended in 1 ml PBS containing 2 mM dithiothreitol at 37°C for 45 min. Following mixing with 2 ml PBS and centrifugation, pellets were resuspended by vortexing. While vortexing, 5 ml of fixative were gently added. Slides were prepared using the smear method and stored at 4°C. When comparing these protocols, the degree of sperm decondensation and head swelling was monitored by measuring nuclear length, area, perimeter, and degree of roundness using FISH analysis software. Apparent copy number for chromosome 1 and, separately, for the sex chromosomes was determined by FISH using satellite DNA probes for loci D1Z1, DXZ1, and DYZ3. Sperm treated by my decondensation protocol showed significant increases (p<0.05) in length, area, perimeter, and degree of roundness. There was a significant decrease (p<0.05) in the frequency of nuclei displaying no signal but no change in the frequency of nuclei with two signals in samples decondensed by my protocol. My data suggest that decondensation using my original protocol may lower the frequency of cells with spurious "nullisomy" due to hybridization failure without inducing spurious "disomy" resulting from increased distances between split signals.


A Study on Effective Real Estate Big Data Management Method Using Graph Database Model (그래프 데이터베이스 모델을 이용한 효율적인 부동산 빅데이터 관리 방안에 관한 연구)

  • Ju-Young, KIM;Hyun-Jung, KIM;Ki-Yun, YU
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.163-180
    • /
    • 2022
  • Real estate data can be considered big data: its volume is growing rapidly, and it interacts with various fields such as the economy, law, and crowd psychology while being structured in complex data layers. Existing relational databases tend to have difficulty handling the many relationships involved in managing real estate big data, because they have a fixed schema and scale only vertically. To address these limitations, this study builds the real estate data in a graph database and verifies its usefulness. As the research method, we modeled various real estate data in MySQL, one of the most widely used relational databases, and in Neo4j, one of the most widely used graph databases. We then collected real estate questions used in real life, selected 9 of them, and compared query times on each database. As a result, Neo4j showed nearly constant performance even on queries that required multiple JOIN statements and inferences over various relationships, whereas MySQL's query time increased rapidly. This result shows that a graph database such as Neo4j is more efficient for real estate big data with many relationships. We expect the real estate graph database to be used in predicting real estate price factors and in answering real estate queries through AI speakers.
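
  The JOIN-per-hop pattern the abstract benchmarks can be sketched with a minimal, hypothetical schema (table and label names are illustrative, not taken from the paper). Here sqlite3 stands in for a relational store so the sketch runs self-contained; the equivalent Cypher traversal is shown in a comment.

```python
import sqlite3

# Hypothetical relational schema for real-estate relationships
# (table/column names are illustrative, not from the paper).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE region  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE complex (id INTEGER PRIMARY KEY, name TEXT, region_id INTEGER);
CREATE TABLE unit    (id INTEGER PRIMARY KEY, complex_id INTEGER, price INTEGER);
INSERT INTO region  VALUES (1, 'Gangnam');
INSERT INTO complex VALUES (1, 'A-Apartments', 1);
INSERT INTO unit    VALUES (1, 1, 90000), (2, 1, 85000);
""")

# A relationship question in SQL needs one JOIN per hop, so query cost
# grows with the number and size of the joined tables.
rows = conn.execute("""
    SELECT r.name, AVG(u.price)
    FROM unit u
    JOIN complex c ON u.complex_id = c.id
    JOIN region  r ON c.region_id  = r.id
    GROUP BY r.name
""").fetchall()
print(rows)  # [('Gangnam', 87500.0)]

# The same question in Neo4j's Cypher traverses stored edges directly,
# which is why the paper observes near-constant query times:
#   MATCH (u:Unit)-[:IN]->(:Complex)-[:IN]->(r:Region)
#   RETURN r.name, avg(u.price)
```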

A Personal Memex System Using Uniform Representation of the Data from Various Devices (다양한 기기로부터의 데이터 단일 표현을 통한 개인 미멕스 시스템)

  • Min, Young-Kun;Lee, Bog-Ju
    • The KIPS Transactions:PartB
    • /
    • v.16B no.4
    • /
    • pp.309-318
    • /
    • 2009
  • Systems that automatically record and retrieve one's everyday life have recently been studied relatively actively. These systems, called personal memex or life log, usually entail dedicated devices such as the SenseCam in the MyLifeBits project. This research instead focuses on the digital devices people use every day, such as mobile phones, credit cards, and digital cameras. The system enables a person to systematically store the everyday-life records saved in those devices or on device-related web pages (e.g., phone records at the mobile carrier) and to retrieve them quickly later. The data collection agent in the proposed system, called MyMemex, collects personal life-log "web data" using the web services the web sites provide and stores the web data on the server. "File data" stored in off-line digital devices are also loaded onto the server. Each item of file data or web data is viewed as a memex event that can be described in 4W1H form. The different types of data from different services are transformed into memex event data in 4W1H form, using a memex event ontology. Users can sign in to the service's web server to view their life logs chronologically, and can also search the life logs by keyword. Moreover, the life logs can be viewed in a diary or story style by converting the memex events into sentences. Related memex events are grouped and displayed as an "episode" by a heuristic identification method. An experiment on episode identification using the real life-log data of one of the authors yielded highly accurate results.
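
  The 4W1H normalization step described above can be sketched as follows; the field names and sample records are hypothetical, not the authors' actual ontology or schema.

```python
# Map heterogeneous device/web records onto one uniform 4W1H memex-event
# shape (who, when, where, what, how). Field names are illustrative.
def to_memex_event(source: str, record: dict) -> dict:
    if source == "phone":      # e.g. a call log fetched from a carrier web service
        return {"who": record["callee"], "when": record["time"],
                "where": None, "what": "phone call", "how": "mobile phone"}
    if source == "card":       # e.g. a credit-card statement entry
        return {"who": record["merchant"], "when": record["time"],
                "where": record["city"], "what": "payment", "how": "credit card"}
    raise ValueError(f"unknown source: {source}")

events = [
    to_memex_event("phone", {"callee": "Alice", "time": "2009-03-01 10:00"}),
    to_memex_event("card", {"merchant": "Cafe B", "time": "2009-03-01 12:30",
                            "city": "Seoul"}),
]
# Once uniform, a chronological life-log view is just a sort on 'when'.
events.sort(key=lambda e: e["when"])
print(events[0]["what"])  # phone call
```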

The Phenomenological Study on the Health and Life of Low-income Seniors who live in Poverty Area in Metropolitan City (달동네에 거주하는 저소득층 노인의 건강과 삶에 대한 현상학적 연구 - 광주광역시 발산마을 거주 노인을 중심으로)

  • Jang, Dongyeop;Shin, Heontae
    • Journal of Society of Preventive Korean Medicine
    • /
    • v.21 no.2
    • /
    • pp.79-94
    • /
    • 2017
  • Objectives : As of 2015, the elderly in South Korea were the poorest among OECD countries. The aim of this study was to explore the health and life of the low-income elderly living in vulnerable areas in a metropolitan city. Methods : Data were collected through in-depth individual interviews with 7 participants from October to November 2015 and analyzed through Colaizzi's phenomenological methodology. Each participant was interviewed for over 60 minutes. Results : 7 categories were identified from 17 subcategories: "My life history: sick body," "Living with a sick body," "My poor but precious life," "A sense of distance from the hospital," "Narrowed area of my life," "Thankful for help," and "The village where I have lived my destiny." Low-income seniors lack medical accessibility, mobility, and economic independence. In addition, full-fledged redevelopment comes to them as violence. Conclusions : The health and life of the low-income elderly in vulnerable areas are products of many social factors, reaffirming the importance of social health.

Performance study design of CRUD operation of MongoDB and MySQL in big data environment (빅데이터 환경에서 MongoDB와 MySQL의 CRUD 연산의 성능 연구 설계)

  • Seo, Jung-Yeon;Jeon, Eun-Kwang;Chae, Min-su;Lee, Hwa-Min
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.04a
    • /
    • pp.854-856
    • /
    • 2017
  • Recently, the development of mobile devices has diversified the types of data being generated and made their volume enormous. Such massive data is called big data, and it must be processed with methods different from conventional data-processing methods. We model data in MySQL, a representative relational database management system (RDBMS), and in MongoDB, a representative NoSQL database. Based on the modeled data, we design a performance evaluation of CRUD operations on the two database systems.
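
  A minimal sketch of the kind of CRUD benchmark the abstract outlines, assuming a generic timing harness. sqlite3 stands in for MySQL so the sketch runs self-contained; a MongoDB run would plug pymongo's `insert_one` / `find_one` / `update_one` / `delete_one` into the same wrapper.

```python
import sqlite3
import time

def timed(op):
    """Run a zero-argument CRUD operation, return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    result = op()
    return result, time.perf_counter() - t0

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT)")

# Create / Read / Update / Delete, each measured by the same harness so
# the two database systems can be compared on identical workloads.
_, t_c = timed(lambda: conn.execute("INSERT INTO doc VALUES (1, 'hello')"))
row, t_r = timed(lambda: conn.execute(
    "SELECT body FROM doc WHERE id = 1").fetchone())
_, t_u = timed(lambda: conn.execute(
    "UPDATE doc SET body = 'world' WHERE id = 1"))
_, t_d = timed(lambda: conn.execute("DELETE FROM doc WHERE id = 1"))

print(row[0])                                      # hello
print(all(t >= 0 for t in (t_c, t_r, t_u, t_d)))   # True
```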

Study on adoption of suitable encryption scheme according to data properties on MySQL Database (MySQL 데이터베이스에서 데이터 속성에 따른 적절한 암호화 기법의 적용에 관한 연구)

  • Shin, Young-Ho;Ryou, Jae-Cheol
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06d
    • /
    • pp.77-80
    • /
    • 2010
  • Recently, illegal access to and leakage of personal information and sensitive data have caused social problems, and along with the resulting economic ripple effects, interest in securing such data keeps growing. Laws also require that databases containing personal information such as resident registration numbers, account numbers, and passwords store the data in encrypted form. This paper examines the encryption schemes supported by the open-source database MySQL for storing and managing personal and sensitive data, so that applying an encryption scheme appropriate to each data property protects the data while limiting performance overhead such as speed and improving operational and administrative efficiency. We benchmark the performance of each encryption scheme and propose how to apply the optimal encryption method according to the properties of the data.
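
  The property-driven choice the paper studies can be sketched in outline: a one-way salted hash for verify-only fields like passwords, versus reversible encryption for fields that must be read back, like account numbers. The sketch below uses stdlib hashing only and is not the paper's benchmark code; the MySQL-side analogues are real built-in functions (`SHA2()`, `AES_ENCRYPT()`/`AES_DECRYPT()`) noted in comments.

```python
import hashlib
import hmac
import os

# Verify-only field (password): salted one-way hash, never decryptable.
# MySQL-side analogue:  SELECT SHA2(CONCAT(salt, pw), 256);
def hash_password(pw: str, salt: bytes) -> str:
    return hashlib.sha256(salt + pw.encode()).hexdigest()

salt = os.urandom(16)
stored = hash_password("secret", salt)

def verify(pw: str) -> bool:
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(stored, hash_password(pw, salt))

print(verify("secret"))   # True
print(verify("wrong"))    # False

# A read-back field (e.g. an account number) instead needs reversible
# encryption; in MySQL that role is played by AES_ENCRYPT()/AES_DECRYPT(),
# whose extra per-row cost is what such benchmarks weigh per data property.
```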


Sources of Inducing Shame versus Anger at In-group Failure and Consumption Type

  • CHOI, Nak-Hwan;SHI, Jingyi;WANG, Li
    • Journal of Distribution Science
    • /
    • v.18 no.2
    • /
    • pp.79-89
    • /
    • 2020
  • Purpose: This research aimed at exploring the antecedents of shame and anger when customers perceive the rightness of criticism of an in-group failure triggered by my mistake or by others' mistakes, and at identifying the effects of shame and anger on customers' consumption type. Research design, data and methodology: This research used a 2 (failure caused by my mistake versus failure caused by others' mistake) between-subjects design, collected 353 responses through an on-line survey, and used structural equation modeling in Amos 21.0 to verify the hypotheses developed from the past literature. Results: First, anger motivates customers to choose compensatory consumption behaviors, whereas shame leads them to choose adaptive consumption behaviors. Second, whether a customer feels shame or anger depends on the perceived rightness of the criticism induced by the failure caused by my mistake or by others' mistakes. Conclusions: Marketers should note that even though shame and anger are both negative emotions, customers who feel ashamed differ from customers who feel angry in how they approach consumption. Marketing should focus on adaptive consumption for ashamed consumers and on compensatory consumption for angry consumers.

A Study on Overcoming the Off-Season for Outbound Travel Products (국외여행상품 비수기 극복방안 연구)

  • 최동렬;장양례
    • Culinary science and hospitality research
    • /
    • v.7 no.2
    • /
    • pp.243-266
    • /
    • 2001
  • To achieve the research objective, the study combined a literature review, which established the concepts from related literature, statistical data, and other information, with direct interviews with travel-agency staff and the researcher's own field experience. The results suggest the following. First, diversify marketing. Second, concentrate on attracting visitors with exhibition and expo travel products. Third, pursue mergers between travel agencies. Fourth, strengthen product reservations through online travel agencies. Fifth, implement advance-reservation discounts and introduce a customer-targeted card system. Sixth, promote the development of individual travel products and custom-ordered travel products.


Prediction of Chest Deflection Using Frontal Impact Test Results and Deep Learning Model (정면충돌 시험결과와 딥러닝 모델을 이용한 흉부변형량의 예측)

  • Kwon-Hee Lee;Jaemoon Lim
    • Journal of Auto-vehicle Safety Association
    • /
    • v.15 no.1
    • /
    • pp.55-62
    • /
    • 2023
  • In this study, chest deflection is predicted by applying a deep learning technique to the frontal impact test results of the USNCAP for 110 car models from MY2018 to MY2020. The 120 data points are divided into training data and test data, and the training data are further divided into training and validation data to determine the hyperparameters. In this process, the deceleration data of each vehicle are averaged in 10 ms units from crash pulses measured up to 100 ms. The performance of the deep learning model is measured by the mean squared error and the mean absolute error on the test data. A DNN (Deep Neural Network) model can give different predictions for the same hyperparameter values at every run. Considering this, the mean and standard deviation of the MSE (Mean Squared Error) and the MAE (Mean Absolute Error) are calculated. In addition, the deep learning model's performance with and without the CVW (Curb Vehicle Weight) as an input is also reviewed.
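
  The evaluation protocol described above (10 ms bin-averaging of the crash pulse, plus mean and standard deviation of MSE/MAE over repeated runs) can be sketched with stdlib tools; the numbers below are synthetic, not USNCAP data, and the model itself is omitted.

```python
import statistics

def bin_average(pulse, dt_ms=1, bin_ms=10):
    """Average a deceleration pulse (one sample per dt_ms) into bin_ms bins."""
    step = bin_ms // dt_ms
    return [statistics.mean(pulse[i:i + step])
            for i in range(0, len(pulse), step)]

def mse(y_true, y_pred):
    return statistics.mean((t - p) ** 2 for t, p in zip(y_true, y_pred))

def mae(y_true, y_pred):
    return statistics.mean(abs(t - p) for t, p in zip(y_true, y_pred))

# Synthetic 100 ms pulse sampled every 1 ms -> 10 averaged input features.
pulse = [i * 0.5 for i in range(100)]
features = bin_average(pulse)
print(len(features))                      # 10

print(mse([1, 2], [1.5, 2.5]))            # 0.25
print(mae([1, 2], [1.5, 2.5]))            # 0.5

# DNN training is non-deterministic, so the paper reports the mean and
# standard deviation of the test error over repeated runs; toy values:
run_mse = [1.2, 1.5, 1.1]
print(round(statistics.mean(run_mse), 2), round(statistics.stdev(run_mse), 2))
```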