• Title/Summary/Keyword: 데이터 평가 모델 (data evaluation model)


Understanding Public Opinion by Analyzing Twitter Posts Related to Real Estate Policy (부동산 정책 관련 트위터 게시물 분석을 통한 대중 여론 이해)

  • Kim, Kyuli;Oh, Chanhee;Zhu, Yongjun
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.56 no.3
    • /
    • pp.47-72
    • /
    • 2022
  • This study aims to understand trends in subjects related to real estate policies and the public's emotional opinions about them. Two keywords related to real estate policies, "real estate policy" and "real estate measure," were used to collect tweets created from February 25, 2008 to August 31, 2021. Of the 91,740 tweets collected, 18,925 remained after preprocessing; these were categorized into supply, real estate tax, interest rate, and population variance, and sentiment analysis and dynamic topic modeling were applied. The keywords of each category are as follows: supply (rental housing, greenbelt, newlyweds, homeless, supply, reconstruction, sale), real estate tax (comprehensive real estate tax, acquisition tax, holding tax, multiple homeowners, speculation), interest rate (interest rate), and population variance (Sejong, new city). The sentiment analysis showed that a person posted on average one or two positive tweets, whereas for negative and neutral tweets a person posted two or three. In addition, we found that some people hold positive as well as negative and neutral opinions toward real estate policies. The dynamic topic modeling analysis identified negative reactions to real estate speculative forces and unearned income as the major negative topics, while expectations of increased housing supply and benefits for homeless people purchasing houses were identified as the major positive topics. Unlike previous studies, which focused on changes in and evaluations of specific real estate policies, this study has academic significance in that it collected posts from Twitter, one of the major social media platforms, applied sentiment analysis and dynamic topic modeling, and identified latent topics and trends in real estate policy over time. The results can help create new policies that take public opinion on real estate policies into consideration.
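A minimal sketch of the two-step pipeline described above (lexicon-based sentiment scoring followed by dynamic topic modeling) is given below, using gensim's LdaSeqModel. The six tokenized tweets, the two time slices, and the tiny sentiment lexicon are invented placeholders, not the authors' data or resources.

```python
"""Illustrative sketch: sentiment labeling plus dynamic topic modeling."""
from gensim.corpora import Dictionary
from gensim.models import LdaSeqModel

tweets = [["부동산", "정책", "공급", "기대"], ["부동산", "대책", "투기", "분노"],
          ["공급", "확대", "기대"], ["종부세", "다주택자", "투기"],
          ["금리", "인상", "걱정"], ["신도시", "공급", "기대"]]  # oldest first
POS, NEG = {"기대"}, {"투기", "분노", "걱정"}  # toy sentiment lexicon

def sentiment(tokens):
    score = sum(t in POS for t in tokens) - sum(t in NEG for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

labels = [sentiment(t) for t in tweets]

dictionary = Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]
# time_slice = number of tweets in each consecutive period
dtm = LdaSeqModel(corpus=corpus, id2word=dictionary,
                  time_slice=[3, 3], num_topics=2)
print(labels)
print(dtm.print_topics(time=0))  # topic-word distributions in period 0
```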

Presenteeism in Agricultural, Forestry and Fishing Workers: Based on the 6th Korean Working Conditions Survey (농업, 임업 및 어업 종사자에서의 프리젠티즘: 제6차 근로환경조사를 바탕으로)

  • Sang-Hee Hong;Eun-Chul Jang;Soon-Chan Kwon;Hwa-Young Lee;Myoung-Je Song;Jong-Sun Kim;Mid-Eum Moon;Sang-Hyeon Kim;Ji-Suk Yun;Young-Sun Min
    • Journal of agricultural medicine and community health
    • /
    • v.49 no.1
    • /
    • pp.1-12
    • /
    • 2024
  • Objectives: Presenteeism, attending work despite illness, is known to impose far greater economic costs on society than taking sick leave. Since COVID-19, there has been public discussion of sickness benefits as a countermeasure against presenteeism; in particular, farmers and fishermen have no institutional mechanism for livelihood support when a non-occupational disease occurs. This study examined the relationship between work in agriculture, forestry, and fishing and presenteeism using the 6th Korean Working Conditions Survey. Methods: Data from the 6th survey, conducted across 17 cities and provinces in Korea from October 2020 to January 2021, were used, covering a total of 34,981 people. Control variables were gender, age, self-rated health, education level, night work, shift work, monthly income, occupation, weekly working hours, and employment status. Results: Farmers and fishermen showed the characteristics of the self-employed and the elderly, and regression analysis showed that their tendency toward presenteeism was 23% higher than that of workers in other industries. Conclusion: This study is significant in that it is representative, being based on the 6th Korean Working Conditions Survey, and objectively demonstrates the need for a sickness benefit for farmers and fishermen, who may otherwise be overlooked.
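The "23% higher" figure is the kind of estimate a logistic regression with an industry indicator yields. Below is an illustrative sketch on synthetic data, not the authors' model or variable set; the DataFrame columns are hypothetical stand-ins for the survey variables.

```python
"""Illustrative only: logistic regression of a presenteeism flag on an
agriculture/forestry/fishing indicator plus controls, on synthetic data.
A '23% increase' corresponds to an odds ratio of ~1.23 (exp(coefficient))."""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "agri_forest_fish": rng.integers(0, 2, n),   # hypothetical indicator
    "age": rng.integers(20, 70, n),
    "weekly_hours": rng.integers(20, 60, n),
})
# synthetic outcome with a built-in log-odds effect of 0.21 (~odds ratio 1.23)
logodds = -1.0 + 0.21 * df.agri_forest_fish + 0.01 * (df.age - 45)
df["presenteeism"] = (rng.random(n) < 1 / (1 + np.exp(-logodds))).astype(int)

fit = smf.logit("presenteeism ~ agri_forest_fish + age + weekly_hours",
                df).fit(disp=False)
print(np.exp(fit.params["agri_forest_fish"]))    # odds ratio, around 1.23
```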

Comparative study of flood detection methodologies using Sentinel-1 satellite imagery (Sentinel-1 위성 영상을 활용한 침수 탐지 기법 방법론 비교 연구)

  • Lee, Sungwoo;Kim, Wanyub;Lee, Seulchan;Jeong, Hagyu;Park, Jongsoo;Choi, Minha
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.181-193
    • /
    • 2024
  • The atmospheric imbalance caused by climate change is increasing precipitation and, with it, the frequency of flooding, so there is a growing need for technology to detect and monitor such events. To minimize flood damage, continuous monitoring is essential, and flood areas can be detected using Synthetic Aperture Radar (SAR) imagery, which is unaffected by weather conditions. The observed data undergo a preprocessing step in which a median filter reduces noise. Classification techniques were then employed to separate water bodies from non-water bodies, with the aim of evaluating the effectiveness of each method for flood detection. In this study, the Otsu method and the Support Vector Machine (SVM) technique were used for this classification, and the overall performance of the models was assessed using a confusion matrix. Suitability for flood detection was evaluated by comparing the Otsu method, an optimal threshold-based classifier, with SVM, a machine learning technique that minimizes misclassifications through training. The Otsu method was suitable for delineating boundaries between water and non-water bodies but exhibited a higher rate of misclassification due to the influence of mixed substances. Conversely, SVM produced a lower false positive rate and proved less sensitive to mixed substances; consequently, SVM exhibited higher accuracy under non-flood conditions. While the Otsu method showed slightly higher accuracy than SVM under flood conditions, the difference was less than 5% (Otsu: 0.93, SVM: 0.90). In pre- and post-flooding conditions, however, the accuracy difference was more than 15%, indicating that SVM is more suitable for water body and flood detection (Otsu: 0.77, SVM: 0.92). Based on these findings, it is anticipated that more accurate detection of water bodies and floods can contribute to minimizing flood-related damage and losses.
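A minimal sketch of the two classifiers compared in the paper, applied to synthetic SAR-like backscatter values rather than real Sentinel-1 imagery (the median filtering and calibration steps are omitted):

```python
"""Otsu thresholding vs. SVM for water/non-water classification, on
synthetic backscatter values; real Sentinel-1 preprocessing is omitted."""
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
water = rng.normal(-18, 2, 500)      # dB backscatter: water is dark in SAR
land = rng.normal(-8, 3, 500)
x = np.concatenate([water, land])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 0 = water, 1 = non-water

# Otsu: single optimal threshold separating the bimodal histogram
t = threshold_otsu(x)
pred_otsu = (x > t).astype(int)

# SVM: trained classifier, less sensitive to mixed pixels per the paper
svm = SVC(kernel="rbf").fit(x.reshape(-1, 1), y)
pred_svm = svm.predict(x.reshape(-1, 1))

print(confusion_matrix(y, pred_otsu))
print(confusion_matrix(y, pred_svm))
```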

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, high reliability and delay sensitivity with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive the technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for cases such as sporting events or emergencies. The second scenario covers support for e-Health, car reliability, and the like; the third scenario is related to VR games requiring delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals of the control plane from the packets of the data plane. One of the best examples of a service demanding low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical example of high delay sensitivity. 5G has to support the high-reliability and delay-sensitivity requirements of V2X in the field of traffic control, and for these reasons V2X is a major delay-critical application. V2X (vehicle-to-infra/vehicle/nomadic) encompasses all types of communication methods applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: first, communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I); second, communication between vehicles (vehicle-to-vehicle; V2V); and third, communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further types are expected to be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, its architecture is significant; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a central controller has to communicate with many nodes and provide the processing power. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a fully centralized SDN structure, research on the optimal size of an SDN domain for processing the information is needed. This study examined the SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network.
In the simulation, because 5G provides a sufficiently high data rate, the information delivered to the car about neighboring vehicles was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and the maximum vehicle speed considered was 30-200 km/h, in order to examine the network architecture that minimizes the delay.
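A back-of-the-envelope sketch (not the authors' system-level simulator) of how the paper's parameter ranges interact: the time a vehicle spends inside one small cell bounds how quickly the SDN controller must process and forward an emergency V2X message.

```python
"""Cell dwell time for the radius (50-100 m) and speed (30-200 km/h)
ranges stated in the paper; illustrative arithmetic only."""

def dwell_time_s(radius_m: float, speed_kmh: float) -> float:
    # longest straight path through a cell is its diameter (2 * radius);
    # dwell time = distance / speed, with km/h converted to m/s
    return 2 * radius_m / (speed_kmh / 3.6)

for r in (50, 100):
    for v in (30, 200):
        print(f"radius {r:>3} m, speed {v:>3} km/h -> "
              f"{dwell_time_s(r, v):5.2f} s in cell")
```

At 200 km/h in a 50 m cell the dwell time is under two seconds, which illustrates why a fully centralized controller can struggle to meet emergency-message deadlines.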

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a landmark victory over Lee Sedol. Many people thought machines would never beat a human at Go because, unlike chess, the number of possible game paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains, and deep learning in particular drew interest as the core AI technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in image recognition, as well as on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, against MLP models, the traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions for applying dropout. The F1 score was used to evaluate how well the models classify the class of interest, rather than overall accuracy. The details of applying each deep learning technique are as follows. The CNN algorithm recognizes features by reading values adjacent to a given value; however, because business data fields are usually independent, the distance between fields carries no meaning. In this experiment, we therefore set the CNN filter size to the number of fields, so that the whole character of the data is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of each field's position. For the dropout technique, each hidden-layer neuron was dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. From the experiment we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNN performed well not only in fields where its effectiveness is proven but also in binary classification problems to which it has rarely been applied. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
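An illustrative Keras sketch of the CNN-with-dropout setup described above: the Conv1D filter spans all input fields at once (filter size = number of fields), a dense hidden layer follows, and dropout is applied at rate 0.5. The layer sizes and the synthetic data are placeholders, not the paper's exact configuration.

```python
"""Sketch of a CNN for tabular binary classification, per the paper's setup."""
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_fields = 16                      # e.g. age, occupation, loan status, ...
x = np.random.rand(1000, n_fields, 1).astype("float32")
y = (x.sum(axis=(1, 2)) > n_fields / 2).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(n_fields, 1)),
    # one filter position: the kernel spans every field simultaneously
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),   # extra hidden layer for decisions
    layers.Dropout(0.5),                   # each neuron dropped with p = 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
```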

A Case Study on Venture and Small-Business Executives' Use of Strategic Intuition in the Decision Making Process (벤처.중소기업가의 전략적 직관에 의한 의사결정 모형에 대한 사례연구)

  • Park, Jong An;Kim, Young Su;Do, Man Seung
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.9 no.1
    • /
    • pp.15-23
    • /
    • 2014
  • This paper is a case study of how venture and small-business executives can take advantage of their intuition in situations where the business environment is increasingly uncertain, where a novel situation arises without any data to draw on, where rational decision-making is not possible, and where the business environment is changing. The case study is based on a literature review, in-depth interviews with 16 business managers, and an analysis using Klein, G.'s (1998) "Generic Mental Simulation Model." The "intuition" discussed in this analysis is classified into two types: Expert Intuition, which is based on one's own experiences, and Strategic Intuition, which is based on the experience of others. The case study results revealed that managers used expert intuition and strategic intuition differentially. More specifically, expert intuition was activated effortlessly, while strategic intuition required more time. Also, expert intuition was used mainly for making judgments about events that have already happened, while strategic intuition was used more often for judgments regarding future events. The process of strategic intuition involved (1) strategic concerns, (2) the discovery of a medium, (3) primary mental simulation, (4) the offsetting of key parameters, (5) secondary mental simulation, and (6) the decision itself. These steps were used to develop the "Strategic Intuition Decision-making Model" for venture and small-business executives. The results further showed, first, that the success of decision-making was determined in the secondary mental simulation stage; second, that more management difficulty was encountered when expert intuition was used more than strategic intuition; and lastly, that strategic intuition can be taught.


Estimation of Uranium Particle Concentration in the Korean Peninsula Caused by North Korea's Uranium Enrichment Facility (북한 우라늄 농축시설로 인한 한반도에서의 공기중 우라늄 입자 농도 예측)

  • Kwak, Sung-Woo;Kang, Han-Byeol;Shin, Jung-Ki;Lee, Junghyun
    • Journal of Radiation Protection and Research
    • /
    • v.39 no.3
    • /
    • pp.127-133
    • /
    • 2014
  • North Korea's uranium enrichment facility is a matter of international concern, and of particular concern to South Korea with regard to the security and safety of the country. This situation requires continuous monitoring of the DPRK and emergency preparedness on the part of the ROK. To assess the detectability of an undeclared uranium enrichment plant in North Korea, uranium concentrations in the air at both short and long distances from the enrichment facility were estimated. UF6 source terms were determined using existing information on the North Korean facility and operating-experience data from enrichment plants in other countries. Using the calculated source terms, two atmospheric dispersion models (the Gaussian plume model and HYSPLIT) and meteorological data were used to estimate uranium particle concentrations from the Yongbyon enrichment facility. The maximum uranium concentration and its location depend on the meteorological conditions and the height of the UF6 release point. This study showed that the maximum uranium concentration around the enrichment facility was about 1.0×10⁻⁷ g·m⁻³, located within about 0.4 km of the facility. It was assumed that a uranium sample of about a few micrograms (µg) could be obtained, and a few micrograms of uranium can easily be measured with current instruments. By contrast, the uranium concentration at distances of more than 100 km from the facility was estimated at about 1.0×10⁻¹³ to 1.0×10⁻¹⁵ g·m⁻³, which is below background level. Therefore, based on the results of our paper, an air sample taken in the vicinity of the Yongbyon enrichment facility could be used to determine whether or not North Korea is carrying out an undeclared nuclear program, whereas air samples taken at distances of a few hundred kilometers would make detecting clandestine nuclear activities difficult.
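An illustrative implementation of the first of the two dispersion models named above, the Gaussian plume model (HYSPLIT is not reproduced here). The source strength Q, release height H, and wind speed u are made-up values, and the dispersion coefficients use Briggs rural formulas for neutral (class D) stability as a stand-in for the paper's meteorological inputs.

```python
"""Ground-reflected Gaussian plume concentration vs. downwind distance."""
import numpy as np

Q = 1e-6      # source term, g/s (hypothetical)
H = 10.0      # release height, m (hypothetical)
u = 3.0       # wind speed, m/s (hypothetical)

def concentration(x, y=0.0, z=0.0):
    """Concentration in g/m^3 at downwind distance x (m), crosswind
    offset y (m), and height z (m), with ground reflection."""
    sy = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # Briggs rural, class D
    sz = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return (Q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))

for x in (400, 100_000):                      # near the plant vs. ~100 km away
    print(f"x = {x:>7} m: C = {concentration(x):.2e} g/m^3")
```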

A personalized TV service under Open network environment (개방형 환경에서의 개인 맞춤형 TV 서비스)

  • Lye, Ji-Hye;Pyo, Sin-Ji;Im, Jeong-Yeon;Kim, Mun-Churl;Lim, Sun-Hwan;Kim, Sang-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2006.11a
    • /
    • pp.279-282
    • /
    • 2006
  • IPTV broadcasting over IP networks has been recognized as a new revenue model, and Korean carriers such as KT and SKT are currently preparing or running IPTV trial services. Unlike previous one-way broadcasting, IPTV aims at two-way broadcasting that emphasizes interaction with the user, so innovative broadcast services different from those to date are expected. However, although it appears that many telecommunication and broadcasting companies could participate in IPTV services, in reality a few large carriers run limited businesses targeting only the subscribers of their own networks. This is because the infrastructure for IPTV services is not yet in place and there are too many protocols that a service developer must know to satisfy the concept of a converged broadcasting-telecommunication network. This paper therefore proposes Open APIs as a means of overcoming this situation. Scenarios for personalized broadcasting were reconstructed with reference to TV-Anytime benchmarking and user scenarios, and from these scenarios the basic, essential functions of the converged network for IPTV broadcast services were defined as Open API functions. The broadcast services here are NDR, EPG, and personalized advertisement services; the server for each service resides on the converged network, and since the APIs these servers expose are used by other applications, the most basic functions are defined. In addition, the service was verified by implementing a personalized broadcasting application using the proposed Open API functions. Open APIs are functions exposed through web services and can be used from other networks through a gateway. Each Open API function definition consists of a function name, its functionality, and input/output parameters. The user description information and content description information delivered for the personalized service were defined using the metadata schema defined by the TV-Anytime Forum.
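A hypothetical sketch of how one Open API function in the paper's style (name, functionality, typed input/output parameters) might look. The function name, fields, and stub behavior are illustrative inventions, not the paper's actual API; only the CRID (content reference identifier) concept comes from TV-Anytime.

```python
"""Hypothetical Open API function definition for a personalized EPG service."""
from dataclasses import dataclass
from typing import List

@dataclass
class ProgramInfo:                 # content description, TV-Anytime style
    crid: str                      # content reference identifier
    title: str
    genre: str
    start_time: str                # ISO 8601

def get_personalized_epg(user_id: str, max_items: int = 10) -> List[ProgramInfo]:
    """Return EPG entries ranked against the user's usage profile.
    Stub implementation; a real server would query profile and EPG stores."""
    demo = [ProgramInfo("crid://example.com/1", "News 9", "news",
                        "2006-11-01T21:00:00")]
    return demo[:max_items]

print(get_personalized_epg("user-42"))
```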


A Topic Modeling-based Recommender System Considering Changes in User Preferences (고객 선호 변화를 고려한 토픽 모델링 기반 추천 시스템)

  • Kang, So Young;Kim, Jae Kyeong;Choi, Il Young;Kang, Chang Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.43-56
    • /
    • 2020
  • Recommender systems help users make the best choice among various options, and they play an especially important role on internet sites, where countless pieces of digital information are generated every second. Many studies on recommender systems have focused on recommendation accuracy. However, some problems must be overcome for a recommender system to be commercially successful. First, there is a lack of transparency: users cannot know why products are recommended. Second, the recommender system cannot immediately reflect changes in user preferences: although users' preferences for products change over time, the model must be rebuilt to reflect them. Therefore, in this study, we proposed a recommendation methodology that applies topic modeling and sequential association rule mining to review data to solve these problems. Product reviews provide useful information for recommendations because they include not only product ratings but also content such as user experiences and emotional states, and thus imply user preference for the product. Topic modeling is therefore useful for explaining why items are recommended to users, and sequential association rule mining is useful for identifying changes in user preferences. The proposed methodology is divided into two phases. The first phase creates a user profile based on topic modeling: after extracting topics from user reviews of products, a user profile over topics is created. The second phase recommends products using sequential rules that appear in users' buying behaviors over time, where the buying behaviors are derived from changes in each user's topics. A collaborative filtering-based recommender system was developed as a benchmark, and we compared the performance of the proposed methodology with it using Amazon's review dataset. Accuracy, recall, precision, and F1 were used as evaluation metrics. For topic modeling, collapsed Gibbs sampling was conducted, and 15 topics were extracted. Among the main topics, topics 1, 3, 4, 7, 9, 13, and 14 relate to "comedy shows," "high-teen drama series," "crime investigation drama," "horror theme," "British drama," "medical drama," and "science fiction drama," respectively. The comparative analysis showed that the proposed methodology outperformed the collaborative filtering-based recommender system. From the results, we found that the period just prior to the recommendation is very important for inferring changes in user preference. Therefore, the proposed methodology not only secures the transparency of the recommender system but also reflects user preferences that change over time. However, the methodology has some limitations: it cannot make fine-grained recommendations when a topic contains many products, and the number of sequential patterns is small because the number of topics is small. Future research should address these limitations.
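A toy sketch of the two-phase methodology: (1) build per-user topic profiles from reviews, (2) mine patterns over how each user's dominant topic changes over time. gensim's variational LDA stands in for the paper's collapsed Gibbs sampler, the mini review corpus is invented, and simple transition counting stands in for full sequential association rule mining.

```python
"""Sketch: topic-based user profiles plus sequential topic transitions."""
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models import LdaModel

reviews = [["funny", "comedy", "laugh"], ["crime", "detective", "case"],
           ["comedy", "sitcom", "laugh"], ["hospital", "doctor", "drama"]]
dictionary = Dictionary(reviews)
corpus = [dictionary.doc2bow(r) for r in reviews]
lda = LdaModel(corpus, id2word=dictionary, num_topics=3, random_state=0)

# Phase 1: dominant topic per review = a time-stamped user-profile entry
dominant = [max(lda.get_document_topics(bow), key=lambda t: t[1])[0]
            for bow in corpus]

# Phase 2 (toy): count topic-to-topic transitions in chronological order;
# frequent transitions play the role of sequential association rules
transitions = Counter(zip(dominant, dominant[1:]))
print(dominant, transitions.most_common(2))
```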

Modified Traditional Calibration Method of CRNP for Improving Soil Moisture Estimation (산악지형에서의 CRNP를 이용한 토양 수분 측정 개선을 위한 새로운 중성자 강도 교정 방법 검증 및 평가)

  • Cho, Seongkeun;Nguyen, Hoang Hai;Jeong, Jaehwan;Oh, Seungcheol;Choi, Minha
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.665-679
    • /
    • 2019
  • Mesoscale soil moisture measurement from the promising Cosmic-Ray Neutron Probe (CRNP) is expected to bridge the gap between large-scale microwave remote sensing and point-based in-situ soil moisture observations. Traditional calibration based on the N₀ method is used to convert the neutron intensity measured at the CRNP to field-scale soil moisture. However, the static calibration parameter N₀ used in the traditional technique is insufficient to quantify long-term soil moisture variation and is easily influenced by various time-variant factors, contributing to high uncertainties in the CRNP soil moisture product. Consequently, in this study, we proposed a modified traditional calibration method, the Dynamic-N₀ method, which takes the temporal variation of N₀ into account to improve CRNP-based soil moisture estimation. In particular, a nonlinear regression method was developed to directly estimate the time series of N₀ from the corrected neutron intensity; the N₀ time series was then reapplied to generate the soil moisture. We evaluated the performance of the Dynamic-N₀ method for soil moisture estimation against the traditional one using a weighted in-situ soil moisture product. The results indicated that the Dynamic-N₀ method outperformed the traditional calibration technique: the correlation coefficient increased from 0.70 to 0.72, the RMSE decreased from 0.036 to 0.026 m³·m⁻³, and the bias from -0.006 to -0.001 m³·m⁻³. The superior performance of the Dynamic-N₀ calibration method revealed that the temporal variability of N₀ was caused by hydrogen pools surrounding the CRNP. Although several uncertainty sources contributing to the variation of N₀ were not fully identified, the proposed calibration method gives new insight into improving field-scale soil moisture estimation from the CRNP.
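A sketch of the standard CRNP conversion (using the widely cited Desilets et al. 2010 coefficients) with a simple per-timestep inversion for N₀, as a simplified stand-in for the paper's nonlinear-regression Dynamic-N₀ estimation. The neutron counts and in-situ soil moisture below are synthetic.

```python
"""Desilets-equation CRNP conversion and a dynamic-N0 inversion sketch."""
import numpy as np

A0, A1, A2 = 0.0808, 0.372, 0.115   # Desilets et al. (2010) coefficients

def theta(N, N0):
    """Volumetric soil moisture (m^3 m^-3) from corrected neutron count N."""
    return A0 / (N / N0 - A1) - A2

def n0_from_obs(N, theta_obs):
    """Invert the Desilets equation for N0 given in-situ soil moisture."""
    return N / (A1 + A0 / (theta_obs + A2))

rng = np.random.default_rng(1)
true_N0 = 2800 + 50 * np.sin(np.linspace(0, 6, 120))  # time-varying N0
theta_insitu = 0.15 + 0.05 * rng.random(120)
N = true_N0 * (A1 + A0 / (theta_insitu + A2))         # synthetic counts

N0_dynamic = n0_from_obs(N, theta_insitu)             # tracks true_N0
N0_static = N0_dynamic.mean()                         # traditional single N0
rmse = np.sqrt(np.mean((theta(N, N0_static) - theta_insitu) ** 2))
print(f"static-N0 soil moisture RMSE: {rmse:.4f} m^3/m^3")
```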