• Title/Summary/Keyword: Optimal Technique

3,174 search results

The Effect of PET Scan Time on the Off-Line PET Image Quality in Proton Therapy (양성자 치료에서 영상 획득 시간에 따른 Off Line PET의 효율성 검증)

  • Hong, Gun-Chul;Jang, Joon-Yung;Park, Se-Joon;Cha, Eun-Sun;Lee, Hyuk
    • The Korean Journal of Nuclear Medicine Technology, v.21 no.2, pp.74-79, 2017
  • Purpose: Proton therapy can deliver an optimal dose to the tumor while reducing unnecessary dose to normal tissue compared with conventional photon therapy. As proton beams are irradiated into tissue, various positron emitters are produced via nuclear fragmentation reactions. These positron emitters can be used for dose verification with PET. However, the short half-lives of the radioisotopes make it difficult to acquire enough events. The aim of this study is to investigate the effect of off-line PET scan time on PET image quality. Materials and Methods: Spheres of various diameters (D = 37, 28, 22 mm) filled with distilled water were inserted in a 2001 IEC body phantom. Proton beams (100 MU) were then irradiated into the center of each sphere using the wobbling technique with a gantry angle of 0°. The modulation widths of the spread-out Bragg peak were 16.4, 14.7 and 9.3 cm for the spheres of 37, 28 and 22 mm in diameter, respectively. Five minutes after proton irradiation, PET images of the IEC body phantom were acquired for 50 min. PET images for different time courses (0-10, 11-20, 21-30, 31-40 and 41-50 min) were obtained by dividing the acquisition into 10-min frames. To evaluate off-line PET image quality over these time courses, the contrast-to-noise ratio (CNR) was calculated for each sphere. Results: The CNRs of the 37 mm sphere were 0.43, 0.42, 0.40, 0.31 and 0.21 for the time courses of 0-10, 11-20, 21-30, 31-40 and 41-50 min, respectively. The CNRs of the 28 mm sphere were 0.36, 0.32, 0.27, 0.19 and 0.09 for the same time courses. The CNR of the 37 mm sphere decreased rapidly after 30 min of proton irradiation; for the 28 mm and 22 mm spheres, the CNR decreased drastically after 20 min.
Conclusion: Off-line PET imaging time is an important factor in monitoring proton therapy. For a lesion 22 mm in diameter, the off-line PET image should be obtained within 25 min after proton irradiation. For small tumors, a longer PET imaging time will be beneficial for treatment monitoring.
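The CNR trend above can be reproduced from image regions of interest with a short calculation. The abstract does not give its exact CNR formula, so the sketch below uses one common definition (ROI contrast divided by background noise) on synthetic data:

```python
import numpy as np

def cnr(roi_sphere, roi_background):
    # Contrast-to-noise ratio: ROI contrast normalized by background noise.
    # One common definition; the abstract does not state its exact formula.
    roi_sphere = np.asarray(roi_sphere, dtype=float)
    roi_background = np.asarray(roi_background, dtype=float)
    return abs(roi_sphere.mean() - roi_background.mean()) / roi_background.std()

# Synthetic ROIs: sphere counts slightly above a noisy background
rng = np.random.default_rng(0)
sphere_roi = rng.normal(110.0, 10.0, 1000)
background_roi = rng.normal(100.0, 10.0, 1000)
value = cnr(sphere_roi, background_roi)
```

As fewer decay events are collected in later frames, both the contrast term and the noise term degrade, which is why the measured CNR falls with elapsed time.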


Method Development for the Profiling Analysis of Endogenous Metabolites by Accurate-Mass Quadrupole Time-of-Flight(Q-TOF) LC/MS (LC/TOFMS를 이용한 생체시료의 내인성 대사체 분석법 개발)

  • Lee, In-Sun;Kim, Jin-Ho;Cho, Soo-Yeul;Shim, Sun-Bo;Park, Hye-Jin;Lee, Jin-Hee;Lee, Ji-Hyun;Hwang, In-Sun;Kim, Sung-Il;Lee, Jung-Hee;Cho, Su-Yeon;Choi, Don-Woong;Cho, Yang-Ha
    • Journal of Food Hygiene and Safety, v.25 no.4, pp.388-394, 2010
  • Metabolomics aims at the comprehensive, qualitative and quantitative analysis of wide arrays of endogenous metabolites in biological samples. It has shown particular promise in toxicology and drug development, functional genomics, systems biology and clinical diagnosis. In this study, an analytical technique using a mass spectrometer with high-resolution mass measurement, time-of-flight (TOF) MS, was validated for the investigation of amino acids, sugars and fatty acids. Rat urine and serum samples were extracted with each selected solvent (50% acetonitrile, 100% acetonitrile, acetone, methanol, water, ether). We optimized a liquid chromatography/time-of-flight mass spectrometry (LC/TOFMS) system, selecting appropriate columns, mobile phases, fragment energy and collision energy, which could detect 17 metabolites. The spectral data collected from LC/TOFMS were tested by ANOVA. Our results indicated that (1) MS and MS/MS parameters were optimized and the most abundant product ion of each metabolite was selected to be monitored; and (2) by design-of-experiment analysis, methanol yielded the optimal extraction efficiency. The results of this study are therefore expected to be useful in endogenous metabolite research, according to the validated SOP for endogenous amino acids, sugars and fatty acids.

Evaluation for applicability of river depth measurement method depending on vegetation effect using drone-based spatial-temporal hyperspectral image (드론기반 시공간 초분광영상을 활용한 식생유무에 따른 하천 수심산정 기법 적용성 검토)

  • Gwon, Yeonghwa;Kim, Dongsu;You, Hojun
    • Journal of Korea Water Resources Association, v.56 no.4, pp.235-243, 2023
  • Due to the revision of the River Act and the enactment of the Act on the Investigation, Planning, and Management of Water Resources, regular bed change surveys have become mandatory, and a system is being prepared so that local governments can manage water resources in a planned manner. Since the topography of a riverbed cannot be measured directly, it is measured indirectly via contact-type depth measurements such as level surveys or echo sounders, which have low spatial resolution and do not allow continuous surveying owing to constraints in data acquisition. Therefore, depth measurement methods using remote sensing (LiDAR or hyperspectral imaging) have recently been developed. These allow a wider-area survey than contact-type methods: hyperspectral images are acquired from a lightweight hyperspectral sensor mounted on a readily deployable drone, and an optimal band-ratio search algorithm is applied to estimate the depth. In the existing hyperspectral remote sensing technique, specific physical quantities are analyzed after matching the hyperspectral images acquired along the drone's path to a surface-unit image. Previous studies focused primarily on applying this technology to measure the bathymetry of sandy rivers, whereas vegetated beds have rarely been evaluated. In this study, the existing hyperspectral image-based water depth estimation technique was applied to a river with vegetation: spatio-temporal hyperspectral imaging and cross-sectional hyperspectral imaging were performed for two cases in the same area, before and after vegetation removal. The results show that water depth estimation is more accurate in the absence of vegetation; in the presence of vegetation, the depth is estimated with the height of the vegetation recognized as the bottom.
In addition, highly accurate water depth estimation was achieved not only with conventional cross-sectional hyperspectral imaging but also with spatio-temporal hyperspectral imaging. This confirms the possibility of monitoring bed fluctuations (water depth changes) using spatio-temporal hyperspectral imaging.
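The band-ratio depth estimation referred to above can be sketched as follows. This is a generic log-ratio (Stumpf-style) formulation fitted to surveyed depths, not the authors' exact optimal band-ratio search; the reflectance values and depths below are hypothetical:

```python
import numpy as np

def stumpf_ratio(band_i, band_j):
    # Log-ratio transform of two reflectance bands (both in (0, 1));
    # the ratio varies roughly linearly with water depth
    return np.log(band_i) / np.log(band_j)

def fit_depth_model(ratio, depth):
    # Linear regression depth ~ m0 * ratio + m1 against surveyed depths
    m0, m1 = np.polyfit(ratio, depth, 1)
    return m0, m1

# Hypothetical reflectance pairs and matching surveyed depths
b1 = np.array([0.05, 0.04, 0.03, 0.02])   # depth-sensitive band
b2 = np.array([0.10, 0.10, 0.10, 0.10])   # reference band
depths = np.array([1.0, 1.2, 1.5, 1.9])   # m, e.g. from an echo-sounder survey
x = stumpf_ratio(b1, b2)
m0, m1 = fit_depth_model(x, depths)
predicted = m0 * x + m1
```

An "optimal band-ratio search" would repeat this fit over all band pairs and keep the pair with the best regression score; vegetation breaks the fit because the vegetation canopy, not the bed, controls the reflectance.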

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1211-1224, 2023
  • Clouds cause many difficulties in observing land surface phenomena with optical satellites, such as national land observation, disaster response, and change detection. The presence of clouds affects not only the image processing stage but also the final data quality, so they must be identified and removed. In this study, we therefore developed a new cloud detection technique that automatically searches for and extracts the pixels closest to the spectral pattern of clouds in satellite images, selects an optimal threshold, and produces a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image is converted into top-of-atmosphere reflectance. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the top-of-atmosphere reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third, post-processing step, noise in the initial cloud mask is removed and the cloud boundaries and interior are refined. As experimental data for cloud detection, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, showing the diversity of spatial and seasonal cloud distribution, were used. To verify the performance of the proposed method, its results were compared with those generated by a simple thresholding method. The experiments showed that, compared with the existing method, the proposed method detects clouds more accurately by considering the radiometric characteristics of each image in the preprocessing step. The results also showed that the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.) was minimized.
The proposed method improved results (F1-score) by more than 30% over the existing method, but showed limitations in certain images containing snow.
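Of the preprocessing steps named above, triangle thresholding is easy to sketch: draw a line from the histogram peak to its far tail and take the bin farthest below that line as the threshold. The single-band version below is a generic illustration on synthetic reflectance values, not the authors' implementation:

```python
import numpy as np

def triangle_threshold(values, bins=256):
    # Triangle method: line from the histogram peak to the far tail; the
    # threshold is the bin with maximum perpendicular distance to that line.
    hist, edges = np.histogram(values, bins=bins)
    hn = hist.astype(float) / hist.max() * (bins - 1)  # normalize heights
    peak = int(np.argmax(hn))
    tail = bins - 1 if (bins - 1 - peak) > peak else 0  # longer side of peak
    x = np.arange(bins, dtype=float)
    x0, y0, x1, y1 = float(peak), hn[peak], float(tail), hn[tail]
    # Unnormalized point-to-line distance (the constant denominator does
    # not change the argmax, so it is omitted)
    dist = np.abs((y1 - y0) * x - (x1 - x0) * hn + x1 * y0 - y1 * x0)
    lo, hi = sorted((peak, tail))
    idx = lo + int(np.argmax(dist[lo:hi + 1]))
    return edges[idx]

# Synthetic reflectance: a dark surface peak with a small bright (cloud) tail
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(0.2, 0.02, 10000),
                         rng.normal(0.8, 0.02, 300)])
t = triangle_threshold(pixels)
```

The method suits exactly this shape of histogram, a dominant dark mode with a sparse bright tail, which is why it works per image: the threshold adapts to each scene's radiometry rather than being fixed globally.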

Comparative study of flood detection methodologies using Sentinel-1 satellite imagery (Sentinel-1 위성 영상을 활용한 침수 탐지 기법 방법론 비교 연구)

  • Lee, Sungwoo;Kim, Wanyub;Lee, Seulchan;Jeong, Hagyu;Park, Jongsoo;Choi, Minha
    • Journal of Korea Water Resources Association, v.57 no.3, pp.181-193, 2024
  • The atmospheric imbalance caused by climate change is increasing precipitation, resulting in more frequent flooding; accordingly, there is a growing need for technology to detect and monitor these events. To minimize flood damage, continuous monitoring is essential, and flooded areas can be detected in Synthetic Aperture Radar (SAR) imagery, which is not affected by weather conditions. The observed data undergo a preprocessing step in which a median filter reduces noise. Classification techniques were then employed to separate water bodies from non-water bodies, with the aim of evaluating the effectiveness of each method for flood detection. In this study, the Otsu method and the Support Vector Machine (SVM) were used for this classification, and overall model performance was assessed with a confusion matrix. Suitability for flood detection was evaluated by comparing the Otsu method, an optimal threshold-based classifier, with SVM, a machine learning technique that minimizes misclassification through training. The Otsu method delineated the boundaries between water and non-water bodies well but showed a higher misclassification rate due to the influence of mixed materials. Conversely, SVM produced a lower false positive rate and was less sensitive to mixed materials; consequently, SVM was more accurate under non-flood conditions. While the Otsu method was slightly more accurate than SVM under flood conditions, the difference was less than 5% (Otsu: 0.93, SVM: 0.90). However, in pre-flood and post-flood conditions the accuracy difference exceeded 15%, indicating that SVM is more suitable for water body and flood detection (Otsu: 0.77, SVM: 0.92).
Based on the findings of this study, it is anticipated that more accurate detection of water bodies and floods could contribute to minimizing flood-related damages and losses.
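The Otsu baseline used above is a one-dimensional optimal-threshold search and can be written in a few lines. The backscatter values below are hypothetical stand-ins for SAR data (open water appears dark, land brighter), not the study's Sentinel-1 measurements:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # Otsu's method: choose the threshold maximizing between-class variance
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)              # weight of the "below threshold" class
    m = np.cumsum(p * centers)     # cumulative mean
    mt = m[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty classes contribute nothing
    return centers[int(np.argmax(sigma_b))]

# Hypothetical SAR backscatter (dB): open water is dark, land is brighter
rng = np.random.default_rng(1)
backscatter = np.concatenate([rng.normal(-22.0, 1.5, 5000),   # water
                              rng.normal(-10.0, 2.0, 5000)])  # land
t = otsu_threshold(backscatter)
water_mask = backscatter < t
```

A hard threshold like this is exactly what mixed materials defeat: pixels whose backscatter falls between the two modes are assigned by the cut alone, whereas an SVM trained on labeled samples can use additional features to resolve them.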

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.2, pp.125-148, 2018
  • Recently, as demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it attracts many analysts because it is very large in volume and relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification (classifying documents into predetermined categories), topic modeling (extracting major topics from a large number of documents), sentiment analysis or opinion mining (identifying emotions or opinions in texts), and text summarization (summarizing the main contents of one or several documents) have been actively studied. Text summarization in particular is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research follows either the extractive approach, which selectively provides the main elements of a document, or the abstractive approach, which extracts the elements of a document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as far as automatic summarization itself. Most existing studies on summarization quality evaluation prepared manual summaries of documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text by various techniques, and its quality is measured by comparison with the reference document, which serves as the ideal summary.
Reference documents are produced in two main ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this requires human intervention, it takes considerable time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summaries without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary: the more the frequent terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" by term frequency alone is not always a good summary in this essential sense. To overcome the limitations of these previous studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary. We propose a method for automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized for each hotel, and experiments evaluating the quality of the summaries according to the proposed methodology were conducted. We also provide a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and propose a method for performing optimal summarization by varying the sentence-similarity threshold.
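Folding completeness and succinctness into a single F-score presumably follows the familiar F-measure pattern; the sketch below assumes the standard harmonic-mean form and does not reproduce the paper's exact weighting:

```python
def f_score(completeness, succinctness, beta=1.0):
    # F-measure-style combination of two scores in [0, 1].
    # With beta = 1 this is the harmonic mean; beta > 1 weights
    # completeness (the recall-like term) more heavily.
    if completeness == 0.0 and succinctness == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * succinctness * completeness / (b2 * succinctness + completeness)
```

The harmonic mean penalizes imbalance, so a summary that is complete but redundant (or terse but lossy) scores low, which is exactly the trade-off the two concepts are meant to capture.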

Comparison of Virtual Wedge versus Physical Wedge Affecting on Dose Distribution of Treated Breast and Adjacent Normal Tissue for Tangential Breast Irradiation (유방암의 방사선치료에서 Virtual Wedge와 Physical Wedge사용에 따른 유방선량 및 주변조직선량의 차이)

  • Kim Yeon-Sil;Kim Sung-Whan;Yoon Sel-Chul;Lee Jung-Seok;Son Seok-Hyun;Choi Ihl-Bong
    • Radiation Oncology Journal, v.22 no.3, pp.225-233, 2004
  • Purpose: The ideal breast irradiation method should provide an optimal dose distribution in the treated breast volume and a minimum scatter dose to the nearby normal tissue. Physical wedges have been used to improve the dose distribution in the treated breast, but unfortunately introduce an increased scatter dose outside the treatment field, particularly to the contralateral breast. The typical physical wedge (PW) was compared with the virtual wedge (VW) to determine the differences in the dose distribution affecting the treated breast and the contralateral breast, lung, heart and surrounding peripheral soft tissue. Methods and Materials: The data consisted of measurements taken with solid water, a humanoid Alderson Rando phantom, and patients. The radiation doses at the ipsilateral breast and skin, contralateral breast and skin, surrounding peripheral soft tissue, and ipsilateral lung and heart were compared for the physical and virtual wedges, and the dose distributions and DVHs of the treated breast were compared. The beam-on time of each technique was also compared. Furthermore, the doses at the treated breast skin, contralateral breast skin, and skin 1.5 cm from the field margin were measured using TLDs in 7 patients undergoing tangential breast irradiation, and the results were compared with the phantom measurements. Results: The virtual wedge showed lower peripheral doses than the typical physical wedge at 15°, 30°, 45°, and 60°. According to the TLD measurements with the 15° and 30° virtual wedges, the irradiation dose decreased by 1.35% and 2.55% in the contralateral breast and by 0.87% and 1.9% in the skin of the contralateral breast, respectively. Furthermore, the dose decreased by 2.7% and 6.0% in the ipsilateral lung and by 0.96% and 2.5% in the heart.
The VW fields had lower peripheral doses than the PW fields by 1.8% and 2.33%, though the skin dose increased by 2.4% and 4.58% in the ipsilateral breast. VW fields in general use fewer monitor units than PW fields and shortened the beam-on time to about half that of PW. The DVH analysis showed that each delivery technique results in a comparable dose distribution in the treated breast. Conclusion: A modest dose reduction to the surrounding normal tissue and uniform target homogeneity were observed using the VW technique compared with the PW beam in tangential breast irradiation. The VW field is dosimetrically superior to the PW beam and can be an efficient method for minimizing acute and late radiation morbidity and reducing linear accelerator loading by decreasing the radiation delivery time.

Changes of the surface roughness depending on immersion time and powder/liquid ratio of various tissue conditioners (수종의 조직 양화재의 침수시간과 분액비에 따른 표면 거칠기의 변화)

  • Kim, Kyung-Soo;Moon, Hong-Suk;Shim, June-Sung;Jung, Moon-Kyu
    • The Journal of Korean Academy of Prosthodontics, v.47 no.2, pp.108-118, 2009
  • Statement of problem: Volume stability, microstructure reproducibility and fluidity, along with compatibility with dental stone, must be considered in order to use a tissue conditioner as a functional impression material. There are few studies concerning the influence of time in the oral condition on the surface roughness of the stone, or the optimal retention period in the oral cavity in light of such changes. Purpose: The purpose of this study was to determine the influence of the kind of tissue conditioner, its powder/liquid ratio, and immersion time on the surface roughness of the stone. Material and methods: Three tissue conditioners (Coe-Comfort, Visco-Gel, Soft-Liner) were used and grouped into three: group R, mixed at the standard powder/liquid ratio recommended by the manufacturer; group M, mixed with 20% more powder; and group L, mixed with 20% less powder. Specimens were made 20 mm in diameter and 2 mm thick. The specimens of each tissue conditioner were subdivided into 5 groups according to immersion time (0 hours, 1 day, 3 days, 5 days, 7 days), completely immersed in artificial saliva, and stored at 37°C. Specimens whose immersion time had elapsed were taken out and poured with improved stone to make stone specimens, whose surface roughness was measured with a profilometer. Results: Within the limitations of this study, the following results were drawn. 1. The major factor influencing the surface roughness of the stone model made from a tissue conditioner was the retention period (contribution ratio (ρ) = 62.86%, P<.05) of the tissue conditioner in the oral cavity for the functional impression. 2. With Coe-Comfort, a significantly higher mean surface roughness of the stone model was observed compared to Soft-Liner and Visco-Gel as immersion time changed (P<.05). 3.
With group L (less), a significantly higher mean surface roughness of the stone model was observed compared to groups R (recommended) and M (more) as immersion time changed (P<.05). Conclusion: Since the retention period in the oral cavity influences the surface roughness of the stone model the most, and the kind of tissue conditioner and its P/L ratio may also have an influence, clinicians should understand the optimal retention period in the oral cavity and choose the right tissue conditioner in order to make functional impressions with tissue conditioners usefully.

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment, v.7 no.3, pp.103-115, 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because the water then exceeds the drinking water standards in terms of color, taste, turbidity and dissolved iron concentration, and often causes scaling problems within the water supply system. In groundwater from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the formation of the brown precipitates using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of turbidity generation in groundwater. The results of this study are used to suggest both a proper pumping technique to minimize the formation of precipitates and an optimal design of water treatment to improve water quality. The bedrock groundwater in the Pajoo area belongs to the Ca-HCO3 type, evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates were identified as amorphous, Fe-bearing oxides or hydroxides. Multi-step filtration with pore sizes of 6, 4, 1, 0.45 and 0.2 µm showed that the precipitates mostly fall in the colloidal size range (1 to 0.45 µm) but are concentrated (about 81%) in the 1 to 6 µm range in terms of mass (weight) distribution. The large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains up to 3 wt.% Fe. Saturation index calculations (using the computer code PHREEQC), as well as pH-Eh stability relations, also indicate that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly oxidation) due to exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater.
After pumping-out, the groundwater shows progressive decreases of pH, DO and alkalinity with elapsed time, while turbidity increases and then decreases. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation caused by the influx of free oxygen during pumping and storage results in the formation of brown precipitates, dependent on time, PO2 and pH. To obtain drinkable water quality, therefore, the precipitates should be removed by filtration after stepwise storage and aeration in tanks of sufficient volume for sufficient time. The particle size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, as this is most effective for minimizing mixing between deep Fe(II)-rich water and shallow O2-rich water. Simultaneous pumping of shallow O2-rich water in different wells is also recommended.
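The iron-decay regression quoted above is a first-order (exponential) decay and can be evaluated directly; the time units follow the paper's elapsed-time measurements:

```python
import numpy as np

# Regression reported above: Fe(II) = 10.1 * exp(-0.0009 * t), with t the
# elapsed time after pumping-out (time units as measured in the paper)
def fe2_concentration(t):
    return 10.1 * np.exp(-0.0009 * t)

# Characteristic half-time implied by the first-order rate constant
half_time = np.log(2.0) / 0.0009
```

The rate constant sets how long stored water must be aerated before filtration: after one half-time the dissolved Fe(II) has dropped to half its initial value, so tank residence times can be sized from this curve.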


Converting Ieodo Ocean Research Station Wind Speed Observations to Reference Height Data for Real-Time Operational Use (이어도 해양과학기지 풍속 자료의 실시간 운용을 위한 기준 고도 변환 과정)

  • BYUN, DO-SEONG;KIM, HYOWON;LEE, JOOYOUNG;LEE, EUNIL;PARK, KYUNG-AE;WOO, HYE-JIN
    • The Sea: Journal of the Korean Society of Oceanography, v.23 no.4, pp.153-178, 2018
  • Most operational uses of wind speed data require measurements at, or estimates generated for, the reference height of 10 m above mean sea level (AMSL). On the Ieodo Ocean Research Station (IORS), wind speed is measured by instruments installed on the lighthouse tower of the roof deck at 42.3 m AMSL. This preliminary study indicates how these data can best be converted into synthetic 10 m wind speed data for operational use via the Korea Hydrographic and Oceanographic Agency (KHOA) website. We tested three well-known conventional empirical neutral wind profile formulas (a power law (PL); a drag-coefficient-based logarithmic law (DCLL); and a roughness-height-based logarithmic law (RHLL)), and compared their results to those generated using a well-known, highly tested and validated logarithmic model (LMS) with a stability function (ψν), to assess the potential of each method for accurately synthesizing reference-level wind speeds. From these experiments, we conclude that the reliable LMS technique and the RHLL technique are both useful for generating reference wind speed data from IORS observations, since these methods produced very similar results: comparisons between the RHLL and LMS results showed relatively small bias (-0.001 m s⁻¹) and Root Mean Square Deviation (RMSD, 0.122 m s⁻¹). We also compared the synthetic wind speed data generated using each of the four neutral wind profile formulas with Advanced SCATterometer (ASCAT) data. These comparisons revealed that the LMS without ψν produced the best results, with only 0.191 m s⁻¹ of bias and 1.111 m s⁻¹ of RMSD. As well as comparing these four approaches, we also explored potential refinements within each approach.
Firstly, we tested the effect of tidal variations in sea level height on wind speed calculations, by comparing results generated with and without the adjustment of sea level heights for tidal effects. Tidal adjustment of the sea levels used in reference wind speed calculations resulted in remarkably small bias (<0.0001 m s⁻¹) and RMSD (<0.012 m s⁻¹) values compared with calculations performed without adjustment, indicating that this tidal effect can be ignored for IORS reference wind speed estimates. We also estimated surface roughness heights (z0) based on the RHLL and LMS calculations in order to explore the best parameterization of this factor, leading to our recommendation of a new z0 parameterization derived from observed wind speed data. Lastly, we suggest the need to include suitable, experimentally derived surface drag coefficient and z0 formulas within conventional wind profile formulas for strong wind (≥33 m s⁻¹) conditions, since without this the wind adjustment approaches used in this study are optimal only for wind speeds ≤25 m s⁻¹.
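Two of the neutral-profile conversions named above (the power law and the roughness-height logarithmic law) are easy to sketch. The exponent and roughness height below are typical open-sea values assumed for illustration, not the paper's fitted parameters:

```python
import numpy as np

def power_law(u_z, z, z_ref=10.0, alpha=0.11):
    # Power-law (PL) profile: u(z_ref) = u(z) * (z_ref / z) ** alpha.
    # alpha = 0.11 is a typical open-sea exponent, assumed for illustration.
    return u_z * (z_ref / z) ** alpha

def roughness_log_law(u_z, z, z_ref=10.0, z0=0.0002):
    # Neutral logarithmic profile with roughness height z0 (RHLL-style).
    # z0 = 0.0002 m is a typical open-water value, not the paper's estimate.
    return u_z * np.log(z_ref / z0) / np.log(z / z0)

# IORS anemometer height is 42.3 m AMSL; convert down to the 10 m reference
u_42 = 12.0  # m/s, hypothetical observation
u10_pl = power_law(u_42, 42.3)
u10_rhll = roughness_log_law(u_42, 42.3)
```

Both formulas reduce the observed speed when converting from 42.3 m down to 10 m, and their disagreement grows with the assumed roughness, which is why the paper's recommended z0 parameterization (fitted to observed winds) matters for operational accuracy.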