• Title/Summary/Keyword: vertical resolution

Search Result 355

Single-Channel Seismic Data Processing via Singular Spectrum Analysis (특이 스펙트럼 분석 기반 단일 채널 탄성파 자료처리 연구)

  • Woodon Jeong;Chanhee Lee;Seung-Goo Kang
    • Geophysics and Geophysical Exploration
    • /
    • v.27 no.2
    • /
    • pp.91-107
    • /
    • 2024
  • Single-channel seismic exploration has proven effective in delineating subsurface geological structures using small-scale survey systems. The seismic data acquired through zero- or near-offset methods directly capture subsurface features along the vertical axis, facilitating the construction of corresponding seismic sections. However, substantial noise in single-channel seismic data hampers precise interpretation because of the low signal-to-noise ratio. This study introduces a novel approach that integrates noise reduction and signal enhancement via matrix rank optimization to address this issue. Unlike conventional rank-reduction methods, which retain selected singular values to mitigate random noise, our method optimizes the entire singular value spectrum, thus effectively tackling both the random and erratic noise commonly found in low signal-to-noise environments. Additionally, to enhance the horizontal continuity of seismic events and mitigate signal loss during noise reduction, we introduced an adaptive weighting factor computed from the eigenimage of the seismic section. To assess the robustness of the proposed method, we conducted numerical experiments using single-channel Sparker seismic data from the Chukchi Plateau in the Arctic Ocean. The results demonstrated that the seismic sections had significantly improved signal-to-noise ratios and minimal signal loss. These advancements hold promise for enhancing single-channel, high-resolution seismic surveys and for aiding marine development and the identification of submarine geological hazards in domestic coastal areas.

An Analysis on the Episodes of Large-scale Transport of Natural Airborne Particles and Anthropogenically Affected Particles from Different Sources in the East Asian Continent in 2008 (2008년 동아시아 대륙으로부터 기원이 다른 먼지와 인위적 오염 입자의 광역적 이동 사례에 대한 분석)

  • Kim, Hak-Sung;Yoon, Ma-Byong;Sohn, Jung-Joo
    • Journal of the Korean earth science society
    • /
    • v.31 no.6
    • /
    • pp.600-607
    • /
    • 2010
  • In 2008, multiple episodes of large-scale transport of natural airborne particles and anthropogenically affected particles from different sources in the East Asian continent were identified in National Oceanic and Atmospheric Administration (NOAA) satellite RGB-composite images and in the mass concentrations of ground-level particulate matter. To analyze the aerosol size distribution during the large-scale transport of atmospheric aerosols, both the aerosol optical depth (AOD; proportional to the total aerosol loading in the vertical column) and the fine aerosol weighting (FW; fractional contribution of fine aerosol to the total AOD) from Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol products were used over the East Asian region. Six episodes of massive natural airborne particles, originating from sandstorms in northern China, Mongolia, and the Loess Plateau of China, were observed at Cheongwon. PM10 and PM2.5 stood at 70% and 16% of the total TSP mass concentration, respectively. However, the mass fraction of PM2.5 in TSP increased to as high as 23% in the episode in which the particles flowed in by way of the industrial area of east China. In the other five episodes, in which anthropogenically affected particles flowed into the Korean Peninsula from east China, the mass fractions of PM10 and PM2.5 in TSP reached 82% and 65%, respectively. The average AOD over the East Asian region for the anthropogenically affected particle episodes was 0.42±0.17, compared with 0.36±0.13 for the natural airborne particle episodes. In particular, the regions covering east China, the Yellow Sea, the Korean Peninsula, and the East Sea were characterized by high AOD levels. The average FW values observed during the anthropogenically affected aerosol episodes (0.63±0.16) were moderately higher than those of the natural airborne particle episodes (0.52±0.13).
This observation suggests that anthropogenically affected particles contribute greatly to the atmospheric aerosols in East Asia.
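Given the definition above, the fine-mode share of the column loading follows directly from FW; a one-line illustration (the input values echo the reported episode means, and the function name is mine):

```python
def fine_mode_aod(aod_total, fw):
    # AOD_fine = FW * AOD_total, with FW the fractional contribution
    # of fine aerosol to the total AOD (as defined for MODIS products)
    return fw * aod_total

# Anthropogenic-episode means from the abstract: AOD 0.42, FW 0.63
print(fine_mode_aod(0.42, 0.63))  # ≈ 0.26 of the column AOD is fine-mode
```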

Water droplet generation technique for 3D water drop sculptures (3차원 물방울 조각 생성장치의 구현을 위한 물방울 생성기법)

  • Lin, Long-Chun;Park, Yeon-yong;Jung, Moon Ryul
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.143-152
    • /
    • 2019
  • This paper presents two new techniques for solving two problems of the water curtain: 'shape distortion' caused by gravity and 'resolution degradation' caused by fine satellite droplets around the shape. In the first method, when the user converts a three-dimensional model to a vertical sequence of slices, the slices are evenly spaced, and the method adjusts the time points at which these equidistant slices are created by the nozzle array. In this method, even though the velocity of a water drop increases with time under gravity, the water-drop slices maintain equal intervals at the moment the whole shape is formed, thereby preventing distortion. The second method is called the minimum-time-interval technique. The minimum time interval is the time between the open command of a nozzle and its next open command such that consecutive water drops are created cleanly, without satellite drops. When the user converts a three-dimensional model to a sequence of slices, the slices are placed as close together as possible, not evenly spaced, subject to the minimum time interval between consecutive drops. The slices are arranged at short intervals in the top area of the shape and at long intervals in the bottom area. The minimum time interval is determined in advance by experiment, and consists of the time from the open command to the moment the nozzle is fully open, the time during which the fully open state is maintained, and the time from the close command to the moment the nozzle is fully closed. The second method produces water-drop sculptures with higher resolution than the first.
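The timing correction behind the first method can be sketched with simple free-fall kinematics: if the sculpture should appear at time `t_form`, a drop that must then sit at depth `d` has to be released `sqrt(2d/g)` earlier. This is a hypothetical model of the idea only, ignoring nozzle dynamics and air drag; all names and values are illustrative, not from the paper.

```python
import math

G = 9.81  # m/s^2, gravitational acceleration

def release_times(n_slices, dz, t_form):
    """Release time for each slice so that, at t_form, the drops
    sit at equal vertical spacing dz (slice 0 at the top)."""
    times = []
    for i in range(n_slices):
        depth = i * dz                    # target depth of slice i at t_form
        fall = math.sqrt(2 * depth / G)   # free-fall time to reach that depth
        times.append(t_form - fall)       # deeper slices are released earlier
    return times

# Ten slices 5 cm apart; the shape is complete at t_form = 1.0 s
ts = release_times(10, 0.05, 1.0)
```

Note the release intervals are not uniform: because drops accelerate, consecutive release times get closer together toward the bottom of the shape, which is exactly the non-uniform timing the nozzle array must realize.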

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve, in a very natural way, many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having triangular or trapezoidal shape, or to other predefined shapes. These kinds of functions can cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at those points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can still occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of the membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are typical of fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the word dimension for the 8 fuzzy sets would be 8 × 5 bits, and the dimension of the memory would therefore have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized accordingly. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. This constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
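The sizing arithmetic in the abstract can be checked directly. This sketch just reproduces the bit counts (the function and variable names are mine, not from the paper): the sparse scheme stores at most nfm (membership value, set index) pairs per universe element, versus one value per fuzzy set in the vectorial method.

```python
import math

def word_bits(n_fuzzy_sets, truth_levels, nfm):
    """Bits per memory word under the sparse scheme: nfm pairs of
    (membership value, fuzzy-set index) per universe element."""
    dm_m = math.ceil(math.log2(truth_levels))   # bits per membership value
    dm_fm = math.ceil(math.log2(n_fuzzy_sets))  # bits per set index
    return nfm * (dm_m + dm_fm)

U = 128  # universe-of-discourse elements = memory rows (memory depth)
sparse = word_bits(n_fuzzy_sets=8, truth_levels=32, nfm=3)  # 3*(5+3)
dense = 8 * 5  # vectorial method: one 5-bit value per fuzzy set

print(sparse, U * sparse)  # 24 3072  -> the 128*24-bit memory
print(dense, U * dense)    # 40 5120  -> the 128*40-bit memory
```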


The Utilization of DEM Made by Digital Map in Height Evaluation of Buildings in a Flying Safety Area (비행안전구역 건물 높이 평가에서 수치지형도로 제작한 DEM의 활용성)

  • Park, Jong-Chul;Kim, Man-Kyu;Jung, Woong-Sun;Han, Gyu-Cheol;Ryu, Young-Ki
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.14 no.3
    • /
    • pp.78-95
    • /
    • 2011
  • This study developed various DEMs with different spatial resolutions using several interpolation methods applied to a 1:5,000 digital map, and evaluated their vertical accuracy against check-point data obtained from a network RTK GPS survey. The results suggest that a DEM developed with the TIN-based Terrain method performs well in evaluating the height restriction of buildings in a flying safety area, considering overall RMSE values, land-type RMSE values, and profile evaluation results. It was also found that three meters is an appropriate spatial resolution for a DEM used to evaluate height restrictions in a flying safety area. Meanwhile, elevation values obtained from the DEM are not point estimates but interval estimates. These can be used to check whether the height of buildings in the vicinity of an airfield violates the height limitation of the area. To judge whether a building height measured as an interval estimate violates the height limitation, this study adopted three classes: 1) high probability of violation, 2) low probability of violation, and 3) inconclusive. The results provide an important basis for developing a GIS for evaluating the height restriction of buildings in the vicinity of an airfield. Furthermore, although the results are limited to the study area, the vertical accuracy values of a DEM constructed from a two-dimensional digital map may provide useful information to researchers who try to use DEMs.
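The three-way decision described above can be sketched as an interval-against-limit comparison. This is a hypothetical rendering of the idea, not the paper's procedure: the half-width `k * rmse` (with `k = 1.96` for a 95% interval) is my assumption for how the interval estimate might be formed.

```python
def violation_class(height_est, rmse, limit, k=1.96):
    """Classify a building against a height limit using the interval
    estimate [h - k*rmse, h + k*rmse] instead of a point estimate."""
    lo, hi = height_est - k * rmse, height_est + k * rmse
    if lo > limit:
        return "high probability of violation"   # whole interval above limit
    if hi < limit:
        return "low probability of violation"    # whole interval below limit
    return "inconclusive"                        # interval straddles the limit

# Illustrative checks against a 50 m limit with a 2 m vertical RMSE
print(violation_class(55.0, 2.0, 50.0))  # high probability of violation
print(violation_class(40.0, 2.0, 50.0))  # low probability of violation
print(violation_class(49.5, 2.0, 50.0))  # inconclusive
```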

Analysis of Optical Characteristic Near the Cloud Base of Before Precipitation Over the Yeongdong Region in Winter (영동지역 겨울철 스캔라이다로 관측된 강수 이전 운저 인근 수상체의 광학 특성 분석)

  • Nam, Hyoung-Gu;Kim, Yoo-Jun;Kim, Seon-Jeong;Lee, Jin-Hwa;Kim, Geon-Tea;An, Bo-Yeong;Shim, Jae-Kwan;Jeon, Gye-hak;Choi, Byoung-Choel;Kim, Byung-Gon
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.2_1
    • /
    • pp.237-248
    • /
    • 2018
  • The vertical distribution of hydrometeors near the cloud base before precipitation was analyzed using a scanning lidar, rawinsonde data, and the Cloud-Resolving Storm Simulator (CReSS). This study focuses mainly on 13 December 2016. The typical synoptic pattern of a lake-effect snowstorm induced easterlies in the Yeongdong region. Clouds generated by the large temperature difference between the 850 hPa level and the sea surface (SST) penetrated the Yeongdong region with northerly and northeasterly winds, which eventually resulted in precipitation. The cloud-base height before the precipitation changed from 750 m to 1,280 m, in agreement with that from the ceilometer at Sokcho; however, the ceilometer tended to detect the cloud base 50~100 m below the strong signal in the lidar backscattering coefficient. The depolarization ratio increased with height while the backscattering coefficient decreased at about 1,010~1,200 m above the ground; the lidar signal can be interpreted as attenuating with penetration depth into a cloud layer of nonspherical hydrometeors (snow, ice cloud). An increase in the backscattering signal and a decrease in the depolarization ratio occurred in the 800~1,010 m layer, probably associated with an increase in non-spherical particles. There seemed to be a shallow liquid layer with a low depolarization ratio (<0.1) at 850~900 m. In the 680~850 m layer, the backscattering coefficient and depolarization ratio increased together with altitude, and the depolarization ratio reached its maximum value (0.6) in this height range; this suggests that nonspherical hydrometeors were distributed there at low density. The depolarization ratio and the backscattering coefficient did not increase below the observed melting layer at 680 m. The lidar has the disadvantage that its beam has difficulty penetrating deep into clouds because of attenuation.
However, it is promising for distinguishing hydrometeor morphology by means of the depolarization ratio and the backscattering coefficient, since its high vertical resolution (2.5 m) enables detailed analysis of cloud microphysics. This would contribute to understanding the microphysics of cold clouds and snowfall when remote sensing instruments, including lidar and radar, and in-situ measurements are utilized together in a timely manner.
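The phase discrimination used above rests on the depolarization ratio: near-zero values indicate spherical (liquid) drops, larger values nonspherical ice or snow. A minimal sketch of that reading, where the 0.1 liquid threshold comes from the abstract and everything else (function name, the ice/liquid labels as a binary split) is my simplification:

```python
def classify_hydrometeor(depol_ratio):
    """Rough phase discrimination from a lidar depolarization ratio.
    < 0.1 suggests spherical (liquid) droplets, as in the shallow
    liquid layer reported at 850~900 m; larger values suggest
    nonspherical hydrometeors (snow, ice)."""
    if depol_ratio < 0.1:
        return "liquid (spherical)"
    return "nonspherical (ice/snow)"

print(classify_hydrometeor(0.05))  # liquid (spherical)
print(classify_hydrometeor(0.6))   # nonspherical (ice/snow)
```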

An Installation and Model Assessment of the UM, U.K. Earth System Model, in a Linux Cluster (U.K. 지구시스템모델 UM의 리눅스 클러스터 설치와 성능 평가)

  • Daeok Youn;Hyunggyu Song;Sungsu Park
    • Journal of the Korean earth science society
    • /
    • v.43 no.6
    • /
    • pp.691-711
    • /
    • 2022
  • A state-of-the-art Earth system model, as a virtual Earth, is required for studies of current and future climate change and climate crises. Such a complex numerical model can account for almost all human activities and natural phenomena affecting the atmosphere of Earth. The Unified Model (UM) from the United Kingdom Meteorological Office (UK Met Office) is among the best Earth system models available as a scientific tool for studying the atmosphere. However, owing to the expensive numerical integration and the substantial output size required to run the UM, individual research groups have had to rely solely on supercomputers. The limitations of such computer resources, especially computing environments blocked from outside network connections, reduce the efficiency and effectiveness of research using the model, as well as of improving its component codes. Therefore, this study presents detailed guidance for installing a new version of the UM on high-performance parallel computers (Linux clusters) owned by individual researchers, which should help researchers work easily with the UM. The numerical integration performance of the UM on a Linux cluster was also evaluated for two model resolutions, N96L85 (1.875° × 1.25° with 85 vertical levels up to 85 km) and N48L70 (3.75° × 2.5° with 70 vertical levels up to 80 km). The one-month integration times using 256 cores for AMIP and CMIP simulations at N96L85 resolution were 169 and 205 min, respectively; the one-month integration time for an N48L70 AMIP run using 252 cores was 33 min. Simulated 2-m surface temperature and precipitation intensity were compared with ERA5 reanalysis data. The spatial distributions of the simulated results agreed qualitatively with those of ERA5, despite quantitative differences caused by the different resolutions and by atmosphere-ocean coupling.
In conclusion, this study has confirmed that UM can be successfully installed and used in high-performance Linux clusters.
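The resolution labels above decode mechanically. Assuming the usual Met Office N-numbering (an N-number n implies 2n grid points in longitude and 1.5n rows in latitude; this convention is my reading, not stated in the abstract), the quoted grid spacings follow:

```python
def um_grid_spacing(n):
    """Grid spacing (degrees lon, degrees lat) for a UM N<n> grid,
    assuming 2n longitude points and 1.5n latitude rows."""
    d_lon = 360.0 / (2 * n)
    d_lat = 180.0 / (1.5 * n)
    return d_lon, d_lat

print(um_grid_spacing(96))  # (1.875, 1.25) -> N96L85 as in the abstract
print(um_grid_spacing(48))  # (3.75, 2.5)  -> N48L70
```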

Performance Evaluation of Siemens CTI ECAT EXACT 47 Scanner Using NEMA NU2-2001 (NEMA NU2-2001을 이용한 Siemens CTI ECAT EXACT 47 스캐너의 표준 성능 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.3
    • /
    • pp.259-267
    • /
    • 2004
  • Purpose: NEMA NU2-2001 was proposed as a new standard for the performance evaluation of whole-body PET scanners. In this study, the system performance of the Siemens CTI ECAT EXACT 47 PET scanner, including spatial resolution, sensitivity, scatter fraction, and count-rate performance in 2D and 3D mode, was evaluated using this new standard method. Methods: The ECAT EXACT 47 is a BGO-crystal PET scanner covering an axial field of view (FOV) of 16.2 cm; retractable septa allow 2D and 3D data acquisition. All PET data were acquired according to the NEMA NU2-2001 protocols (coincidence window: 12 ns; energy window: 250~650 keV). For the spatial resolution measurement, an F-18 point source was placed at the center of the axial FOV ((a) x=0, y=1; (b) x=0, y=10; (c) x=10, y=0 cm) and at a position one fourth of the axial FOV from the center ((a) x=0, y=1; (b) x=0, y=10; (c) x=10, y=0 cm), where x and y are the transaxial horizontal and vertical directions and z is the scanner's axial direction. Images were reconstructed using FBP with a ramp filter without any post-processing. To measure the system sensitivity, the NEMA sensitivity phantom, filled with F-18 solution and surrounded by 1~5 aluminum sleeves, was scanned at the center of the transaxial FOV and at 10 cm offset from the center. Attenuation-free sensitivity values were estimated by extrapolating the data to zero wall thickness. The NEMA scatter phantom, 70 cm in length, was filled with F-18 or C-11 solution (2D: 2,900 MBq; 3D: 407 MBq), and coincidence count rates were measured for 7 half-lives to obtain the noise-equivalent count rate (NECR) and scatter fraction. We confirmed that the dead-time loss of the last frame was below 1%. The scatter fraction was estimated by averaging the true-to-background (scatter+random) ratios of the last 3 frames, in which the fractions of the random rate are negligibly small.
Results: At 1 cm offset from the center, the axial resolutions were 0.62 and 0.66 cm (FBP in 2D and 3D) and the transverse resolutions were 0.67 and 0.69 cm (FBP in 2D and 3D). At 10 cm offset from the center, the axial, transverse radial, and transverse tangential resolutions were 0.72 and 0.68 cm, 0.63 and 0.66 cm, and 0.72 and 0.66 cm (FBP in 2D and 3D, respectively). Sensitivity values were 708.6 (2D) and 2,931.3 (3D) counts/sec/MBq at the center, and 728.7 (2D) and 3,398.2 (3D) counts/sec/MBq at 10 cm offset from the center. Scatter fractions were 0.19 (2D) and 0.49 (3D). The peak true count rate and NECR were 64.0 kcps at 40.1 kBq/mL and 49.6 kcps at 40.1 kBq/mL in 2D, and 53.7 kcps at 4.76 kBq/mL and 26.4 kcps at 4.47 kBq/mL in 3D. Conclusion: The performance information on the CTI ECAT EXACT 47 PET scanner reported in this study will be useful for the quantitative analysis of data and for determining optimal image acquisition protocols with this widely used clinical and research scanner.
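The count-rate figures above follow the standard NEMA definitions, which are easy to state in code. The kcps values in the example are illustrative only, not the study's measured true/scatter/random breakdown:

```python
def scatter_fraction(scatter, true):
    # SF = S / (S + T): scattered fraction of prompt (true + scattered)
    # coincidences, as averaged over the low-random late frames
    return scatter / (scatter + true)

def necr(true, scatter, random):
    # Noise-equivalent count rate: NECR = T^2 / (T + S + R)
    return true ** 2 / (true + scatter + random)

# Illustrative rates in kcps (not the measured values from the study)
print(necr(64.0, 16.0, 4.0))          # ≈ 48.8 kcps
print(scatter_fraction(19.0, 81.0))   # 0.19, matching the 2D format
```

NECR is always below the true rate (it equals T scaled by the fraction of prompts that are true events), which is why the reported 2D peak NECR (49.6 kcps) sits below the peak true rate (64.0 kcps).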

Acoustic Characteristics of Gas-related Structures in the Upper Sedimentary Layer of the Ulleung Basin, East Sea (동해 울릉분지 퇴적층 상부에 존재하는 가스관련 퇴적구조의 음향 특성연구)

  • Park, Hyun-Tak;Yoo, Dong-Geun;Han, Hyuk-Soo;Lee, Jeong-Min;Park, Soo-Chul
    • Economic and Environmental Geology
    • /
    • v.45 no.5
    • /
    • pp.513-523
    • /
    • 2012
  • The upper sedimentary layer of the Ulleung Basin in the East Sea shows stacked mass-flow deposits: slide/slump deposits on the upper slope, debris-flow deposits on the middle and lower slopes, and turbidites on the basin plain. Shallow gas and gas hydrates are also reported in many areas of the Ulleung Basin and are very important in terms of marine resources, environmental change, and geohazards. This paper studies the acoustic characteristics and distribution patterns of gas-related structures, such as acoustic columns, enhanced reflectors, dome structures, pockmarks, and gas seepage, in the upper sedimentary layer by analysing high-resolution chirp profiles. An acoustic column appears as a transparent pillar shape in the sedimentary layer and occurs mainly on the basin plain. An enhanced reflector is characterized by increased amplitude and extends laterally up to several tens of kilometers. A dome structure is characterized by an upward-convex feature at the seabed and occurs mainly on the lower slope. A pockmark appears as a small crater-like feature and usually occurs on the middle and lower slopes. Gas seepage is commonly found on the middle slope of the southern Ulleung Basin. These gas-related structures seem to be caused mainly by gas migration and escape within the sedimentary layer. Their distribution pattern indicates that their formation in the Ulleung Basin is controlled not only by the sedimentary facies of the upper sedimentary layer but also by gas-solubility changes depending on water depth. In particular, it is interpreted that the chaotic and discontinuous sedimentary structures of the debris-flow deposits facilitate gas migration, whereas the continuous sedimentary layers of the turbidites restrict the vertical migration of gas.

"Legal Study on Boundary between Airspace and Outer Space" (영공(領空)과 우주공간(宇宙空間)의 한계(限界)에 관한 법적(法的) 고찰(考察))

  • Choi, Wan-Sik
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.2
    • /
    • pp.31-67
    • /
    • 1990
  • One of the first issues that arose in the evolution of air law was the determination of the vertical limits of airspace over private property. In 1959 the UN, in its Ad Hoc Committee on the Peaceful Uses of Outer Space, started to give attention to the question of the meaning of the term "outer space". Discussions in the United Nations regarding the delimitation issue were often divided between those in favour of a functional approach ("functionalists") and those seeking the delineation of a boundary ("spatialists"). The functionalists, backed initially by both major space powers, which viewed any boundary as possibly restricting their access to space (whether for peaceful or military purposes), won the first rounds, starting with the 1959 Report of the Ad Hoc Committee on the Peaceful Uses of Outer Space, which did not consider that the topic called for priority consideration. In 1966, however, the spatialists were able to place the issue on the agenda of the Outer Space Committee pursuant to Resolution 2222 (XXI). The spatialists were nonetheless unable to present a common position, since there existed a variety of propositions for the delineation of a boundary. Over the years, the functionalists have seemed to be losing ground. As the element of location is a decisive factor in the choice of the legal regime to be applied, a purely functional approach to the regulation of activities in the space above the Earth does not offer a solution. It is therefore to be welcomed that there is clear evidence of a growing recognition of the defect inherent in such an approach, and that a spatial approach to the problem is gaining support from a growing number of States as well as from publicists.
The search for a solution to the problem of demarcating the two different legal regimes governing the space above the Earth has undoubtedly been facilitated, and a number of countries, among them Argentina, Belgium, France, Italy and Mexico, have already advocated acceptance of the lower boundary of outer space at a height of 100 km. The adoption of the principle of sovereignty at that height does not mean that States would not be allowed to take protective measures against space activities above that height which constitute a threat to their security. A parallel can be drawn with the defence of the State's security on the high seas: measures taken by States in their own protection on the high seas outside territorial waters, provided that they are proportionate to the danger, are not considered to infringe international law. The most important issue in this context relates to the problem of a right of passage for spacecraft through foreign airspace in order to reach outer space. In the reports to former ILA Conferences, an explanation was given of the reasons why no customary rule of freedom of passage for aircraft through foreign territorial airspace could as yet be said to exist. It was suggested, however, that though the essential elements for the creation of a rule of customary international law allowing such passage were still lacking, developments appeared to point to a steady growth of a feeling of necessity for such a rule. A definitive treaty solution of the demarcation problem would require further study, which should be carried out by the UN Outer Space Committee in close co-operation with other interested international organizations, including ICAO. If a limit between airspace and outer space were established, airspace would automatically come under the regime of the Chicago Convention alone. The use of the word "recognize" in Art.
I of the Chicago Convention is an acknowledgement that sovereignty over airspace exists as a general principle of law, the binding force of which exists independently of the Convention. Further, it is important to note that the Article recognizes this sovereignty as existing for every state, holding it immaterial whether the state is or is not a contracting state. The functional criteria having been created by reference either to the nature of the activity or to the nature of the space object, the next hurdle would be to provide methods of verification. With regard to the question of international verification, the establishment of an International Satellite Monitoring Agency would be required. The path towards the successful delimitation of outer space from territorial space is doubtless narrow and stony, but the establishment of a precise legal framework, consonant with the basic principles of international law, for the future activities of states in outer space will, it is still believed, remove a source of potentially dangerous conflicts between states, and furthermore afford some safeguard of the rights and interests of non-space powers, which otherwise are likely to be eroded by incipient customs based on the at present almost complete freedom of action of the space powers.
