Remote sensing is the field that integrates the information, technology, and analysis of data collected by satellite and airborne sensor systems observing the earth’s surface, atmosphere, and near subsurface. Remote sensing can be described generally as the observation of objects or phenomena from afar, the opposite of in situ observation, in which those measuring are in direct contact with the objects or phenomena being studied. But since the 1950s, when the term was coined by Evelyn L. Pruitt of the U.S. Office of Naval Research, it has come to mean more specifically the use of aerial photography and satellite imagery for monitoring the earth’s surface (including the atmosphere and, more recently, some substrate). The mechanism that acquires the information is referred to as a sensor, and the vehicle that carries it (an aircraft or satellite) is referred to as the platform; taken together, these are called a sensor system.
Early remote sensing efforts were limited by two factors: camera equipment and the means of putting that equipment into the air. Passive, or optical, remote sensing uses the sun’s energy and measures reflections of that energy. The human eye is sensitive to energy from the visible portion of the electromagnetic spectrum, spanning wavelengths of roughly 400–700 nanometers and corresponding, in order of increasing wavelength, to blue, green, and red light. The first permanent photograph was taken by Frenchman Joseph Nicéphore Niépce in 1826 over an eight-hour exposure.
Aerial Photography
Once optical equipment and darkroom techniques were sufficient for photography, many creative attempts were made to take photos from above using hot-air balloons and even pigeons. The benefit of seeing the earth’s surface from above (known as a nadir or orthogonal perspective) was the ability to capture information over a large area at once (called a synoptic view) without the danger of direct contact. The development of aerial photography was thus often advanced by military and defense efforts.
One notable example came in the development of techniques that could record the reflection of the sun’s energy beyond what the human eye can detect: infrared, the light just past the red light humans can see. The wavelength of infrared light closest to red (called near infrared, or NIR) is reflected very strongly by healthy green vegetation, in amounts two to five times higher than the green light that human eyes see reflected from vegetation. (This light, or electromagnetic energy, is not to be confused with far infrared, also known as thermal infrared, which humans sense as heat.) Researchers and military strategists discovered that with infrared photography they could easily tell the difference between real vegetation and the camouflage used to hide tanks and troops. Lens power also improved, offering greater zoom capability; with more powerful lenses, aircraft could fly higher and still discern important details. Typically, cameras were mounted on the underside of planes and the film was developed once back on the ground.
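The NIR-versus-red contrast described above is the basis of the vegetation indices used in modern digital remote sensing. As a minimal sketch (not part of the original article, and using hypothetical reflectance values), the standard Normalized Difference Vegetation Index (NDVI) works like this:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for one pixel.

    nir and red are reflectance fractions (0.0-1.0). Healthy vegetation
    reflects NIR strongly and red weakly, pushing NDVI toward +1; bare
    surfaces or camouflage paint reflect both similarly, giving ~0.
    """
    if nir + red == 0:
        return 0.0  # avoid division by zero for a dark pixel
    return (nir - red) / (nir + red)

# Hypothetical pixel values, for illustration only:
print(ndvi(0.50, 0.08))  # healthy vegetation: ~0.72
print(ndvi(0.12, 0.10))  # camouflage-like surface: ~0.09
```

This is why infrared photography made camouflage stand out: painted surfaces lack vegetation’s strong NIR reflectance, so the two are easy to separate even when they look identical in visible light.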
Satellites
Many countries’ oldest air photos are associated with military efforts during World War II and subsequent international conflicts and wars. During the decades that followed, improvements in technology facilitated satellite testing and launching (particularly for meteorological purposes) and offered a choice of platform for observing even larger portions of the earth’s surface. Declassified by President Clinton by executive order in February 1995, the CORONA satellite photography database includes over 880,000 photos taken during the CORONA, ARGON, and LANYARD satellite missions from 1959 to 1972. A further challenge of high-altitude photography was returning the film to the earth’s surface for development, typically achieved by dropping the film in canisters to be caught midair by military planes waiting below.
The next technological advance pushing remote sensing forward was the digital acquisition and relay of photography, called imagery when collected and provided in digital form. Working digitally eliminated the need to send film back to the earth’s surface; instead, the data were transmitted to stations on the ground (known as receiving stations) as the satellites passed overhead. Multiple digital devices recorded light reflectance in multiple portions of the electromagnetic spectrum, and an image of each was held for transmission.
The first such satellite sensor system was launched on July 23, 1972. Originally known as ERTS-1 (Earth Resource Technology Satellite), it was soon renamed Landsat 1 and carried the MSS (multispectral scanner) sensor. Landsat 1 imaged four bands, or portions of the electromagnetic spectrum: green, red, and two in the near infrared. In the digital process, the earth’s surface was divided into square cells (called pixels, short for picture elements), and the average reflectance for each band was recorded in each cell. Landsat 1 cell sizes were nominally 79-meter resolution, meaning each cell represented an area on the ground of roughly 79 meters by 79 meters (standard remote sensing interpretation requires at least four cells to accurately identify an object). The Landsat series has provided the longest-running noncommercial acquisition of publicly available satellite imagery in the world. Landsats 2 and 3 provided similar information, and a new TM (thematic mapper) sensor was put onboard Landsats 4, 5, and 6 (though Landsat 6 failed to achieve orbit). The TM sensors added capability in the blue, middle infrared, and far infrared (thermal) portions of the electromagnetic spectrum, moved the other band placements to avoid atmospheric interference, and improved spatial resolution to 30 meters (except the thermal band, at 120 meters). Landsat 7 ETM+ (Enhanced Thematic Mapper Plus, launched in 1999) added a higher-resolution panchromatic (black and white) sensor at 15 meters and improved thermal resolution to 60 meters. Currently no approved and budgeted plan exists for the next earth resource satellite, and the life expectancy of these sensor systems suggests that in the near future many people will have to turn to other governments or the commercial sector for basic earth resource information and monitoring, which likely will not be subsidized or easily available to the public.
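The relationship between pixel resolution and what can actually be identified on the ground can be sketched numerically. The following is an illustrative calculation (not from the original article) applying the figures above, including the rule of thumb that at least four cells are needed to identify an object:

```python
def pixel_area_m2(resolution_m: float) -> float:
    """Ground area represented by one square pixel of the given resolution."""
    return resolution_m ** 2

def min_object_side_m(resolution_m: float, cells_required: int = 4) -> float:
    """Approximate side length of the smallest reliably identifiable object,
    assuming the required cells form a square block (e.g., 2x2 for 4 cells)."""
    return resolution_m * (cells_required ** 0.5)

print(pixel_area_m2(79))      # Landsat MSS: 6241 m^2 of ground per pixel
print(min_object_side_m(79))  # objects should be ~158 m across to identify
print(min_object_side_m(30))  # Landsat TM: ~60 m across
```

This is why the jump from 79-meter MSS pixels to 30-meter TM pixels (and 15-meter panchromatic pixels on ETM+) mattered so much for applications such as urban mapping, where individual features are far smaller than an MSS cell.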
Applications of passive or optical remote sensing have grown considerably over the history of the field and extend far beyond military and defense. Satellite imagery is now considered a critical tool for basic mapping and for environmental and resource management by federal, state, and local U.S. government agencies, and it is used in homeland security, forestry, agriculture, emergency/hazards planning, transportation planning, land zoning, city and rural population estimation, geological surveying, coastal zone management, and air and water pollution monitoring.
Radar and Lidar
The advent of active sensor systems (those that record the return of an energy source of their own making, such as a laser pulse) has expanded the toolkit available for integrated management. RADAR (radio detection and ranging) systems, flown on both aircraft and satellites, can penetrate some cloud conditions that obscure optical sensing and can also provide some penetration of vegetation cover (e.g., detecting human settlements under dense forest canopy) or surficial soils (e.g., detecting changes in the upper levels of the substrate and soils associated with subsidence). LIDAR (light detection and ranging) systems provide extremely accurate and detailed information on the height of the earth’s surface, producing topographic maps of higher quality and definition than can be achieved by manual surveying. Such maps have recently proven extremely useful for monitoring coastal zone erosion and subsidence and for modeling water flow in flood-prone areas. LIDAR systems also have some penetrative capabilities, useful for vegetation and geological applications.
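The ranging principle behind both RADAR and LIDAR is the same: the sensor emits a pulse and times its return, and distance follows from the speed of light. A minimal sketch (illustrative only, with a hypothetical timing value):

```python
C = 299_792_458.0  # speed of light in a vacuum, meters per second

def pulse_range_m(round_trip_s: float) -> float:
    """Distance to the reflecting surface, given the pulse's round-trip
    travel time. The factor of 2 accounts for the out-and-back path."""
    return C * round_trip_s / 2.0

# A pulse returning after ~6.67 microseconds traveled ~1 km each way.
print(pulse_range_m(6.67e-6))  # ~999.8 m
```

Because modern timing electronics resolve nanoseconds, LIDAR can measure surface heights to within centimeters, which is what makes the high-definition topographic mapping described above possible.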
Most recently, combining multiple forms of remotely sensed data has proven extremely effective in assessing damage, targeting rescue attempts, and planning for future disaster management and mitigation in areas hit by earthquakes, tsunamis, and hurricanes. While remote sensing is associated with indirect data collection, most remote sensing applications involve some level of fieldwork to help in model testing (calibration) or accuracy assessment (validation). The integration of field data, whether from household interviews or groundwater readings, is typically achieved by spatially integrating the data sources (often in a GIS, or Geographic Information System). Acquiring the spatial location of all field datapoints is typically enabled through the use of GPS (Global Positioning System) technology.