How It Works: Satellite Imagery Interpretation
Scanning the earth from high-flying aircraft or satellites is known as satellite imaging. These platforms provide satellite photos and detailed information about the planet without ever physically touching it. Ever since the launch of the first satellite, humanity has sent up many others for different purposes, and each uses different types of sensors to collect electromagnetic waves reflected by the earth’s surface. Passive sensors do not require any artificial source of energy; instead, they rely on radiation emitted by the sun and reflected off the earth’s surface.
Active sensors, on the other hand, emit their own radiation and analyze the rays reflected back from the earth’s surface. They need a lot of energy to emit that radiation, but one of their main advantages is that they can be used at any time of day and in any season. They can also emit types of radiation that the sun does not provide.
The sun delivers visible and infrared radiation to the earth’s surface, and objects on the earth, such as water, forests, snow, or pavement, reflect it differently. Snow reflects strongly, which is why it appears white. Water reflects little infrared or visible light. Vegetation strongly reflects infrared but absorbs visible light. Most objects can therefore be identified by their reflectance, or “spectral signature,” although this is not always possible because some objects have very similar reflectance. Now, let’s look at how images taken by satellites are processed.
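To make the spectral-signature idea concrete, here is a minimal Python sketch using NDVI, a standard vegetation index built on exactly this “vegetation reflects infrared, absorbs visible” contrast. The reflectance numbers below are illustrative stand-ins, not measured values.

```python
import numpy as np

# Toy reflectance values (fraction of incoming light reflected) for a few
# surface types in the red (visible) and near-infrared (NIR) bands.
# These numbers are illustrative, not measurements.
red = np.array([0.08, 0.05, 0.85])   # water, vegetation, snow
nir = np.array([0.04, 0.45, 0.80])

# NDVI (Normalized Difference Vegetation Index) turns the
# "vegetation reflects NIR, absorbs visible" signature into a single number.
ndvi = (nir - red) / (nir + red)

for name, value in zip(["water", "vegetation", "snow"], ndvi):
    print(f"{name:>10}: NDVI = {value:+.2f}")

# Vegetation gives a strongly positive NDVI, while water and snow stay near
# or below zero: the spectral-signature idea in miniature.
```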
Image processing
Image processing means extracting useful information from the pictures a satellite has taken. It is usually done with specialized software that applies techniques similar to those found in popular image-editing apps such as Adobe Photoshop.
One standard output of satellite image processing is a photo-like image created for printing or viewing. To view a single band of imagery, white is assigned to the pixels with the highest reflectance, black to the pixels with the lowest reflectance, and shades of grey to the pixels in between.
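A rough sketch of that grey-scale rendering is shown below. The band here is synthetic and the output file name is a placeholder; with real data the band would typically be read from a GeoTIFF.

```python
import numpy as np
from PIL import Image

# A synthetic single band of reflectance values standing in for real data.
band = np.random.default_rng(0).random((100, 100))

# Linearly rescale so the brightest pixel becomes white (255), the darkest
# black (0), and everything in between a shade of grey.
lo, hi = band.min(), band.max()
grey = ((band - lo) / (hi - lo) * 255).astype(np.uint8)

# Save as an ordinary viewable image (placeholder file name).
Image.fromarray(grey, mode="L").save("band_greyscale.png")
```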
Another approach is the false-color composite (FCC), a multi-spectral image built from bands in and near the infrared range. This increases the spectral separation between features and makes the data easier to interpret. False-color images sacrifice natural color to make specific features stand out that would otherwise be hard to detect, and the choice of spectral bands is guided by the physical properties of the object you are investigating.
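A common FCC convention maps the near-infrared, red, and green bands onto the red, green, and blue display channels. The sketch below assembles such a composite; the bands are synthetic stand-ins and the file name is a placeholder.

```python
import numpy as np
from PIL import Image

# Synthetic NIR, red, and green bands standing in for real satellite bands.
rng = np.random.default_rng(1)
nir, red, green = (rng.random((100, 100)) for _ in range(3))

def to_uint8(band):
    """Stretch a band to the 0-255 range for display."""
    lo, hi = band.min(), band.max()
    return ((band - lo) / (hi - lo) * 255).astype(np.uint8)

# Classic false-color assignment: NIR -> red channel, red -> green, green -> blue.
# Vegetation, which reflects NIR strongly, would show up bright red.
fcc = np.dstack([to_uint8(nir), to_uint8(red), to_uint8(green)])
Image.fromarray(fcc, mode="RGB").save("false_color_composite.png")
```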
Image processing is more than just displaying a picture. The computer can extract information about an object that cannot be seen with the naked eye. Image classification is the most common procedure used for this; it typically identifies land-cover types such as snow, urbanized areas, water, grasslands, and forests.
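One simple way to illustrate classification is unsupervised clustering: treat every pixel as a point in “band space” and group similar pixels together, then let an analyst match the resulting classes to land-cover types. The sketch below uses scikit-learn’s KMeans on a synthetic image; the class count and data are illustrative assumptions, not a fixed recipe.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 3-band image (rows x cols x bands); in practice the bands would
# come from the satellite product itself.
rng = np.random.default_rng(2)
image = rng.random((100, 100, 3))

# Unsupervised classification: cluster pixels by their band values into a
# chosen number of land-cover classes.
pixels = image.reshape(-1, image.shape[-1])
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)

# Reshape the class labels back into map form: each pixel now carries a class
# id, which an analyst would label (e.g. water, forest, grassland, urban).
class_map = labels.reshape(image.shape[:2])
print(np.unique(class_map, return_counts=True))
```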
Some of the other things we can do in image processing include:
- Correcting noise caused by sensor malfunction
- Correcting issues caused by atmospheric interference
- Projecting the image onto another map projection
- Stretching or augmenting the contrast of an image (see the sketch after this list)
- Distinguishing the edges between different types of features
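Contrast stretching, for instance, can be sketched as a simple percentile stretch. The percentile cut-offs and the synthetic data below are illustrative choices, not a fixed standard.

```python
import numpy as np

def percentile_stretch(band, low=2, high=98):
    """Clip a band to the given percentiles and rescale to 0-255.

    A simple form of contrast stretching: extreme values no longer dominate
    the display range, so subtle differences become visible.
    """
    p_lo, p_hi = np.percentile(band, [low, high])
    clipped = np.clip(band, p_lo, p_hi)
    return ((clipped - p_lo) / (p_hi - p_lo) * 255).astype(np.uint8)

# Synthetic band with a few extreme values standing in for sensor noise.
rng = np.random.default_rng(3)
band = rng.normal(loc=0.3, scale=0.05, size=(100, 100))
band[::25, ::25] = 5.0  # a handful of saturated pixels

stretched = percentile_stretch(band)
print(stretched.min(), stretched.max())  # 0 255
```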
Scale and Patterns
Some commercial and military satellites take images detailed enough to show businesses, schools, parks, homes, and even lakes. These satellites zoom in on specific areas to collect detailed information, but to do so they sacrifice the bigger picture. NASA satellites take the opposite approach: they often use a wide-angle view to capture atmospheric fronts and whole ecosystems, which is why their images are less detailed. As with pictures taken by a digital camera, satellite photos are made up of pixels, and the spatial resolution of a satellite determines the level of detail it can show. The best NASA satellite images show about 10 m per pixel, while many commercial satellites show 50 cm per pixel.
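To put those resolutions in perspective, here is a small back-of-the-envelope sketch; the 10,000-pixel scene width is a made-up figure used purely for illustration.

```python
def footprint_km(pixels_across, metres_per_pixel):
    """Width of ground covered by a row of pixels, in kilometres."""
    return pixels_across * metres_per_pixel / 1000

# A 10 m/pixel scene vs a 0.5 m/pixel commercial scene,
# both assumed to be 10,000 pixels across:
print(footprint_km(10_000, 10))   # 100.0 km of ground per image row
print(footprint_km(10_000, 0.5))  # 5.0 km -> far more detail, far less area
```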
It is essential to know the scale of an image before you start interpreting it, because each scale reveals different things. In the case of a flood, for example, high-resolution photos will show the affected buildings surrounded by water, a broader landscape view shows which part of a country or region is affected, and an even wider view may reveal the origin of the flood.
Finding patterns is also essential to identifying different objects in an image. Rivers, lakes, and oceans are easy to identify because they have distinctive features and shapes. Artificial structures like farms usually have well-defined shapes such as circles or rectangles. A straight line in an image often corresponds to a canal, road, or boundary. Mountain ranges run in long, wavy lines, while craters and volcanoes are circular.
So before you pick up a satellite image and start processing it, make sure you have the tools and knowledge needed to interpret it. This article has given you some tips on how to understand such images.