By Scott Walls
November 25, 2025 – The National Fish and Wildlife Foundation (NFWF) funded an aerial LiDAR* survey of the non-tidal regions of the Apalachicola River floodplain. LiDAR sensors use infrared lasers from a high-precision scanner to measure the ground surface and create high-resolution surface models, even beneath trees. Previous LiDAR datasets of the region were flown at high or medium water, or covered only certain counties. LiDAR cannot penetrate water, so when the floodplain is inundated (covered in water) there is no topographic detail below the water surface. NV5 Geospatial flew the survey in late October at very low flows (approximately 6,000 cfs). The resulting dataset will be the most detailed and comprehensive surface model of the floodplain available.

In November, the Slough Restoration Project team visited the current project sites and surveyed existing conditions using a handheld SLAM (Simultaneous Localization and Mapping) LiDAR scanner. These scanners produce extremely dense point clouds of the ground and trees, with far more detail than aerial LiDAR, and this emerging technology could vastly improve the efficiency and quality of surveying future potential projects. Aerial LiDAR flown at very low flows will also support slough restoration projects and help in understanding inundation patterns.

Scott Walls is a hydrologist on the Slough Restoration Project team. He is an avid lover of the outdoors, including surfing, biking, snowboarding, and exploring rivers around the world. Photography is also one of his creative passions.
*LiDAR technology uses laser pulses to measure distances between the sensor and the target surface. A typical LiDAR system consists of a laser source, which emits laser pulses toward the ground; a scanner, which captures the reflected laser light; and a GPS unit, which records the position of the sensor during data acquisition. As the laser pulses hit surfaces, they reflect back to the sensor, allowing precise measurements of elevation and distance. This data is compiled into a three-dimensional dataset known as a point cloud, which represents the shape and location of the objects and surfaces in the scanned environment.
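The ranging principle described above can be sketched in a few lines of code: the distance to a surface follows from the pulse's round-trip travel time at the speed of light, and each return can then be placed in space using the scan geometry. This is a minimal illustration only; the timing values, scan angle, and sensor position below are hypothetical, and a real system also applies GPS/IMU corrections not shown here.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_time_of_flight(round_trip_s: float) -> float:
    """The pulse travels to the surface and back, so halve the round trip."""
    return C * round_trip_s / 2.0

def point_from_return(range_m: float, scan_angle_deg: float, sensor_xyz):
    """Place one return in simple sensor-local coordinates for a
    side-to-side scan (0 degrees = straight down). Illustrative only."""
    x0, y0, z0 = sensor_xyz
    a = math.radians(scan_angle_deg)
    return (x0 + range_m * math.sin(a), y0, z0 - range_m * math.cos(a))

# Hypothetical example: a 1-microsecond round trip is roughly 150 m of range.
r = range_from_time_of_flight(1e-6)

# Hypothetical sensor 1,000 m above the ground, return at a 10-degree angle.
pt = point_from_return(range_from_time_of_flight(6.67e-6), 10.0,
                       (0.0, 0.0, 1000.0))
```

Repeating this calculation for millions of pulses, each tagged with the sensor's GPS position at the moment of emission, is what builds up the point cloud.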
