• Reality capture is transforming industries, especially when capturing an as-built structure's location, features, and character.
    • Cutting-edge technologies such as point clouds, photogrammetry, and LiDAR help get you there.
  • A digital twin is the melding of precise shape and dimensional information of an object or connected group of objects with near real-time data collected from an array of sensors throughout the physical world.
    • Digital twins are a way to reimagine monitoring and controlling systems in the real world and are gaining traction with all different types of utilities.

Reality capture: sounds simple, right? Reality capture is a set of technologies that let you "capture reality," replicating the physical world as a digital model. It is transforming industries, especially when capturing an as-built structure's location, features, and character. Several cutting-edge technologies get you there: point clouds, photogrammetry, and LiDAR, to name but a few. Each works differently but serves the same goal, and each has its own strengths and weaknesses. So, what does that mean to us?

What is LiDAR?

LiDAR (Light Detection and Ranging) is a popular reality capture technology today. LiDAR uses ultraviolet, visible, or near-infrared light to map spatial relationships and shapes by measuring the time it takes for signals to bounce off objects and return to the scanner. LiDAR can capture accurate measurements for entire buildings down to the smallest architectural detail. Each scan comprises millions of individual measurements, one laser pulse at a time. Once these scans are processed, LiDAR data becomes point cloud data.
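The time-of-flight principle above can be sketched in a few lines: halve the measured round trip and multiply by the speed of light to get the range for one pulse. The 200 ns return time below is a hypothetical value for illustration.

```python
# Illustrative sketch of the LiDAR time-of-flight principle: the scanner
# measures how long a laser pulse takes to bounce back; half the round trip
# times the speed of light gives the distance to the target.

C = 299_792_458  # speed of light in a vacuum, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the distance to the target in metres for one pulse."""
    return C * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds hit something ~30 m away.
print(round(range_from_time_of_flight(200e-9), 2))  # → 29.98
```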

LiDAR is independent of lighting conditions. Because it brings its own light source, scanning can be done in cloudy conditions or at night. The use of lasers in the non-visible spectrum is especially appealing to law enforcement and the military. Because of the flexibility of the wavelength of light, LiDAR can map virtually any condition: both metallic and non-metallic materials can be scanned successfully, along with ground and water features. Capture modes include hand-held devices for close-up or interior mapping of subjects such as machinery or artwork. Terrestrial capture usually takes the form of one or several tripod-mounted laser scanners strategically located around larger internal or external areas; it also includes mounting the LiDAR on vehicles such as service trucks to capture linear assets in view of the road. Aerial capture is used for capturing large areas, long features, or remote assets. Though the distance from the ground leads to a lower resolution, the narrow width of the laser beam can map objects to a resolution of 12 in (30 cm), which is often sufficient for analysis.
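The link between flying height and resolution comes down to beam divergence: for small angles, the laser's footprint on the ground grows roughly linearly with range. A minimal sketch, assuming a hypothetical 0.3 mrad divergence (not a figure from the article):

```python
import math

# Spot (footprint) diameter of a diverging laser beam at a given range.
# For small divergence angles this is approximately range x divergence.
def spot_diameter(range_m: float, divergence_rad: float) -> float:
    """Return the beam footprint diameter in metres at the given range."""
    return 2 * range_m * math.tan(divergence_rad / 2)

# From 1,000 m up, a hypothetical 0.3 mrad beam spreads to a ~0.3 m
# (roughly 12 in) spot, consistent with the resolution noted above.
print(round(spot_diameter(1000.0, 0.3e-3), 2))  # → 0.3
```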

What are point clouds?

Point clouds are the output of LiDAR scans and consist of a large number of points, each carrying location information derived from the known position of the laser and its measured relationship to the object. Through a process called registration, individual scans are aligned into a common coordinate system; the merged points can then be converted into surfaces to develop a recognizable model of reality. The dramatic miniaturization of powerful computers, together with the latest developments in machine learning and cloud computing, has improved the speed and robustness of the models.
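Registration can be sketched as a rigid transform: once the rotation and translation between two scanner positions are known, every point from one scan can be mapped into the other's coordinate system so the clouds merge into one model. The 2-D points and the 90° offset below are hypothetical; real pipelines estimate the transform automatically (for example with ICP) rather than being given it.

```python
import math

def transform(points, theta_rad, tx, ty):
    """Rotate 2-D points by theta radians, then translate by (tx, ty)."""
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Two points as seen from scanner B, which was rotated 90 degrees and
# shifted 2 m along x relative to scanner A. Transforming them expresses
# scan B in scan A's coordinate system so the clouds can be merged.
scan_b = [(1.0, 0.0), (0.0, 1.0)]
aligned = transform(scan_b, math.pi / 2, 2.0, 0.0)
print([(round(x, 2), round(y, 2)) for x, y in aligned])  # → [(2.0, 1.0), (1.0, 0.0)]
```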

What is photogrammetry?

The models created by LiDAR and point clouds can provide precise location and shape information. What they lack, and what becomes a time-consuming process, is matching color to the shapes detected by the laser. This is where a technique called photogrammetry comes into play. Photogrammetry uses photographs, and therefore ambient lighting conditions, to gather data. Large numbers of photos are taken from different angles to obtain the geometry, and overlap from one photo to the next helps "knit" the images together. The photos can come from various sources, from high-precision optics to commercial cameras. Pictures taken from the web or by unsuspecting tourists can even be used, and images can be extracted from video data.
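The role of overlap can be illustrated with a toy example: photos that see some of the same features can be knitted together, while photos with nothing in common cannot. The filenames and feature labels below are hypothetical; real photogrammetry software matches visual keypoints automatically rather than using hand-labelled IDs.

```python
# Toy illustration of photo overlap: each photo is tagged with the
# (hypothetical) features visible in it, and only pairs that share at
# least one feature can be stitched into the same model.
photos = {
    "front.jpg":  {"door", "window", "corner"},
    "side.jpg":   {"corner", "gutter", "vent"},
    "aerial.jpg": {"roof", "chimney"},
}

def overlapping_pairs(photos, min_shared=1):
    """Return photo pairs that share at least min_shared features."""
    names = sorted(photos)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if len(photos[a] & photos[b]) >= min_shared
    ]

print(overlapping_pairs(photos))  # → [('front.jpg', 'side.jpg')]
```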

A robust analytical solution for developing a virtual model from these techniques and extracting usable information must support any type of imagery, including drone, thermal, LiDAR, 2D, 3D, video, and aerial sources.

The created model can then be integrated into what is known as a digital twin. A digital twin is the melding of precise shape and dimensional information of an object or connected group of objects with near real-time data collected from an array of sensors throughout the physical world. Digital twins are a way to reimagine monitoring and controlling systems in the real world and are gaining traction with all different types of utilities. We will discuss digital twins in an upcoming article.

Other articles from our experts
  1. The Importance of Data Needs for an Electric Utility
  2. Utility Poles in the World of 5G
  3. How can utilities use technology to address our skilled workforce shortage?

Find out how IKE Insight can utilize any of your existing pole imagery to enhance your next pole project

Learn About IKE Insight

Disclaimer

ikeGPS Group Limited published this content on 16 March 2022 and is solely responsible for the information contained therein. Distributed by Public, unedited and unaltered, on 17 March 2022 15:40:01 UTC.