1. Introduction
Within geographic information science (GIS), the most common use of UAS (Unmanned Aerial Systems) is to obtain high-resolution, georeferenced 2D orthophotos. The use of UAS data to develop 3D products, such as terrain data and volume measurements, has increased in both research and commercial applications. Pagliari et al. [1] used UAS to measure the volume of flushed sediments in a reservoir in 2017. Structure from Motion (SfM) and multi-view stereopsis (MVS) methods were used by Iqbal et al. [2] to generate a DSM (Digital Surface Model) and DTM (Digital Terrain Model). The SfM algorithm relies on a series of overlapping 2D images, georeferenced or not. Matching strategies such as SIFT (Scale Invariant Feature Transform) are used to find matching features and points [3,4]. These matching points, together with the position and orientation values recorded by the drone, are used in a bundle block adjustment [5] to reconstruct a more exact position and orientation of the camera for every photograph used in the process. Based on this, 3D coordinates are calculated for each matching point. The points are then triangulated to form a triangulated irregular network (TIN) to create a DSM and, further, a mesh. SfM is widely used in orthomosaic software, and Oniga et al. [6] found Drone2Map to be the best software for 3D model processing compared to others. DSM generation and volume estimation do not require detailed exterior surfaces; a more recent topic in 3D modeling is creating a model with an accurate shape and exterior surfaces for a built structure.
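As a concrete illustration of the feature-matching step just described, the following is a minimal sketch of SIFT matching between two overlapping photographs using the open-source OpenCV library. Drone2Map's internal matching strategy is not public, so the library choice, filenames, and ratio-test threshold here are illustrative assumptions rather than the software's actual pipeline.

```python
# Minimal sketch of SfM-style tie-point matching with SIFT, using OpenCV.
# Filenames and the 0.75 ratio threshold are illustrative assumptions.
import cv2

img1 = cv2.imread("photo_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant keypoints and compute their descriptors [3].
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# For each descriptor, find its two nearest neighbors in the other image,
# then apply Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher()
candidates = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in candidates if m.distance < 0.75 * n.distance]

print(f"{len(good)} tie points between the two overlapping images")
```

In a full SfM pipeline, tie points gathered from many image pairs in this way feed the bundle block adjustment [5], which jointly refines the camera poses and the 3D coordinates of the matched points.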
Along with the development of UAS technology in recent years, the application of 3D printing is also expanding. After the FDM (Fused Deposition Modelling) technology patent expired in 2009 [7], 3D printers became available to the general public. The basic operation of an FDM 3D printer is that thermoplastic filament is heated to a semi-liquid state and deposited by an ultra-fine extrusion nozzle onto the printing plate, line by line and layer over layer, until the model is printed [8]. Nowadays, 3D printers with high accuracy and a large printing plate (above 250 mm by 250 mm) are available in the $1,000 range. Advantages of a printed 3D model include rapid prototyping, design, and production; cost-effectiveness; and environmental friendliness. A tangible 3D model also serves well in classrooms.
Considerable research has been done in this area; however, not all of it has carried the workflow through to the final step of printing the model. Themistocleous et al. [9] used a GoPro Hero 3 camera mounted on a UAS to collect images of a church in 2015. The images were processed with Agisoft PhotoScan, and the generated 3D model was printed. The hardware and software for 3D model generation have advanced rapidly in recent years. For example, autonomous flight planning has made it easier to set flight parameters for constructing a 3D model [10]. We believe it is necessary to investigate a relatively simple and low-cost method of 3D model generation and printing based on UAS images.
The Caddo House (Koo-Hoot Kiwat, named by the Caddo Nation Native Americans) at the Caddo Mounds State Historical Site (Alto, Cherokee County, Texas, USA) was selected due to its relatively small size, ease of access, and cultural significance. The site was occupied by native Caddo people from about A.D. 850 to the early 1300s. The Caddo Mounds State Historical Site, previously known as the George C. Davis site, was established in 1974, when it was acquired by the Texas Parks and Wildlife Department for a historic park. On January 1, 2008, the 160-hectare site was transferred to the Texas Historical Commission [11]. In 2016, Caddo tribal elders collaborated with the East Texas community to construct the traditional grass house in order to create a tangible connection between people and place. The construction took 72 volunteers a combined 2,000 hours over 19 days [12]. Filmmaker Curtis Craven produced the documentary Koo-Hoot Kiwat: Caddo Grass House, which earned a 2019 Lone Star Emmy Award.
On April 12, 2019, UAS images were acquired for the Caddo House, the visitors' center, and the Snake Women's Garden using a DJI Phantom 4 Pro. It is worth mentioning that on April 13, 2019, one day after the image acquisition, an EF3 tornado (estimated peak wind speed of 257.5 km/h) completely destroyed the Caddo House and all the nearby buildings. The tornado caused multiple injuries and one fatality and destroyed several vehicles [13].
After the Caddo House was lost to the tornado, the 3D model created from the images became a symbol of the event and was given to survivors of the tornado as a way of staying connected to the past, present, and future.
2. Materials and Methods
On April 12, 2019, as part of an undergraduate capstone course in geospatial science at Stephen F. Austin State University, three UAS missions were carried out at the Caddo Mounds State Historical Site using a DJI Phantom 4 Pro, with permission granted by the site administration. The detailed technical specifications of the UAS are shown in Table 1. This UAS was selected for its high-resolution camera and its ability to conduct pre-planned flight missions. In addition, its reasonable price ($1,500) makes it more accessible to the general public.
Figure 1. Pix4DCapture mission at Caddo Mounds State Historical Site, April 12, 2019 (the background satellite image does not represent site conditions on that day).
Figure 2. Orthomosaic of Caddo Mounds State Historical Site, April 12, 2019, showing the Caddo House (a), the visitors' center (b) with tents set up for Caddo Culture Day, planned for April 13, 2019, and the Snake Women's Garden (c).
Figure 3. Oblique image of the Caddo House and visitors' center, April 12, 2019, with tents set up for Caddo Culture Day, planned for April 13, 2019.
Table 1. DJI Phantom 4 Pro technical specifications.
Another mission was flown as a free flight, which allows for maximum overlap between images and the ability to capture extra images of the doorway and ledges of the house. The free flight was completed in four tiers, as shown in Figure 4. Table 2 details the UAS settings and the number of images acquired for each of the four tiers.
Figure 4. Tier 1, tier 2, tier 3, and tier 4 flight routes.
Table 2. UAS settings for the free-flight tiers.
After completing these flight missions, all of the raw images were checked and organized into groups. The first group of 324 images consisted of the double-grid photos. The second group had 313 images from the free-flight mission. A third group consisted of only 137 images, selected from the free-flight mission, in which the top of the Caddo House did not intersect the skyline. The intersection of the structure (the Caddo House) and the skyline in a picture increases the chance of misplacing sky pixels as part of the subject during the matching process, where similar pixels are grouped together; this follows from the general principle of SfM, in which 3D models are developed from matched 2D images. An alternative to excluding these images is to annotate the sky and clouds out of the pictures, if the processing software has that option, as sketched below. The last group consisted of 136 photos drawn from the first and third groups in which portions of the Caddo House were visible in the frame.
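In this study, the 137-image subset was curated manually. Purely as an illustration of how such screening might be assisted, the sketch below flags images with a large share of "sky-like" pixels near the top of the frame. The brightness/blueness proxy, the thresholds, and the filenames are assumptions for demonstration, not a validated method.

```python
# Illustrative heuristic for pre-screening photos that likely contain sky.
# Assumes sky pixels are bright and blue-dominant, which is a rough proxy.
import cv2
import numpy as np

def sky_fraction(path: str) -> float:
    """Return the fraction of sky-like pixels in the top third of an image."""
    img = cv2.imread(path)
    top = img[: img.shape[0] // 3]              # sky, if present, is near the top
    b, g, r = cv2.split(top.astype(np.float32))
    brightness = (b + g + r) / 3.0
    sky_like = (brightness > 150) & (b > r)     # bright and blue-dominant
    return float(np.mean(sky_like))

for path in ["DJI_0001.JPG", "DJI_0002.JPG"]:   # hypothetical filenames
    if sky_fraction(path) > 0.3:
        print(f"{path}: likely contains sky; review before including")
```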
Each group of photographs was then used as the input for Drone2Map®, with the primary aim of producing an OBJ file. Some basic processing statistics are listed in Table 3.
Table 3. Summary statistics of image processing in Drone2Map.
Due to Drone2Map's process for creating the point cloud, images with sky, especially where a subject of interest intersects the skyline, are very problematic. As mentioned before, the two easiest ways to mitigate the effect of these images are to annotate out the sky in each individual photograph that contains it, or simply not to include any photos where the sky intersects the subject (or to avoid this kind of image in the first place). Any post-processing editing is generally done to the point cloud itself, so that a user can regenerate the 3D products after sufficiently cleaning up a noisy point cloud. The point cloud is valuable for this purpose, but it is generally not useful as a deliverable product outside of this context; 3D mesh files are generally preferred for 3D printing, 3D modeling, animation, and similar uses. For this reason, the primary outputs in this research were the OBJ files and the scene layer files. While the scene layer file format was used for the initial assessment of how well an output depicted the area of interest, any 3D mesh output created by Drone2Map could be used just as well. The scene layer was chosen simply for its ease of use and the aesthetic value of having the JPEG texture draped over the model in an ArcGIS® platform.
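The point-cloud cleanup described above was performed within Drone2Map; the sketch below shows what an equivalent scripted cleanup might look like using the open-source Open3D library. The filenames and parameter values are illustrative assumptions, not the settings used in this study.

```python
# Sketch of noisy point-cloud cleanup of the kind described above, using the
# open-source Open3D library; filenames and parameters are illustrative only.
import open3d as o3d

pcd = o3d.io.read_point_cloud("caddo_house.ply")  # hypothetical export

# Statistical outlier removal discards floating points (e.g., sky artifacts)
# whose mean distance to their 20 nearest neighbors is unusually large.
clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Rebuild a mesh from the cleaned cloud; Poisson reconstruction needs normals.
clean.estimate_normals()
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(clean, depth=9)
o3d.io.write_triangle_mesh("caddo_house_clean.obj", mesh)
```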
3. Results
3.1. Image Group 1 – Double-Grid
The ArcGIS® scene layer output from this group yielded the general shape of the Caddo House, but not a great amount of detail (Figure 5). For this double-grid mission, the UAS flew at a height of 30.48 m, and images were taken at an even greater distance from the Caddo House, so the 20-megapixel camera may not have captured all the details of the building. However, the main shape was precisely represented in the scene and also in the OBJ file.
Figure 5. A screenshot of the scene layer created from Image group 1.
3.2. Image Group 2 – Free-Flight
The scene layer output for group 2 yielded a more accurate representation of the Caddo House than the output from Image group 1, the double-grid mission (Figure 6). However, the house was off-axis, and the scene contained numerous noisy objects floating in the air, as shown below. After close examination, it was determined that these airborne objects were created by the cloudy background in the images: the software matched not only the pixels of the object but also the pixels in the background. Furthermore, the output point cloud contained points placed around the object that should have been far away in the sky.
Figure 6. A screenshot of the scene layer created from Image group 2.
3.3. Image Group 3 – Curated Free-Flight
The scene layer output for group 3 (Figure 7) yielded an accurate representation of the Caddo House, but it still contained some artifacts above the object.
Figure 7. A screenshot of the scene layer created from Image group 3.
3.4. Image Group 4 – Selected Images from Groups 1 and 2
The scene layer output from Image group 4 (Figure 8) yielded the least accurate representation of the Caddo House. Due to the lack of ground control points (GCPs), rectifying the positional discrepancies between two separate flight events is difficult. Without points of known 3D coordinates to assign pixels to, we are forced to rely solely on the GPS of the drone and the matching points' coordinates determined through the SfM algorithm of Drone2Map®.
Figure 8. A screenshot of the scene layer created from Image group 4.
3.5. 3D Printing the Model
The digital model generated with Drone2Map® from the Group 3 images is an OBJ file. To be printable, a model must be manifold (no holes). Autodesk Meshmixer® (free; San Francisco, CA, USA) and Blender® (open source; Amsterdam, The Netherlands) were used to process the OBJ file for artifact removal and clipping (Figure 9). A base and an engraved label were added using Autodesk TinkerCAD® (free; San Francisco, CA, USA). Finally, the widely used Cura® software (free; Utrecht, The Netherlands) was used to slice the STL (stereolithography) model for printing. A sketch of an equivalent scripted manifold check is shown below.
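The manifold requirement above was handled interactively in Meshmixer and Blender; as an alternative, the open-source trimesh library can script the same watertightness check and basic repair. The sketch below is illustrative, and the filenames are hypothetical.

```python
# Sketch of the printability (manifold) check described above, using the
# open-source trimesh library as a stand-in for the interactive Meshmixer
# and Blender workflow actually used in this study.
import trimesh

mesh = trimesh.load("caddo_house_clean.obj", force="mesh")

# A printable mesh must be watertight: every edge shared by exactly two faces.
print("watertight:", mesh.is_watertight)
if not mesh.is_watertight:
    trimesh.repair.fill_holes(mesh)   # close small gaps where possible
    trimesh.repair.fix_normals(mesh)  # make face windings consistent

# Export to STL for slicing (e.g., in Cura).
mesh.export("caddo_house.stl")
```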
A consumer-grade 3D printer, the QIDI Tech X-Max, was used to print the created 3D model. The printer costs around $1,000 and has a printing plate of 300 mm by 250 mm with a printing height of 300 mm. It has an 89 mm full-color touch screen, and a flash drive can be plugged in directly to load a model for printing (no computer connection is needed). The printer can use PLA (polylactic acid), ABS (acrylonitrile butadiene styrene), TPU (thermoplastic polyurethane), and nylon filament. PLA is a cornstarch-based thermoplastic with high strength and stiffness, and it is a very environmentally friendly 3D printing material. One kilogram of PLA filament costs about $20, and a small model uses only 40–80 g, or roughly $0.80–$1.60 of material per print. Figure 10 shows the printed Caddo House model.
Figure 9. (a) The 3D OBJ mesh after removing the artifacts; (b) the 3D OBJ mesh with its texture after removing the artifacts.
Figure 10. 3D printed model of the Koo-Hoot Kiwat Caddo House.
4. Discussion
Benefits of using UAS include relatively low costs, immediate flight deployment, the ability to monitor the camera view from the ground, and the ability to fly a programmed mission for geotagged, overlapping images [14]. The UAS pilot has full control over the images and video taken. When an incident happens, a UAS can be deployed to document the site and assist in investigation and recovery.
The best method of creating a 3D model of a subject with the least amount of post-acquisition cleaning is to capture the subject in orbital tiers, with the subject as the focal point, without including any images where the subject intersects the skyline. It is possible that an extremely accurate model could be created from images where the subject intersects the skyline, but the time spent in 3D software removing artifacts and imperfections would greatly increase. The more widely used double-grid flight method will yield a good model with the fewest artifacts produced by the SfM algorithm, but the texture will be less crisp and the surface more generalized, probably because of the longer distance from the camera to the subject. If the subject contains overhangs, ledges, or doorways, however, a double-grid flight will have considerable difficulty in accurately capturing these features, depending on the angle of the camera throughout the flight. The findings of this study suggest that if the goal is to create a true-to-life 3D model of an object using UAS, the best method is the one used for Image group 3: images selected from the free-flight mission in which the top of the subject did not intersect the skyline. Future research could standardize the differences in angle, distance, and height for each tier, or develop a better quantified 'best practice' for the orbital methodology; a sketch of what such tiered orbit planning might look like is given below.
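As a starting point for that quantified best practice, the sketch below generates evenly spaced waypoints for one orbital tier in a local metric coordinate frame centered on the subject. It is a geometric illustration only: all parameter values are hypothetical, and real mission planning (e.g., in Pix4DCapture) must work in GNSS coordinates and respect aviation rules.

```python
# Illustrative planning of one orbital tier around a subject at the origin of
# a local metric frame; headings and gimbal pitch keep the subject in frame.
import math

def orbit_waypoints(radius_m: float, altitude_m: float, subject_height_m: float,
                    n_photos: int):
    """Yield (x, y, z, heading_deg, gimbal_pitch_deg) for one orbit tier."""
    aim_z = subject_height_m / 2.0   # aim at the middle of the subject
    for i in range(n_photos):
        theta = 2.0 * math.pi * i / n_photos
        x, y = radius_m * math.cos(theta), radius_m * math.sin(theta)
        heading = (math.degrees(theta) + 180.0) % 360.0   # face the subject
        pitch = -math.degrees(math.atan2(altitude_m - aim_z, radius_m))
        yield x, y, altitude_m, heading, pitch

# Example: a low tier 12 m out at 8 m altitude, 24 evenly spaced photos
# around a 7 m tall subject (values are hypothetical).
for wp in orbit_waypoints(12.0, 8.0, 7.0, 24):
    print([round(v, 1) for v in wp])
```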
The problem with the 3D model created from Image group 4 stems from the GNSS (Global Navigation Satellite System) tag of each image. The tagged positions of images taken on different flights may differ significantly, so the software cannot properly combine them. Moreover, capturing lateral details of an object by aiming the camera at a large angle, such as 50 degrees from nadir, introduces positional error, because the position tag of an image is in fact the position of the drone rather than the center of the image footprint (see the worked example below). This adds complexity when mixing images taken from different missions with very different camera angle settings. With the current development of RTK (Real-Time Kinematic) UAS and ground control points (GCPs) such as AeroPoints®, combining images from different flights may work better in future studies. However, the high cost of GCPs could be a roadblock for general-public UAS users.
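A back-of-the-envelope calculation illustrates the size of this geotag offset: with the camera tilted an angle θ from nadir at height h, the image footprint center lies roughly h·tan θ ahead of the drone. The altitude value below is hypothetical.

```python
# Rough magnitude of the geotag offset described above: the image center on
# the ground sits about altitude * tan(tilt) away from the drone's position.
import math

altitude_m = 30.0   # hypothetical flying height above ground
tilt_deg = 50.0     # camera angle from nadir (straight down = 0)

offset_m = altitude_m * math.tan(math.radians(tilt_deg))
print(f"footprint center is ~{offset_m:.1f} m from the geotagged position")
# ~35.8 m at 30 m altitude: far larger than consumer-GNSS error, so mixing
# missions with different camera angles compounds positional inconsistency.
```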
5. Conclusions
This research was conducted with the goal of creating a genuinely printable 3D model at a cost that general-public UAS users can afford. The object selected has significant cultural and historical value. The findings of this study suggest that if the goal is to create a true-to-life 3D model of a single object using UAS, the best method is the one used for group 3: selected images from a free-flight mission. In other words, a systematic mission is preferred, and more images do not necessarily result in better models. Weather conditions should also be considered: a sunny, clear day may create shadows, while a cloudy day may challenge the point-matching algorithm, as was observed in this study. Post-processing of the model also plays an important role in 3D printing.
On April 17, 2019, a double-grid UAS mission was carried out for tornado damage assessment. Figure 11 is the orthomosaic image showing the study area four days after the tornado. The Caddo House and the Snake Women's Garden were completely gone, and the visitors' center was left in debris. The dark tornado track was still visible on the ground: a direct hit on the Caddo House. The digital images collected before the tornado and the created printable 3D model serve as a permanent record and memory of this cultural heritage.
Figure 11. Caddo Mounds State Historical Site on April 17, 2019, following the April 13, 2019 EF3 tornado.
Acknowledgments
Jeff Williams helped to connect the research and the cultural community, and the authors appreciate his support.
Author Contributions
Conceptualization: D.K. and Y.Z.; Methodology, UAS flights and 3D modeling: J.G., D.U., D.K., Y.Z., I.H., X.W., and R.V.; Formal Analysis: J.G. and Y.Z.; Writing: Y.Z., D.K., and J.G.; Writing—Review and Editing: D.U., I.H., and R.V.
Ethics Statement
Not applicable.
Informed Consent Statement
Not applicable.
Funding
This project was funded by McIntire Stennis funds administered by the Arthur Temple College of Forestry and Agriculture; and Office of Research and Graduate Studies, Stephen F. Austin State University, Nacogdoches, Texas, USA.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References
1. Pagliari D, Rossi L, Passoni D, Pinto L, De Michele C, Avanzi F. Measuring the volume of flushed sediments in a reservoir using multi-temporal images acquired with UAS. Geomat. Nat. Hazards Risk 2017, 8, 150–166.
2. Iqbal F, Lucieer A, Barry K, Wells R. Poppy crop height and capsule volume estimation from a single UAS flight. Remote Sens. 2017, 9, 647.
3. Lowe DG. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
4. Westoby MJ, Brasington J, Glasser NF, Hambrey MJ, Reynolds JM. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
5. Triggs B, McLauchlan P, Hartley R, Fitzgibbon A. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice; Triggs B, Zisserman A, Szeliski R, Eds.; Springer-Verlag: Berlin, Germany, 2000; pp. 298–372.
6. Oniga E, Chirilă C, Stătescu F. Accuracy assessment of a complex building 3D model reconstructed from images acquired with a low-cost UAS. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 551–558.
8. Yap YL, Tan YSE, Tan HKJ, Peh ZK, Low XY, Yeong WY, et al. 3D printed bio-models for medical applications. Rapid Prototyp. J. 2017, 23, 227–235.
9. Themistocleous K, Ioannides M, Agapiou A, Hadjimitsis DG. The methodology of documenting cultural heritage sites using photogrammetry, UAV, and 3D printing techniques: The case study of Asinou Church in Cyprus. In Proceedings of the 3rd International Conference on Remote Sensing and Geoinformation of the Environment, Chloraka, Cyprus, 16 March 2015.
10. Almeshal AM, Mohammad RA, Abdullah KA. Accuracy assessment of small unmanned aerial vehicle for traffic accident photogrammetry in the extreme operating conditions of Kuwait. Information 2020, 11, 442.
13. Leysath M, Galen R. Shahó and the power of place. Teach. Artist. J. 2021, 19, 1–13.
14. Hawkins S. Using a drone and photogrammetry software to create orthomosaic images and 3D models of aircraft accident sites. In Proceedings of the ISASI 2016 Seminar, Reykjavik, Iceland, 17 October 2016.