Throughout our archaeological adventures, one of our primary focuses has been on the technological pipeline of data capture, processing and dissemination.
Fancy words for saying: how do we go from:
Step One: Collecting visual data of an archaeological site or object in the field or in a museum (with a camera, laser scanner, notes, etc.). It's not as easy as it sounds, and it involves a lot of flexibility and ingenuity to get good data when you're out in the middle of the desert with an over-sized scanner or underneath a castle with little to no lighting.
To Step Two: how do we process the data pieces together? With laser scanning, for instance, this is where one puts all the different scans together to build one big gorgeous site model, rather like a 3D jigsaw puzzle of data points. BUT in this step, not only do we need to figure out ways to keep every bit of needed information stuck together (all the metadata about the data being collected, and all the paradata about the equipment specs and the processing itself), but also how to transmit this OUT of the step without losing resolution or the capacity to recall all that other information. This is the trickiest of the steps and the one where modern technology falls incredibly short: the output is always, sadly, a lower resolution than the input. AND if we want to be able to add information to something later (per the layered data realities annotation systems we'd been working on at the Center of Interdisciplinary Science for Art, Architecture, and Archaeology (CISA3)), then we need to build that capacity into the processing, so that even once it's 'processed out,' new bits of information can still be added to it.
And then to Step Three: creating a final product. This is where the point cloud videos, augmented virtual reality systems, and 3D printing come into play.
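If you like to think in code, here is a toy sketch of the data-container side of that pipeline in Python. To be clear, the names and structure are my own illustration, not CISA3's actual software: the idea is just that each scan keeps its metadata and paradata attached, scans combine into one site model without shedding that context, and new annotations can still be layered on after processing.

```python
from dataclasses import dataclass, field

@dataclass
class Scan:
    """One capture from a single scanner position (Step One)."""
    points: list    # (x, y, z) tuples from this scanner position
    metadata: dict  # about the data: site, date, operator...
    paradata: dict  # about the process: scanner model, settings...

@dataclass
class SiteModel:
    """The assembled 3D jigsaw puzzle (Step Two)."""
    scans: list = field(default_factory=list)
    annotations: list = field(default_factory=list)  # added after processing

    def register(self, scan):
        # Real registration aligns overlapping scans geometrically;
        # here we just keep every scan, with its metadata, intact.
        self.scans.append(scan)

    @property
    def points(self):
        # The merged model: all points, each still traceable to its scan.
        return [p for scan in self.scans for p in scan.points]

    def annotate(self, note, point_index):
        # Layered annotation: new information attaches to the
        # already-processed model instead of being baked in.
        self.annotations.append({"note": note, "point": point_index})

model = SiteModel()
model.register(Scan([(0, 0, 0), (1, 0, 0)], {"site": "demo"}, {"scanner": "A"}))
model.register(Scan([(1, 0, 0), (2, 0, 0)], {"site": "demo"}, {"scanner": "A"}))
model.annotate("tool marks visible here", point_index=3)
print(len(model.points))  # 4
```

The design choice that matters is the `annotations` list living alongside the geometry rather than inside it: that is the 'capacity built into the processing' that lets information be added even after the model is handed off to Step Three.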
To recap, we’ve been taking visual data from archaeological sites and artifacts, modelling them into digital 3D versions of themselves, and then using those to create diagnostic annotatable virtual models for further study and physical 3D printed variants.
This means that in addition to Vid having built his own agile point cloud buffering software, we also test out a lot of the software systems for modelling and rendering. And we like to play with different engaging ways to 're-build' the data in that final output step (hence the Open Access Antiquarianism art project 😉 ).
Here are a couple of examples of our modelling and fabrication games:
The Fountain of Asgard
The Artemision Bronze in the National Archaeological Museum in Athens
Replicated Archaeological Jewelry
My particular favorite sideline along the way has been trying to extend the imaging/replicating capacity to deal with finer-detailed pieces, like ancient jewelry. It's a cool concept not just because it means eventually people (like me) could come out of a museum or offline wearing their favorite piece of history (be it a tiara, necklace, ring, etc.), but because some of the more ancient pieces could actually be replicated and experimented with. For all that archaeologists and curators have theorized about how certain pieces of prehistoric jewelry were worn, often this is pure guesswork.
The piece I've been working on for a while is from the National Archaeological Museum at Athens. In their gold room, there is a set of Artemis and Aphrodite gold hair medallions, which sounds solid enough given their appearance. But think about it further: were both worn at once, on Princess Leia buns? Only one at a time? It's not like the curators of the museum (let alone a visiting archaeologist with camera clearance) could pop them out of the case and start trying them on. But a 3D printed copy would let us do that.
(3D model image forthcoming)…
You'll note that the 3D model will only handle the medallion bit, so the chains will have to be a different medium, added on later. Soon, very soon, that kind of lovely flexible chain replication will be possible on pieces like this, however. And I am ridiculously excited about it.