Schedule

Week 1 (Jan 9, 11)

  • Welcome to the class
  • Themes, topics, and Big Ideas: Digitization/Documentation/Preservation, Public Engagement/Public, Digital Heritage Futures
  • What aren’t we covering in this class?
  • What is the ethos of this course?
  • Understanding final projects
  • How we’re using Mattermost
  • What is heritage?
  • Cultural Heritage vs. Natural Heritage
  • Intangible heritage vs. tangible heritage. 
  • World heritage?  Universal value?
  • What is digital heritage?
  • Why should we care?
  • Challenges in digital heritage

Week 1 Required Readings

Week 1 Supplementary Readings

Week 1 Assignments

  • Sign up for ANP412 Mattermost (you will have received an invite link sent to your MSU email address). 

Week 2 (Jan 16, 18)

  • What is Heritage/What is Digital Heritage (Continued) 
  • Deconstructing 4 Digital Heritage case studies.

Week 2 Readings

Week 2 Supplementary Readings

Week 3 (Jan 23, 25)

  • Deconstructing 4 Digital Heritage case studies (Continued).

  • How are Digital Heritage Projects Designed and Built?
  • Project Lifecycle
  • Building a Vision Document
  • Building a team (who does what?)
  • Building a Workplan
  • Building a Functional Spec
  • Sustaining a digital heritage project. 

Week 4 (Jan 30, Feb 1)

  • How are Digital Heritage Projects Designed and Built? (Continued)

  • Begin working on Lab #1

Week 4 Readings

Week 4 Assignments

Lab Assignment #1 – Create a project vision document and functional spec document. Work during the 2/6 and 2/8 classes (and additional time outside of class if necessary), then present the vision document during the 2/15 class (and submit it by 9am on 2/15). The vision document should be one Google Doc per team, shared with watrall@msu.edu (this is how it is submitted).

Here is the scenario:

Your team has been approached by X heritage institution (replace X with one of the institutions below, your choice).  They want to build a digital project of some kind to enable the general public to better explore and experience their collections, their institution, etc.

Work in groups (listed below) to envision a digital project for this scenario.  Produce a vision document (here is a template).  Be prepared to briefly present your concept. The project (focus, platform, implementation, etc) is totally open.

Pick one of the following museums:

Here are the Teams:

  • Team 1: Daniel Gutierrez, Amber Olguin, Allison Berry, Gillian Emerick, Jahira Maxwell-Myers
  • Team 2: Ryan Krueger, Kayla Cuatt, Amina Darable, Ohoebe Looman
  • Team 3: Ian Armit, Richie Brand, Jenna Buckman, Olivia Wuench, Chris Gates
  • Team 4: Jo Lilly, Yiwu Wang, Hunter Gibson, Madison Brown
  • Team 5: Uzair Chohan, James Cicenia, Dan Charette, Hui Wang
  • Team 6: Jacob Digiovanni, Jun Park, Casey Orr, Elize Nardo

Week 5 (Feb 6, 8)

  • Work on Lab #1 with your team

Week 6 (Feb 15)

  • Present Lab #1

Week 7 (Feb 20, 22)

  • Digitizing, Documenting, and Preserving Heritage
  • What is digitization?
  • Why digitize heritage?

Week 7 Assignments

  • Reflective Post #1 Due (by 5pm Sunday) – Prompt: Discuss the benefits and drawbacks of digital heritage applications, projects, and experiences for heritage institutions, sites, cultural landscapes, sites of memory and memorialization, etc. What do institutions (projects or scholars) gain from investing in digital heritage applications? What do they lose? Who is served by digital heritage applications? Who is left out? What are the benefits and drawbacks of digital heritage applications over traditional heritage projects like interpretive materials (plaques, displays, physical exhibits, etc) or face-to-face community engaged projects? Use (and cite) at least two of the readings in the class so far to specifically support your arguments. Be critical, honest, and thoughtful.

Week 8 (Feb 27, 29)

  • Spring Break, No Class

Week 9 (March 5, 7)

  • Digitizing Heritage – 2 case studies.

  • Digitizing: Workflow, procedures, standards
  • How is heritage digitized?

Week 9 Required Readings

Week 9 Supplementary Reading

Week 10 (March 12, 14)

  • Understanding Preservation & Access in Digital Heritage
  • How do you preserve digitized heritage?
  • What is Metadata and how is it used in heritage digitization?
  • How do you provide access to digitized heritage?
  • Ethics and digitized heritage
  • Who owns digitized heritage (copyright and patrimony), and how should digitized heritage be used (licensing and terms of use)?
  • Digitizing in 3D – challenges, methods, approaches
  • Understanding 3D digitization technologies and workflows – Photogrammetry, structured light scanning, laser scanning.

Week 10 Required Readings

Week 10 Supplementary Readings

Week 11 (March 19, 21)

  • March 19 – Lab Introduction
  • March 21 – Work on your lab. Ethan will be in LEADR during the class period to give help, advice, and guidance. If you don’t need any help, you don’t have to come to class – just use the time to work on your lab.
  • Lab #2 – Creating and Sharing a 3D Model using Photogrammetry
    In this lab, you’re going to do a 3D capture of an object (whatever you want) using photogrammetry and publish it online with Sketchfab. At the beginning of the lab, we’ll run through the basic workflow and discuss some of the basic guidelines for capturing data. It will be your responsibility to (1) take the images of your selected object, (2) create the model using either Beholder or RealityScan, (3) upload the model to Sketchfab, and (4) post the public link to your model on Sketchfab to the “Official Course Stuff SS24” channel of the lab chat by April 2, 5pm (this is your assignment submission). If you want to get a jump on things, you can take the images of your selected object prior to the in-class lab/demo/tutorial and use them instead of the tutorial images we’ll be using.

If you are using Beholder, here is the basic workflow:

  1. Sign up for Beholder. Beholder is the photogrammetry processing software you’re going to use to create a 3D model from the photos of your selected object. Beholder is cloud based, which means it doesn’t actually run locally on your computer. This means your computer doesn’t need the kind of specialized specs you’d need if you were running the photogrammetry software locally on your machine. It also means that you will need an active internet connection when using the tool. Beholder works on a credit system (you pay for credits that are used to create and process models). We’ll be using the free tier, which gives you 100 credits/month (enough for about 2 models) without having to pay. Once you sign up for an account, watch their little intro tutorial video. Beholder also has a step-by-step processing tutorial here. If you use up your available free credits working on the project, let me know and I’ll transfer more to you.
  2. Sign up for Sketchfab. You’ll be using Sketchfab to publish and share the 3D object you created in Beholder.
  3. Read through this quick guide on how to take pictures for photogrammetry. We’ll be going over a lot of this stuff in the lab, but it’s good to have this on hand.
  4. Based on the above guide, think about what object you are going to capture. It can be anything. Inside or outside, movable or immovable, a complete object or an interesting part of a larger thing (say an architectural element on a larger building).
  5. Once you have finished processing the model (including centering it in the scene and cropping out all of the extraneous stuff you don’t want included in the model), download it in .obj format. If you are using Beholder, you’ll get a zip file that contains 3 files – an .obj file, an .mtl file, and a .jpg file. The .obj file is the actual model, the .jpg file is the texture (sliced up into a whole bunch of pieces like a puzzle), and the .mtl file is directions (to whatever is going to display the model…in this case Sketchfab) about how to wrap the texture file around the model file and make it look like the thing you captured.
  6. Log into your Sketchfab account, click the big orange Upload button, and then drag and drop the model files into the upload box. You can either drag the original zip file or the three individual files – it doesn’t really matter. However, you must include all three files in your upload or the model won’t display properly.
  7. As the model uploads and processes (which usually takes a few minutes depending on how complicated the model is), give your model a title, enter a brief description of the object, and set the “who can see” dropdown to “Anyone on Sketchfab – Public.” If you want to fine-tune how your model looks in Sketchfab, you can edit the 3D properties (by clicking on the “Edit 3D Settings” button). You can find a quick tutorial on some of the simple things you can do to make your model look great here.
  8. Once your model is ready to be seen by the world, click on the Publish button in the lower right hand corner of the screen. When the model publishes, you can copy the link and post it to the class chat (that is how you submit the work).

If you are using RealityScan, here is the basic workflow:

  1. Download the RealityScan app to your mobile device (works on both iOS and Android)
  2. Watch https://www.youtube.com/watch?v=spPIqK3NVwc&t=1s and https://www.youtube.com/watch?v=HVkvHZCmVjU&t=2s for a quick introduction and tutorial on how to use RealityScan.
  3. Read through this quick guide on how to take pictures for photogrammetry. We’ll be going over a lot of this stuff in the lab, but its good to have this on hand.
  4. Based on the above guide and videos, think about what object you are going to capture. It can be anything. Inside or outside, movable or immovable, a complete object or an interesting part of a larger thing (say an architectural element on a larger building).
  5. Create an Epic Games account (or use an existing one if you’ve got it). Why are you signing up for an Epic Games account? A couple of years ago, Epic Games (yes, the people who make Fortnite and Unreal Engine) bought Capturing Reality, the company that makes RealityScan. Since then, they started running all of their user authentication through their own Epic Games account system. So, you’ll need an Epic Games account to use RealityScan. Epic Games also bought Sketchfab (which you’ll use to upload and display your 3D model) – which means that when you create an Epic Games account, you also create a Sketchfab account.
  6. Once you’ve logged into RealityScan, start a new project and capture your object following the process in the two tutorial videos you watched above.
  7. Once you’ve finished capturing and cropping your model, you’ll be asked to give it a name and description. From here, RealityScan will process the images and create your model. The more images you took, the longer processing takes. Once the model is done processing, it is automatically uploaded to your Sketchfab account (which was automatically created when you created your Epic Games account previously).
  8. At this point, you need to head on over to Sketchfab to finish things up and make your model public. You can either go to sketchfab.com, log in using the account you created, select your model (which was automatically uploaded to Sketchfab after it finished processing in RealityScan), and click the Edit Properties button to finish editing your model, OR (from within RealityScan) tap on the Share button and select the Publish on Sketchfab option (this gets you to the same Edit Model page where you can finish editing your model).
  9. Once you are on the Edit Model page for your model, you can change the title and description (remember, part of the lab is to ensure that your model has a real title and description) and set the “who can see” dropdown to “Anyone on Sketchfab – Public.” If you want to fine-tune how your model looks in Sketchfab, you can edit the 3D properties (by clicking on the “Edit 3D Settings” button). You can find a quick tutorial on some of the simple things you can do to make your model look great here.
  10. Once your model is ready to be seen by the world (which is why you set the “who can see” dropdown to “Anyone on Sketchfab – Public”), click on the Save button in the lower right-hand corner of the screen. When the model publishes, you can copy the link and post it to the class chat (that is how you submit the work).

Some general guidelines about data capture:

  • Don’t choose an object that has transparency (glass, for instance) or shininess (reflective metal, for instance). Photogrammetry doesn’t do well with either.
  • Don’t choose something that moves or is moving. The object must stay completely still during the capture process.
  • Try not to choose something whose texture is completely homogenous. Photogrammetry uses details in an object’s texture in order to align the images you feed it.
  • Don’t choose an object that has very fine detail, for instance, something with hair (a human or animal). The hair is so fine that the photogrammetry software generally can’t distinguish it from other strands of hair. The end result is that you’ll have a whole bunch of blobs as part of your model as opposed to the individual pieces of hair (or other thin/fine elements of the object).
  • Make sure that whatever object you choose has a high contrast between it and whatever is behind it in the photographs you take. This means: don’t choose a dark object against a dark background. If the thing you want to capture is dark (and you can move it), capture it in a location where the background is light (so there is contrast).
  • Avoid bright, directional light sources. Do your best to choose objects (or place objects) in locations that have ambient light.
  • If your object is small and moveable, you might try placing it on a rotating turntable or lazy susan (if you’ve got one). This way, you can stay in one place, rotate the object, and take the pictures, as opposed to the object being stationary and you moving around it to take the pictures.

The lab will be graded on two main criteria – (1) whether you followed the directions (did the capture, uploaded to Sketchfab, included a title and brief description on Sketchfab, posted to the class chat, etc) and (2) the overall quality of the model (is it complete, were you thoughtful about the object you chose, does the model have unnecessary bits of the capture cropped out, does it look like the object or item, etc). Ultimately, the lab expects that you’ve invested the time and effort to produce a good quality model as opposed to throwing something together at the last minute. The model doesn’t have to be perfect, but it has to show that you’ve invested time, effort, and energy into doing the best possible capture you can.

Week 11 Required Readings

Week 11 Supplementary Reading

Week 11 Assignments

  • Reflective Post #2 (Due by 5pm Sunday) – Prompt: Select and discuss two ethical issues wrapped up in digital heritage method & practice.  Use (and cite) at least 2 of the course readings up until this point to support and illustrate your discussion.  
  • Final Project Proposal due Sunday by 5pm. Send to Ethan via email.

Week 12 (March 26, 28)

  • March 26: Building Digital Heritage Maps and Story Maps
  • March 28th: As we have discussed, there was an unexpected scheduling conflict with LEADR on the 28th which requires us to find an alternative place for class (just for this one day). I’ve decided to hold class on Zoom (synchronously). Just jump on to Zoom at 12:40 and I’ll walk you through what you’ll be building in Lab #3 https://msu.zoom.us/j/99017506046

Week 12 Readings

Week 12 Assignments

  • Lab #2 due April 2 by 5pm. Publish your model on Sketchfab (be sure to give it a title and a brief description) and post the link to the model on the class chat (this is how you submit the lab).

Week 13 (April 2, 4)

Ethan is out of town this week, so we won’t be meeting face to face. Instead, you’ll have the week to work on Lab #3.

  • Lab #3: Building Digital Cultural Maps and Story Maps
    In Lab #3, you’ll create a very (very) simple webmap. All you need to do is work through the following tutorial to produce a simple web map: https://github.com/msu-anthropology/anp412-webmapping-lab. Submit your lab by sending the single html file to Ethan (via email) by Sunday at 5pm.

Week 14 (April 9, 11)

  • 3D Printing in Heritage
  • Revisiting the question of ethics in 3D printing.
  • Mobile heritage, augmented reality, virtual reality, Mixed Reality

Week 14 Readings

Week 14 Supplementary Readings

Week 14 Assignments

  • Reflective Post #3 (Due Sunday by 5pm) – Based on both our discussions and your readings, discuss both the benefits and challenges of digitizing heritage (tangible or intangible). Feel free to focus on any type of benefits or challenges (technical, ethical, philosophical, practical, etc). Use (and cite) at least 2 of the course readings up until this point to support and illustrate your discussion.  

Week 15 (April 16, 18)

  • Machine Learning and Artificial Intelligence in Heritage
  • Lab #4: Build an image classification model using Teachable Machine

Pre-Step 1 (Do This Before You Come to Class on April 16)

Watch the following Teachable Machine tutorials:

Pre-Step 2 (Do This Before You Come to Class on April 16)

Choose a class of something that has multiple types. The class must be heritage related (obviously) and have a sufficient collection of openly available objects of the types you are focusing on. So, for example, a class would be Impressionist painters, and the types would be Monet, Degas, and Pissarro. Download a representative set of images (from any source) for each type. For this lab, I’d like to see a minimum of 4 types and a minimum of 25-30 images for each type. You are free to select any class of thing you want and whichever types within that class you want. The important thing is that each of the types needs to be distinct from the others. If you are stuck for ideas, here are some good options for images:

  • Projectile Points – select a minimum of three North American projectile point types. There isn’t a really good central place to get projectile point type images. However, http://projectilepoints.net/ is a good place to start. Depending on the types you want to use to train your model, you might be able to get all of your images here. However, some types don’t have that many images…which means you might have to fire up the old Google Image search to find additional images (which can be dodgy, as you can never be 100% sure that what you find through a general image search is a correct representation of that specific type).
  • Ukiyo-e Artists/Prints – Like other types of visual art, Japanese woodblock prints have recognizable artists with recognizable styles. A great place to go for images is Ukiyo-e Search. You could select three artists from one period or three artists from across multiple periods; it’s totally up to you.

Step 1 (In Class)

Give your model data by feeding it images for each of the types that you are using. Do this process one by one – create a type (give it a descriptive name), then feed the images to the type (either by uploading them directly or from Google Drive). Once you’ve finished with the first type, create a second one, give it a name, and feed it the images. The more types you feed it, the more things it will be able to classify. The more examples of each type you feed it, the more accurate it will be at correctly classifying other examples (that it hasn’t already seen). The lab requires a minimum of 4 types with 25-30 images each. If you want to do more than that (more types or more images), go ahead.

Step 2 (In Class)

Train your machine learning model by going to the Training card and clicking on the Train Model button. The more images you have for each type, the longer the training will take.

Step 3 (In Class)

Test your model by going to the Preview card, changing the Input from webcam to file, and then feeding it an image of each of the types that you hadn’t included in the first round of training (something brand new that it hasn’t seen before). It should be able to identify what type the image belongs to. Don’t feed it images of other types, as it hasn’t been trained on those and won’t be able to recognize images that it hasn’t been trained on. The one frustrating thing about Teachable Machine is that if it wrongly classifies an image, you can’t correct it. It just says “hey, this is what I think this thing is,” and you can’t tell it “um, no…it’s this other thing.” If you want, you can always go back to the data and training steps to add more images (of the existing types or of new types) and retrain your model. The more data it has, the more accurate (theoretically) it should be.

Step 4 (In Class)

Export and submit. When you are satisfied with your model, click on the Export Model button in the Preview card. When the Export window opens, select the Upload (Shareable Link) button, and click on the Upload My Model button. Once it is finished uploading, copy the shareable link – this is what you will include in your lab post (Step 5) and will allow anyone to test out your model. In addition to generating the shareable link to your model (which you’ll include in your lab post), you need to submit the source project file. To download this file, click on the menu button in the top left-hand corner of the main screen (just left of the Teachable Machine logo) and select Download Project as File. This will download a .tm file to your machine. Change the file name to your last name and send it to me no later than Sunday, 5pm. This file will let me test whether you’ve fulfilled the requirements of the lab (minimum number of classes, minimum number of images per class, etc).

Step 5

In addition to submitting the project file, write a post on the course website (minimum 500 words and no later than Sunday, April 21st at 5pm) that includes the shareable link and discusses the following:

  1. What type of thing did you choose to classify?
  2. Why did you choose that type of thing to classify?
  3. Describe your workflow for building your classification model (where did you get the images from, etc)
  4. Once you trained your initial model, did it work on other images? How successful was it in classifying other images (that were not part of the initial training)? If it wasn’t successful, why do you think that is? What did you have to do to make the model more successful (add more examples, add different kinds of examples, etc)?
  5. Imagine and describe some sort of digital heritage experience that your model could be used in.

If the post does not include the sharable link, you will not receive credit for the lab.

Week 15 Assignments

  • Reflective Post #4 (Due Sunday by 5pm) – It’s easy to imagine applications of artificial intelligence and machine learning, but what about the drawbacks? What are the potential problems (technical or ethical) with deploying AI and machine learning in heritage? Discuss and provide supporting evidence.
  • Lab #4 due Sunday by 5pm (both the model – as described in Step 5 – and the lab post)

Finals Week (April 22-26)

  • Final Digital Heritage Projects due by 11:59pm on 4/25.
  • Final Digital Heritage Project discussion post due by 11:59pm on 4/26.