[Thesis] 1 Month to Go! Progress from last 3 weeks

The last three weeks I have been hustling to get the following solidified:

  1. Collect all stories (4 women willing to be interviewed and contribute their stories to this piece)
  2. Successfully rig a Tilt Brush sketch of an individual's face
  3. Design the stage
  4. Find a location to show my performance
  5. Build the hallway (entryway to the stage) tech to create an immersive experience
  6. Meet with different people about my approach to the interaction
  7. Find successful interactive narratives to base my performance on

Here we go!

Progress

  1. I have filmed the stories of 4 women who have gone through assault and harassment in their industries. I probably reached out to 20-25 women, but was only JUST able to get the four I needed for this performance (here is the site (https://www.chrissyelie.com/#/thesis/) that I've been forwarding to the women to give them a high-level understanding of the project). Time is needed to build relationships with these women, especially around this type of subject, and I don't have a lot of time. This aspect of my thesis probably took the most time, which was unexpected. After thesis, I want to continue this project, and so I will continue to build these relationships and trust over time. I went to Toronto and filmed two women (one in the ballet/burlesque community and one in the computer science/mathematics community). I also captured stories here in NYC (one woman in the swimming community and one woman in the tech community who is an activist for women in tech). This is the story arc image that I have been providing to the women so that they can structure their stories:
StoryArc.jpg

2. After many hours of trying to get this to work, I was able to rig a Tilt Brush face in Maya, export it, bring it into Unity and, utilizing ARKit on the iPhone X, puppet the face. I initially failed because I was attempting to rig the face as though each vertex was attached to form a mesh. Here is the rig I made, which took many hours:

rig2.png

After spending all this time, I realized that, since the individual strokes are not attached to each other in a mesh, the face cannot be rigged or auto-rigged.

ice_screenshot_20180308-172549.png

Instead, I am using the concept of blendShapes, which I am individually creating for each face (there are 51 that ARKit recognizes). This is the document (with tabs) that walks through setting up the iPhone app and explains rigging the faces. Here is the video of me getting the mouth to work:

3. Here is my stage design with size requirements that I have been sending to locations to host the performance: 

TheMasksIWear_FloorPlan.jpg

4. I reached out to many different locations, and made site visits to a couple as well, to try and find a space that would work well to build this show into. Unfortunately, after spending many hours on this, I have decided not to show the piece in an outside space yet, but will be performing it at ITP and in the Riese Lounge. When I continue this project in the future, I will definitely want to build it into a space where I can do a week of shows, if possible.

5. I want this piece to feel very immersive, so I want the usher to invite each audience member in individually (each show has only 4 audience members) and have them walk down an immersive hallway one at a time. In this hallway, I have built self-contained modules, each with an IR distance sensor and a speaker, that play the voices of the women as they walk down the hallway. I would like to have three modules: one 1/3 of the way down the hallway, one 1/2 of the way down, and one 3/4 of the way down. As the audience member triggers each IR distance sensor, voices will start and will be overlapping by the time they reach the end of the hallway. The usher will then meet the audience member at the end of the hallway and bring them to their seat. Here is an image of the IR distance module. It consists of an IR distance sensor, an MP3 wave shield, and an Arduino (not shown: I have a small speaker that is also attached to this module). I have built these into small black boxes that I will put on the sides of the hallway:

IR-Distance.JPG
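The core of each module is the decision of when to fire a clip: an analog IR reading rises as someone gets close, and I only want the voice to play once per passer-by, not restart every loop while they stand there. A sketch of that trigger logic (the threshold numbers are placeholders, not my calibrated values; on the Arduino this would sit inside loop() fed by analogRead(), with the wave shield doing playback):

```cpp
// Fires exactly once each time a person approaches the sensor.
// Hysteresis (offThreshold < onThreshold) keeps sensor noise from
// re-triggering the clip while someone lingers in front of the box.
class TriggerGate {
public:
    TriggerGate(int onThreshold, int offThreshold)
        : on_(onThreshold), off_(offThreshold) {}

    // Call with each raw sensor reading; returns true when the clip should start.
    bool update(int reading) {
        if (!nearby_ && reading >= on_) {
            nearby_ = true;   // person arrived: fire once, then hold
            return true;
        }
        if (nearby_ && reading <= off_) {
            nearby_ = false;  // person left: re-arm for the next visitor
        }
        return false;
    }

private:
    int on_;
    int off_;
    bool nearby_ = false;
};
```

The two-threshold gap is the important design choice: with a single threshold, a reading jittering around it would stutter the audio on and off as each audience member passes.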

6. I had a really great meeting with Sarah Rothberg and Mimi Yen to discuss my project. One question that Sarah posed was: who is the audience member, and why are they there? This is something that, after my work researching successful interactive narratives, I have realized is very important...

7. After our midterm presentations, Nick Hubbard suggested a couple of, in his opinion, successful interactive narratives: Facade and Her Story. The interesting thing about these narratives is that you are, or become, a person in the narrative (as the viewer or participant), and I feel that this really strengthens the story. Sarah Rothberg's question of "who is the audience member?" suddenly clicked and made me think about how I can include the audience member in my performance, especially since they are demanding my attention by giving an input. Say Something Bunny, one of the live performances that really inspired my project, also assigned each audience member a role in the story, which I found very effective. I am thinking of assigning the audience member a role in the story depending on how much they interact with/demand my attention during the performance, and because it's performed live, I can react and add lines accordingly. I'm really excited about adding this aspect! I will be completing the script this week and will be writing lines that I can change depending on audience participation. Here is my most current (last-month) schedule for my thesis.

FACE TRACKING TESTS

How am I creating the faces to project?

My face with a video projected onto it. (I am not wearing any makeup in this shot)

Ideally, I would like to have Tilt Brush portraits of the women (done by Akmyrat) to portray them. With these projected onto a screen in front of my face, you will be able to see my face through the blank spots on the drawing.

Screen Shot 2018-03-06 at 12.04.31 AM.png

I am also looking into puppeting these images by using face tracking in Unity with the iPhone X. I am trying to rig these Tilt Brush portraits so that when I move and act, the portrait will mimic me. I have a rough prototype working, and now I'm just trying to replace the white face in the video below with the Tilt Brush image.

Other Tests

FaceRig:

Faceware: