Scientific & Technological Development
In terms of the technical development of our project, we have conducted significant research in camera-based motion tracking and neural network architecture, working toward a model that interprets the performer’s movements so that our machine can perceive them.
- Researched various existing models for multi-camera body tracking (FreeMoCap, EasyMocap, MediaPipe, MoveNet).
- Researched various solutions for camera calibration when using multi-camera systems.
- Discussed with HLRS the possibility of them developing a multi-camera tracking solution for us.
Insights:
- Open-source multi-camera solutions currently seem to support offline rather than real-time processing.
- A custom solution will take time to develop and is likely not a feasible option within the context of this residency.
- We are considering professional mocap recordings to extract our training data using OptiTrack, mocap suits, or Captury via our connections at Target 3D (a UK-based motion capture & virtual production studio).
- We are also contemplating using a single-camera tracking solution for the final performance.
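After calibration, the multi-camera systems we researched essentially reduce to triangulating each tracked joint from two or more views. Below is a minimal sketch of the standard linear (DLT) triangulation step, assuming known 3x4 projection matrices from calibration; the intrinsics, baseline, and joint position are illustrative numbers, not values from our setup:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices obtained from calibration.
    x1, x2: (u, v) pixel coordinates of the same joint in each view.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize

# Illustrative two-camera rig: identical intrinsics, 1 m baseline along x.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known joint position into both views, then recover it.
X_true = np.array([0.2, -0.1, 3.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
X_est = triangulate_point(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2])
```

Running this per joint and per frame is what the offline open-source pipelines do internally; the real-time difficulty lies in synchronizing cameras and running 2D pose estimation fast enough, not in the triangulation itself.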
- Researched different models for classifying motion capture data, both supervised and unsupervised, using RNN, LSTM, and Autoencoder architectures.
- Developed a custom LSTM neural network for mocap classification, trained on archived mocap choreographies.
- Started visualizing class clustering by applying 2D and 3D t-SNE dimensionality reduction to the second LSTM layer’s activations, to evaluate the model’s performance and examine how features extracted from the input mocap data converge.
Insights:
- We chose to use a supervised learning architecture following a model with substantial supporting literature.
- We aim to perform analytical post-processing to cross-correlate the output classes with the input data.
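The classifier and the t-SNE inspection of the second LSTM layer described above can be sketched as follows. This is not our production model: the layer sizes, clip length, channel count, and class count are illustrative assumptions, and random tensors stand in for the archived mocap choreographies:

```python
import torch
import torch.nn as nn
from sklearn.manifold import TSNE

class MocapLSTM(nn.Module):
    """Two stacked LSTM layers followed by a classification head."""
    def __init__(self, n_channels=51, n_classes=5, hidden=64):
        super().__init__()
        self.lstm1 = nn.LSTM(n_channels, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x, return_features=False):
        x, _ = self.lstm1(x)
        x, _ = self.lstm2(x)
        feats = x[:, -1]  # last time step of the second LSTM layer
        if return_features:
            return feats  # embedding used for the t-SNE cluster plots
        return self.head(feats)

# Stand-in data: 200 clips, 60 frames each, 51 channels (17 joints x 3D).
clips = torch.randn(200, 60, 51)
model = MocapLSTM()
with torch.no_grad():
    feats = model(clips, return_features=True).numpy()

# Reduce the second-layer features to 2D to visualize class clustering.
emb2d = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)
```

Scatter-plotting `emb2d` colored by class label shows whether the second LSTM layer is learning to separate the choreographic classes; the same call with `n_components=3` gives the 3D view.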
We’ve had some highly engaging initial discussions with the team in Rome during the January workshop, where we received initial feedback on our technical workflow. We also had the chance to share knowledge and exchange ideas with other artists working in a similar field.
In February, we visited HLRS in Stuttgart, our main hub. We had the opportunity to experiment with their on-site technology and to test some of our initial camera-tracking setups using their LED wall. We discussed the challenges encountered during these testing sessions and exchanged ideas on potential solutions. One of the scientists at HLRS is actively researching an in-house tracking system, and our discussions opened up the possibility of collaborating on a solution that could benefit both of our research objectives.
Given that our project, besides its technical challenges, has a significant performative aspect, collaboration with tech-enthusiastic movement specialists is crucial. Both the HLRS team and Sony CSL have connected us with potential collaborators, including the State Theatre in Stuttgart and Aterballetto in Bologna. We’ve had promising initial conversations with these institutions, and we hope to find common ground for collaboration during our residency, though timing and availability remain concerns.
As our ideas and technical approaches have matured since the start of the residency, we hope that during the next stages of S+T+ARTS Air, we can engage in deeper conversations with both of our hubs, receive more meaningful feedback, and expand on our common interests in a more frequent and collaborative manner.