Detecting, Recognizing, and Analyzing Animated Characters Talk

On July 23rd, 2020, I gave my first conference talk at the Animal Crossing Artificial Intelligence Workshop 2020, a completely virtual conference that brought scientists from around the world together during the COVID-19 pandemic, all hosted on Animal Crossing islands.

I spoke about a side project I had been working on for the past few months: a deep learning computer vision pipeline that analyzes an animated video, detects every character face, and sends each face through a multi-task classification network to predict attributes of that character. I walked through the motivations, challenges, failures, and eventual successes of the models, and shared some lessons learned in building a generalizable image model.
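At a high level, the pipeline can be sketched as two stages: a detector that proposes face bounding boxes per frame, and a multi-task head that predicts one label per attribute for each cropped face. The sketch below uses stand-in stub functions in place of the real models, and all names, attributes, and interfaces here are my assumptions for illustration, not the project's actual code.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h) in frame coordinates


@dataclass
class FaceResult:
    box: Box                # where the face was detected
    labels: Dict[str, str]  # one prediction per classification task


def detect_faces(frame: dict) -> List[Box]:
    """Stub detector: in the real pipeline this would be a trained
    face-detection model run on the decoded frame."""
    h, w = frame["height"], frame["width"]
    # Pretend we found one face covering the middle of the frame.
    return [(w // 4, h // 4, w // 2, h // 2)]


def classify_face(box: Box) -> Dict[str, str]:
    """Stub multi-task head: one output per attribute of interest.
    The attribute names here are hypothetical examples."""
    return {"character": "unknown", "expression": "neutral"}


def analyze_video(frames: List[dict]) -> List[List[FaceResult]]:
    """Run detection then classification over every frame."""
    results = []
    for frame in frames:
        faces = [FaceResult(box, classify_face(box))
                 for box in detect_faces(frame)]
        results.append(faces)
    return results


# Usage: three dummy 720p "frames" stand in for decoded video.
frames = [{"height": 720, "width": 1280}] * 3
out = analyze_video(frames)
```

Keeping detection and classification as separate stages like this means either model can be retrained or swapped independently, which matters when the detector needs to generalize across animation styles.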

The original stream is linked here, but I decided to edit the recording to reduce the delay between my audio and the slides and to improve the quality a bit. Rest assured, I haven't edited any of the audio; I just rearranged the visuals. I'll link to the code once it is less… incredibly messy.

Thanks for watching and cheers! 
