We made some cool new machine learning models for separating overlapping birdsong in soundscape recordings, and demonstrated how the separated audio can be used to improve downstream classification. The separation model is available on GitHub, along with lots more examples. There's also a paper.
We just launched a machine learning competition for bird identification in soundscapes! This is a surprisingly difficult problem, and solving it can ultimately help with ecosystem health monitoring (for example, if birds X, Y, and Z are present, you can make inferences about their food and predators). I've been building models in this space for a couple of years, working with both the Cornell Lab of Ornithology and the Cal Academy of Sciences in my spare time. I'm excited to see what sort of ideas we get from the larger community!
Last year, Oscar Sharp and I made the short film Sunspring in just two days for the Sci-Fi-London 48 Hour Film Contest. It was (so far as we know) the first film created from a computer-generated screenplay [1,2,3,4]. This year, Oscar and I followed up on Sunspring with a new short film created for the same contest: It's No Game, starring David Hasselhoff. See the accompanying article in Ars Technica for more details. (Rather than generating the screenplay in its entirety, this time we used our neural nets as augmentative writing tools, generating short snippets of dialogue in various styles.)