From The Well-Tempered Blog, I learned about the Expression Synthesis Project, an interface created by Elaine Chew and her team at USC. I first met Elaine while putting together a special session proposal on cognitive models of Music and Motion for a music theory conference. Elaine wrote one of the papers we selected for the proposal, and she was good enough to send me revisions while she was touring Southeast Asia. I then met her in person at ICMPC8 last summer.
Unlike Bart, I don't find the science in this project weird at all. Elaine is trying to create an interface that allows nonmusicians, or perhaps nonexperts who play some music, to experience the pace of decision-making that musicians face in any performance. This is an interesting way of teaching the laity to appreciate the cognitive demands of musical performance. It could also serve as a pedagogical tool in music instruction, from music appreciation classes to upper-level lessons in stylistic interpretation. Children could develop their sensitivity to musical phrasing while playing a video game. Advanced students could experiment with different phrasing models to hear the differences, without worrying about the technical demands of the piece. In the latter case, it could prevent phrasing-to-the-technique problems, as well as reduce physical strain on the musician.