For our Jam Sesh project, we began with a broad goal. With lofty aims that included analyzing audio and synthesizing a supporting group of instruments in real time, we had our work cut out for us from the beginning. Our initial course of action revolved around researching various techniques for implementing music-based programs in Java. I researched past projects that produced live synthesis of music based on various input templates; our goal was to control as few parameters as possible to create a more randomized output for synthesis. Whenever we neared our deadlines, I took the lead on documenting our work and packaging everything for submission. At each of these project checkpoints, we had to reevaluate the scope of our project; sometimes we were forced to refine our end goals to keep the project viable. I found this experience valuable overall, not because of what I learned about writing music programs, but because of the intangibles that are universal to long-term computer science projects. By highlighting the strengths of my peers while using my own, I believe I have become a stronger team member and will be better prepared for group projects in the future.
My original goal for this project was simply to write a music analyzer, but after I talked to Stephen on group-division day, I agreed that making a music synthesizer would be more interesting. I am a relatively quiet person, so I focused more on code and research. Since I had studied music theory before and am taking piano and voice lessons, I was responsible for Synthesis and for finding a way to make computer-generated music sound natural. Thomas, James, and I together designed the algorithm that decides which chord to play. Originally we focused on choosing chords based on the output of Analysis, but once we decided to drop the Analysis component entirely, our focus shifted to having the program continuously generate chords on its own, without user input. I wrote the main part of the Synthesis code used in the final project. I started with a program that takes a key signature and tempo and plays a cycle of chords, using the circle progression (raising the root by a fourth each time) as the chord progression and the JFugue API that Thomas found in his research. I showed my work to the group, and Thomas and James each gave me good suggestions on how to make the chords sound more natural. I was also responsible for making sure the Synthesis program worked correctly with the Application. When I combined the Synthesis class with the Application, which was originally written as a Java Applet, I realized that the JFugue API used in the Synthesis class kept the Applet from initializing properly. I then rewrote the Application to use Java Swing, which I had never used before, and found that the JFugue API works properly in Swing. I also modified the Application so that the Synthesis class is called on a separate Thread, because otherwise the Application would stop responding once Synthesis started running. I had no prior experience with music synthesis at all at the beginning of this semester.
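The circle progression described above (each chord rooted a fourth above the last, staying within the key) can be sketched in plain Java. This is an illustrative sketch, not the project's actual Synthesis code, and it omits JFugue so it is self-contained; all class and method names here are assumptions.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the circle-progression idea: starting from the tonic,
 * each next chord is rooted a fourth above the previous one, staying
 * within the key. Names are illustrative, not from the Jam Sesh code.
 */
public class CircleProgression {
    // Diatonic triads of C major, indexed by scale degree (0 = tonic).
    private static final String[] NOTE_NAMES =
            {"C", "D", "E", "F", "G", "A", "B"};
    private static final String[] QUALITIES =
            {"maj", "min", "min", "maj", "maj", "min", "dim"};

    /** One full cycle of the circle progression in C major. */
    public static List<String> cycle() {
        List<String> chords = new ArrayList<>();
        int degree = 0;                    // start on the tonic (I)
        for (int i = 0; i <= 7; i++) {     // seven steps return to the tonic
            chords.add(NOTE_NAMES[degree] + QUALITIES[degree]);
            degree = (degree + 3) % 7;     // up a fourth = +3 scale steps
        }
        return chords;
    }

    public static void main(String[] args) {
        // Prints: Cmaj Fmaj Bdim Emin Amin Dmin Gmaj Cmaj
        System.out.println(String.join(" ", cycle()));
    }
}
```

In the real project, each chord string would presumably be handed to JFugue for playback; JFugue's chord notation ("Cmaj", "Dmin", etc.) matches the strings built here.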
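The threading fix described above can be sketched as follows. This is a minimal stand-in, not the actual Application code: playback is simulated with a sleep, whereas the real project ran JFugue playback inside the thread so the Swing UI stayed responsive.

```java
/**
 * Minimal sketch of moving long-running playback off the UI thread so
 * the application keeps responding. The playback here is simulated;
 * in the real project, JFugue playback ran inside run().
 */
public class SynthesisThreadDemo {
    static volatile boolean finished = false;

    public static void main(String[] args) throws InterruptedException {
        Thread synthesis = new Thread(() -> {
            // Stand-in for the long-running playback loop.
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            finished = true;
        });
        synthesis.start();   // the calling (UI) thread is free immediately
        System.out.println("UI thread still responsive while playing...");
        synthesis.join();    // wait only when shutting down
        System.out.println("Playback finished: " + finished);
    }
}
```

Without the separate thread, a blocking playback call on the Swing event dispatch thread would freeze the entire UI, which matches the unresponsive behavior described above.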
I learned a great deal about music synthesis and analysis through this project. Beyond that, it also gave me a sense of how teamwork functions in computer science. If possible, I would like to keep working on this project after this semester and try to add the features we dropped.
I found working on JamSesh to be a great experience in learning how to work on a team project and how to do effective research in an area I originally knew little to nothing about. We began with an extremely ambitious project vision, planning to create an application that would synthesize backup music for live music in real time. We divided the tasks among ourselves, and I started working on the analysis portion of the project. Learning how sound is represented digitally and how it can be processed required a lot of research. After making a few test programs using a couple of pitch-detection APIs, I realized that because of the nature of the raw data, pulling any useful musical information from it was futile. Although I was able to make a simple program that would play back a piano note of the same pitch as the note the program heard, there was a small delay, making real-time music analysis and synthesis impractical. We changed our project goal to generating music and chord progressions.
Learning how to synthesize unique music was another big challenge that brought its own host of problems. I did a lot of research in the areas of evolutionary programming and genetic algorithms to try to figure out how to make computer-generated music. I found that nearly all implementations of genetic algorithms used to create music required some level of human input to help train the system on what sounded good versus what sounded bad. We didn't want any extra human input, so we scrapped the idea.
In the end, we went with a random chord generator based on music-theory principles so that the output would sound good to the ear. Although our final product fell short of our initial project goal, I can certainly say that I learned a lot about sound processing and computer-generated music. My main role on the team was researcher and tester: I did a large amount of research and wrote a number of test programs, putting what I learned into practice to see whether we could incorporate it into our project effectively.
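A music-theory-guided random chord generator of the kind described above might look like the following sketch. This is a plausible illustration, not the actual JamSesh code: it restricts each chord to a small set of common successors in a major key, so a random walk still follows familiar progressions. The transition table and all names are assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

/**
 * Sketch of a random chord generator constrained by common-practice
 * harmony: each chord may only move to a few typical successors.
 * Illustrative only; not the project's actual implementation.
 */
public class RandomChordGenerator {
    // Common chord successions in a major key, by scale degree (0 = I).
    private static final Map<Integer, int[]> NEXT = Map.of(
            0, new int[]{1, 3, 4, 5},   // I   -> ii, IV, V, vi
            1, new int[]{4, 6},         // ii  -> V, vii
            2, new int[]{5},            // iii -> vi
            3, new int[]{1, 4},         // IV  -> ii, V
            4, new int[]{0, 5},         // V   -> I, vi
            5, new int[]{1, 3},         // vi  -> ii, IV
            6, new int[]{0});           // vii -> I

    private static final String[] NAMES =
            {"Cmaj", "Dmin", "Emin", "Fmaj", "Gmaj", "Amin", "Bdim"};

    /** Generates a progression of the given length, seeded for repeatability. */
    public static List<String> generate(int length, long seed) {
        Random rng = new Random(seed);
        List<String> out = new ArrayList<>();
        int degree = 0;                         // always start on the tonic
        for (int i = 0; i < length; i++) {
            out.add(NAMES[degree]);
            int[] choices = NEXT.get(degree);
            degree = choices[rng.nextInt(choices.length)];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", generate(8, 42L)));
    }
}
```

The design choice here is the same trade-off the reflection describes: randomness provides variety without user input, while the theory-based transition table keeps the result sounding deliberate.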
I was the unofficial leader at the start, but as the project progressed my role shifted to organizer, researcher, and overall manager. It was my job to mediate between groups. My coding experience and limited music knowledge put me in the Application and Analysis groups. I was responsible for designing the UI in a shared document, "Application Layout," although because of the changes in the project's direction we ended up limiting the user's possible interactions and scrapped the UI we had designed. My work for Analysis, until we scrapped the non-pitch-detection part of it, was researching prior work and APIs. My main job was to organize and schedule meetings and to mediate between groups, which entailed keeping each group informed of the others' progress. This is reflected in the document I wrote, "Interactions Between Groups," which covered the first direction of the project; the majority of it was also scrapped when we dropped Analysis. Most of the documentation was a combined effort between myself and Ben, while the late-night coding was done by Edward and Thomas. At meetings we generally discussed the reality of our project and the music theory going into it. My job during meetings was to work out how the pieces fit together, which I captured graphically in a couple of flow charts. To help the groups work smoothly together, I spent several hours trying to teach people GitHub; unfortunately, we mainly worked in ITS labs because a couple of members did not have computers, which made GitHub useless to them, and I was the only member using a PC, while the GitHub workflow on a Mac differed from my setup. I believe the success of our project lies in the framework we built and the knowledge we acquired for future work in the field of computer-synthesized music.