I was still sitting in Atlanta when I decided to write a little post about interesting topics, papers, and talks from ICASSP 2014, a speech and signal processing conference. So here are my notes on the keynotes and my impression of the conference :D.
Plenary Talk: Signal Processing in Computational Art History
Speaker: Richard Johnson
This talk described a novel field for signal processing and computer vision called "Computational Art History". Apparently, art historians are often involved in some sort of detective work, gathering evidence about the authenticity of paintings. The speaker described his work with art historians so far, in which he applied simple signal processing algorithms to gather quantifiable evidence. His three examples were the authenticity of canvas paintings, photographs, and laid paper. In all these examples, the research team analyzed images of the material itself. In the canvas example, the authors count the number of threads in the canvas weave; based on this thread-count distribution they can find evidence that two paintings were made on material from the same piece of canvas. Similarly, in the photographic paper work, his team investigates whether two pieces of photographic paper are from the same batch. Such indicators can help art historians classify paintings and other pieces of art. It is interesting to see such a collaboration between art historians and signal processing folks, and I hope to see more of this work in the future.
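The thread-counting idea is a nice, simple use of classical signal processing: the periodic weave of a canvas shows up as a dominant peak in the Fourier spectrum of an X-ray intensity profile. Here is a minimal sketch of that principle on synthetic data; the function name, parameters, and the synthetic scan are my own illustration, not the speaker's actual pipeline.

```python
import numpy as np

def estimate_thread_density(profile, sample_spacing_cm):
    """Estimate threads per cm from a 1-D intensity profile of a canvas scan.

    The weave's periodicity appears as the strongest non-DC peak
    in the magnitude spectrum.
    """
    spectrum = np.abs(np.fft.rfft(profile - np.mean(profile)))
    freqs = np.fft.rfftfreq(len(profile), d=sample_spacing_cm)  # cycles/cm
    peak = np.argmax(spectrum[1:]) + 1  # skip the DC bin
    return freqs[peak]

# Synthetic "canvas": 12 threads per cm, scanned at 100 samples/cm over 10 cm.
rng = np.random.default_rng(0)
x = np.arange(0, 10, 0.01)
profile = np.sin(2 * np.pi * 12 * x) + 0.1 * rng.standard_normal(len(x))
print(estimate_thread_density(profile, 0.01))  # ~12 threads/cm
```

The real work presumably operates on 2-D scans and maps local thread-count deviations across the whole canvas, which is what lets you match two paintings to the same bolt; the 1-D version above just shows why a Fourier peak recovers the count.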
Plenary Talk: Model-Based Signal Processing
Speaker: Chris Bishop
The talk on model-based signal processing was about using the probabilistic graphical modeling (PGM) framework, or probabilistic programming, for signal processing. While the talk felt a bit more like an introduction to PGMs, I liked the message a lot. The main point is that instead of writing lots of code for every model, one could use PGMs to specify the problem abstractly and then hand it to a general-purpose inference engine (such as Microsoft's Infer.NET or BUGS). The main example in the talk was player skill rating for the Xbox system. While the talk was good, I would have wished for more insight into the actual modeling of signal processing problems. But maybe that is too much material for a keynote.
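The Xbox example is the TrueSkill model: each player's skill is a Gaussian belief that gets updated by approximate inference after every match. As a rough sketch of what the inference engine computes under the hood, here is the standard two-player, no-draw update in closed form; the constants and helper names are my own, and a real system would of course express the model declaratively and let the engine derive these updates.

```python
import math

BETA = 25.0 / 6  # performance noise; a common default tied to the skill scale

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def update(winner, loser):
    """TrueSkill-style update. Each player is a (mu, sigma) Gaussian belief."""
    (mu_w, s_w), (mu_l, s_l) = winner, loser
    c = math.sqrt(2 * BETA**2 + s_w**2 + s_l**2)
    t = (mu_w - mu_l) / c
    v = norm_pdf(t) / norm_cdf(t)   # shifts the means toward the outcome
    w = v * (v + t)                 # shrinks the variances (we learned something)
    new_w = (mu_w + s_w**2 / c * v, s_w * math.sqrt(1 - s_w**2 / c**2 * w))
    new_l = (mu_l - s_l**2 / c * v, s_l * math.sqrt(1 - s_l**2 / c**2 * w))
    return new_w, new_l

# Two unknown players with the usual (25, 25/3) prior play one match:
p1, p2 = update((25.0, 25.0 / 3), (25.0, 25.0 / 3))
print(p1, p2)  # winner's mean rises, loser's falls, both sigmas shrink
```

The appeal of the probabilistic-programming approach is that you never write these formulas yourself: you state the generative model (skills → performances → outcome) and the engine performs the message passing.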
My overall conference experience was a good one. There were a lot of interesting talks and posters related to my signal processing interests in audio and video mining. However, since the field is pretty big, there were also time slots I found completely boring, with a lot of hardware research being presented.