<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en" dir="ltr">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>Who attended the 2010 Autumn School? | Sound Software .ac.uk</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<link rel="shortcut icon" href="sites/soundsoftware.ac.uk/files/favicon.png" type="image/x-icon" />
<link type="text/css" rel="stylesheet" media="all" href="sites/soundsoftware.ac.uk/files/css/css_96672be50d2b2bf1d283b8d9324e7be5.css" />
<link type="text/css" rel="stylesheet" media="print" href="sites/soundsoftware.ac.uk/files/css/css_9cb387b73479bd590fedb6ec230dc035.css" />
<!--[if IE]>
<link type="text/css" rel="stylesheet" media="all" href="/sites/all/themes/zen/zen/ie.css?j" />
<![endif]-->
<script type="text/javascript" src="sites/soundsoftware.ac.uk/files/js/js_5be44a4bd1f33d1988c9f73fe4b7f463.js"></script>
<script type="text/javascript">
<!--//--><![CDATA[//><!--
jQuery.extend(Drupal.settings, { "basePath": "/", "googleanalytics": { "trackOutbound": 1, "trackMailto": 1, "trackDownload": 1, "trackDownloadExtensions": "7z|aac|arc|arj|asf|asx|avi|bin|csv|doc(x|m)?|dot(x|m)?|exe|flv|gif|gz|gzip|hqx|jar|jpe?g|js|mp(2|3|4|e?g)|mov(ie)?|msi|msp|pdf|phps|png|ppt(x|m)?|pot(x|m)?|pps(x|m)?|ppam|sld(x|m)?|thmx|qtm?|ra(m|r)?|sea|sit|tar|tgz|torrent|txt|wav|wma|wmv|wpd|xls(x|m|b)?|xlt(x|m)|xlam|xml|z|zip" } });
//--><!]]>
</script>
<script type="text/javascript">
<!--//--><![CDATA[//><!--
(function(i,s,o,g,r,a,m){i["GoogleAnalyticsObject"]=r;i[r]=i[r]||function(){(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)})(window,document,"script","http://www.google-analytics.com/analytics.js","ga");ga("create", "UA-18698611-1", { "cookieDomain": "auto" });ga("set", "anonymizeIp", true);ga("send", "pageview");
//--><!]]>
</script>
<link rel="alternate" type="application/rss+xml" title="Subscribe" href="rss.xml" />
<meta name="google-site-verification" content="gKmyKIK_OFaFd5f07E7wIMUM59VMmUIZc7xa3qKgFEo" />
</head>
<body class="not-front not-logged-in node-type-story two-sidebars page-autumnschool2010attendees section-autumnschool2010attendees">
<div id="page"><div id="page-inner">
<a name="navigation-top" id="navigation-top"></a>
<div id="skip-to-nav"><a href="#navigation">Skip to Navigation</a></div>
<div id="header"><div id="header-inner" class="clear-block">
<div id="logo-title">
<a href="index.html" title="Home" rel="home">
<div id="site-name"><strong>
Sound Software .ac.uk </strong></div>
</a>
</div> <!-- /#logo-title -->
</div></div> <!-- /#header-inner, /#header -->
<div id="main"><div id="main-inner" class="clear-block with-navbar">
<div id="content"><div id="content-inner">
<div id="content-header">
<h1 class="title">Who attended the 2010 Autumn School?</h1>
</div> <!-- /#content-header -->
<div id="content-area">
<div id="node-26" class="node node-type-story"><div class="node-inner">
<div class="meta">
<div class="terms terms-inline"> in <ul class="links inline"><li class="taxonomy_term_3 first last"><a href="taxonomy/term/3.html" rel="tag" title="">Outreach</a></li>
</ul></div>
</div>
<div class="content">
<p>Greg asked everyone - including the SoundSoftware.ac.uk organisers! - to come up with a short and snappy introduction to their research work: here are some of them.</p>
<hr />
<p><a href="http://www.dcs.shef.ac.uk/~amyb" target="_blank">Amy Beeston</a>, University of Sheffield</p>
<p><strong>Compensation for reverberation</strong></p>
<p>Reverberation adversely affects artificial listening devices, and automatic speech recognition in particular suffers high error rates with even a minimal level of reflected sound energy. Our solution lies in the development of computational auditory models based on psychoacoustic principles of hearing. Like people, these models absorb information from contextual sound in order to improve the recognition of spoken words in reverberant rooms. Unlike existing methods of dereverberation, our approach allows us to consider the rapid changes in acoustical environments that are experienced in everyday, real-room listening situations.</p>
<hr />
<p><a href="http://www.uea.ac.uk/cmp/People/Research+Students/Andrea+De+Marco" target="_blank">Andrea De Marco</a>, University of East Anglia</p>
<p><a href="http://speakeridentification.blogspot.com/" target="_blank"><strong>Intelligent speaker identification</strong></a></p>
<p>We are studying the problem of reliable and robust speaker identification, which affects forensic scientists, security systems, and ambient intelligence systems; current computational models remain well behind even simple, untrained human cognitive processes.</p>
<p>Our solution looks at directly modelling the relevant human cognitive processes, using the best available machine learning techniques and cognitive perception theory. Unlike state-of-the-art systems, which use simple statistical models that are valid only for small populations or require excessive test data [D.A. Reynolds 95/2000], it should perform on a par with, or better than, basic untrained human speaker identification.</p>
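<p><i>As an illustration of the GMM baseline cited above [D.A. Reynolds 95/2000], here is a minimal sketch of GMM-based speaker identification. It is our sketch, not the project's code: librosa and scikit-learn are assumed, and all names and parameters are illustrative.</i></p>
<pre>
# Minimal GMM speaker-identification sketch (illustrative only).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path):
    """Load audio and return per-frame MFCC vectors (frames x coefficients)."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_models(enrolment):
    """Fit one GMM per speaker; enrolment maps speaker name to wav paths."""
    models = {}
    for speaker, paths in enrolment.items():
        X = np.vstack([mfcc_features(p) for p in paths])
        models[speaker] = GaussianMixture(n_components=16,
                                          covariance_type='diag').fit(X)
    return models

def identify(models, wav_path):
    """Pick the speaker whose GMM gives the test utterance the
    highest average per-frame log-likelihood."""
    X = mfcc_features(wav_path)
    return max(models, key=lambda s: models[s].score(X))
</pre>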
<hr />
<p><a href="http://www.isvr.soton.ac.uk/STAFF/staff311.htm" target="_blank">Anne Wheatley</a>, University of Southampton</p>
<p><a href="http://www.isvr.soton.ac.uk/MFG/research.html" target="_blank"> <strong> Music perception in cochlear implant users</strong></a></p>
<p>Cochlear implant users are not able to perceive music as well as they can understand speech. Music perception tests enable us to determine the ability of cochlear implant users to comprehend musical sounds, in order to improve cochlear implant technology.</p>
<p>This research affects cochlear implant users, candidates, clinicians and cochlear implant manufacturers.</p>
<p>We are currently reviewing the trends in music perception test use in UK cochlear implant centres. We are also reviewing currently available music perception test materials using listeners with normal hearing and cochlear implant users.</p>
<hr />
<p><a href="http://www.elec.qmul.ac.uk/digitalmusic/people/rebeccas.htm" target="_blank">Becky Stewart</a>, Queen Mary, University of London</p>
<p><strong>Stop Looking, Start Listening</strong></p>
<p>Incorporating interactive audio interfaces into music discovery.</p>
<p>Text searches give us words as results. Image searches give us pictures as results. Music searches do not give us songs as results: instead, we usually get a list of song titles, and it then takes several mouse clicks before we listen to any music. Interfaces that incorporate more listening can be a faster way to find music than a standard interface like iTunes. We create interfaces that let you listen immediately, to help you find music more quickly.</p>
<hr />
<p><a href="http://benfields.net/" target="_blank">Ben Fields</a>, Goldsmiths, University of London</p>
<p><strong>Contextualize Your Listening</strong></p>
<p>Viewing Playlists as a Vehicle for Music Recommendation.</p>
<p>Time is what makes music different. To make music recommenders work better, I exploit time through the sequential ordering of recommended music and playlist generation. Further, my work entails a better understanding of the existing state of the art in playlist generation and its dependency on notions of music similarity. I've created datasets that contain both audio signal and social connections, and used them to create a multimodal, automatically generated similarity space. Using this similarity space, I've built a group radio web application that creates playlists based on periodic requests from current listeners. This system is live at <a href="http://radio.benfields.net/" title="http://radio.benfields.net">http://radio.benfields.net</a>. I've also been working on ways to describe and compare playlists, leading to distance spaces for playlists based on social tags.</p>
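<p><i>As a toy illustration of a tag-based playlist distance (a sketch of the general idea, not Ben's actual method), two playlists can be compared by the cosine distance between their social-tag count vectors:</i></p>
<pre>
# Hypothetical tag-based playlist distance: cosine distance between
# the tag-count profiles of two playlists.
from collections import Counter
from math import sqrt

def tag_profile(playlist):
    """playlist: a list of tracks, each given as a list of social tags."""
    return Counter(tag for track in playlist for tag in track)

def cosine_distance(a, b):
    dot = sum(a[t] * b[t] for t in set(a).intersection(b))
    norm = sqrt(sum(v * v for v in a.values())) * \
           sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / norm if norm else 1.0

p1 = [["indie", "rock"], ["rock", "guitar"]]
p2 = [["rock", "classic rock"], ["guitar", "blues"]]
print(cosine_distance(tag_profile(p1), tag_profile(p2)))  # 0 = identical
</pre>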
<hr />
<p>Chris Hummersone, University of Surrey</p>
<p><strong>Modelling the precedence effect</strong></p>
<p>I build models of the precedence effect, including its dynamic components, in order to localise and separate mixtures of sounds.</p>
<p>The precedence effect has been observed in psychoacoustics for many years. Although many computational models of precedence exist, they do not yet include a component that accounts for dynamic processes; these processes appear to adjust precedence to the room in which the listener is located. Data from the models I have built confirm the need for an equivalent computational mechanism, which I am currently working towards. This will drastically improve separation based on binaural cues. The technology could be used in many areas, including intelligent hearing aids and front-end processors for speech recognisers.</p>
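<p><i>To make the binaural-cue idea concrete, here is a sketch of the standard building block such models rest on: estimating the interaural time difference (ITD) by cross-correlating the two ear signals. This is our illustration, not Chris's precedence model:</i></p>
<pre>
# ITD estimation by cross-correlation over physiologically plausible lags.
import numpy as np

def estimate_itd(left, right, sr, max_itd=0.001):
    """Return the lag (seconds) that best aligns equal-length ear signals,
    searched up to about +/-1 ms (the human head's ITD range)."""
    max_lag = int(max_itd * sr)
    lags = range(-max_lag, max_lag + 1)
    xcorr = [np.dot(left[max(0, -l):len(left) - max(0, l)],
                    right[max(0, l):len(right) - max(0, -l)])
             for l in lags]
    return lags[int(np.argmax(xcorr))] / sr

sr = 48000
sig = np.random.randn(sr)                          # 1 s of noise, left ear
right = np.concatenate([np.zeros(12), sig[:-12]])  # right ear delayed 12 samples
print(estimate_itd(sig, right, sr))                # approx. 12 / 48000 = 0.00025 s
</pre>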
<hr />
<p><a href="http://www.juppiemusic.com/research" target="_blank">Daniel Wolff</a>, City University London</p>
<p><strong>Culture-Aware Music Recommendation</strong></p>
<p>Integrating cultural bias into computational models for music similarity.</p>
<p>Modern music recommendation systems provide valuable tools for both the advertisement and the exploration of musical content. As a user, if you are not familiar with the music you are searching for, there's a good chance these systems won't match up to your expectations. Our music similarity model integrates cultural bias and therefore aims to adapt results to your particular cultural identity.</p>
<hr />
<p>Eoin Mullan, Queen's University Belfast</p>
<p><strong>Physical Modelling for Sound Synthesis in Computer Games</strong></p>
<p>Real-time physical modelling techniques are providing more realistic and varied sound effects in computer games.</p>
<p>The problem of creating realistic sound effects in computer games is being largely ignored as developers continue to use decades old sample playback techniques, while graphics, physics and artificial intelligence algorithms continue to improve. Our solution is to model the sound producing vibrations of virtual objects based on their physical properties and information, often from a physics engine, on the types of interaction that occur. Unlike the current practice of sample playback, which is labour intensive to implement and can become repetitive and unrealistic for the user, our technique creates more realistic, varied sound effects and a more immersive gaming experience.</p>
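<p><i>A minimal sketch of the modal-synthesis idea behind this kind of physically based impact sound: the object is modelled as a bank of exponentially decaying sinusoids (its vibration modes) excited when the physics engine reports a collision. All mode parameters below are invented for illustration:</i></p>
<pre>
# Modal synthesis sketch: an impact excites a bank of damped sinusoids.
import numpy as np

def impact_sound(freqs, decays, amps, sr=44100, dur=1.0, strength=1.0):
    """Sum one exponentially decaying sinusoid per vibration mode."""
    t = np.arange(int(sr * dur)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, decays, amps):
        out += strength * a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))        # normalise to [-1, 1]

# E.g. a small metallic object: a few inharmonic, slowly decaying modes.
sound = impact_sound(freqs=[523.0, 1410.0, 2470.0],
                     decays=[3.0, 5.0, 8.0],
                     amps=[1.0, 0.6, 0.3])
</pre>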
<hr />
<p><a href="http://www.cstr.inf.ed.ac.uk/ssi/people/s9903055.html" target="_blank">Erich Zwyssig</a>, Edinburgh University</p>
<p><strong>Speaker Diarisation in Meetings</strong></p>
<p>Speaker diarisation in meetings ("who spoke when") is essential for the accurate speaker and speech recognition needed for automatic meeting transcription, summarisation, and the generation of action and decision lists.</p>
<p>Detecting the presence of speech, and isolating and merging individual speakers in recordings of meetings, is still an open research issue. This research aims to improve the performance of voice activity detection (VAD) and of speech segment clustering and merging, i.e. speaker diarisation. Typical problems in meeting recordings include noise, reverberation and overlapping speech. These degrade the performance of speech processing, and methods and algorithms to overcome them are needed.</p>
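<p><i>As a deliberately naive illustration of the VAD step, an energy-threshold detector might look like the sketch below; real meeting systems need far more robust methods, precisely because of the noise, reverberation and overlap described above:</i></p>
<pre>
# Naive energy-based voice activity detection (illustrative only).
import numpy as np

def energy_vad(signal, sr, frame_ms=30, threshold_db=-40.0):
    """Flag each frame as speech if its RMS level, relative to the
    loudest frame, exceeds a dB threshold."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    level_db = 20 * np.log10(rms / rms.max())
    return level_db > threshold_db    # one boolean per frame
</pre>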
<hr />
<p>Henry Lindsay Smith, Queen Mary, University of London</p>
<p>Automatically generating kits from drum loops is a problem for users of MPC-like audio plugins, who currently have to classify sliced loops manually. Our solution aims to automate all or most of this process, enabling 100% correct classification within a few clicks in order to improve creative workflow, unlike other drum-machine-style plugins, which lack this feature.</p>
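<p><i>A hypothetical sketch of the classification step: assigning each sliced hit to kick, snare or hi-hat by nearest centroid in a tiny hand-picked feature space. This is our illustration of the general approach, not the system described:</i></p>
<pre>
# Nearest-centroid drum-slice classification on two crude timbre features.
import numpy as np

def features(hit, sr):
    """Spectral centroid and zero-crossing rate, both scaled to roughly [0, 1]."""
    spectrum = np.abs(np.fft.rfft(hit))
    freqs = np.fft.rfftfreq(len(hit), 1 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(hit)))) / 2
    return np.array([centroid / (sr / 2), zcr])

def classify(hit, sr, centroids):
    """centroids: dict mapping label (e.g. 'kick') to a mean feature
    vector learnt from a handful of labelled example hits."""
    f = features(hit, sr)
    return min(centroids, key=lambda lbl: np.linalg.norm(f - centroids[lbl]))
</pre>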
<hr />
<p>Jens Enzo Nyby Christensen, Cambridge University</p>
<p><strong>Cheap touch screens</strong></p>
<p>A software-only implementation of touch screen functionality.</p>
<p>Decreasing profit margins and increasing demand for touch screen functionality in the telecommunications industry mean that the industry is constantly looking for cheaper ways to give users what they want. Acoustic Pulse Recognition enables a mobile phone manufacturer to supply touch screens at a fraction of the typical touch screen cost, with extra features and increased reliability.</p>
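<p><i>As we read the description, Acoustic Pulse Recognition recovers the tap position by matching the sound of the tap against templates recorded during calibration. A hypothetical nearest-template matcher, using normalised cross-correlation:</i></p>
<pre>
# Hypothetical Acoustic Pulse Recognition matcher (illustrative only).
import numpy as np

def locate_tap(pulse, templates):
    """templates: dict mapping a screen position (x, y) to the pulse
    recorded when that position was tapped during calibration."""
    def ncc(a, b):
        n = min(len(a), len(b))
        a = a[:n] - np.mean(a[:n])
        b = b[:n] - np.mean(b[:n])
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(templates, key=lambda pos: ncc(pulse, templates[pos]))
</pre>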
<hr />
<p><a href="http://www.homepages.ucl.ac.uk/~ucjtksk/" target="_blank">Katrin Skoruppa</a>, University College London</p>
<p><strong>Kids Hear Language</strong></p>
<p>Exploring the link between speech perception and language outcome in children with hearing impairment.</p>
<p>About 2 in 100 children suffer from hearing impairment. Conventional hearing aids and, more recently, cochlear implants allow the restoration of some of their auditory capacities, an important prerequisite for oral language acquisition. However, the speech signal they perceive remains impoverished and distorted. We study how children with hearing impairment learn language despite these input limitations. More specifically, we investigate whether they benefit from the extraordinary language-learning mechanisms that children with normal hearing use to acquire their native language with surprising speed and ease.</p>
<hr />
<p>Mathieu Barthet, Queen Mary, University of London</p>
<p><strong>Musicology for everyone</strong></p>
<p>New music technologies designed to fit user needs.</p>
<p>The recent growth of digitized music archives has spurred the development of new computational models to process music content. Music software based on machine learning techniques and non-stationary signal processing can perform complex analysis and visualisation tasks; however, little has been done to investigate whether such software is adapted to specific user needs, or how its usability could be improved. Our research will focus on one category of users, musicologists, who are likely to use such technologies in the formal analysis of music. We will conduct an ethnographic study based on naturalistic observation and interviews to better understand the processes underlying musicological research and to provide innovative solutions that enhance human/computer interaction in computational musicology.</p>
<hr />
<p>Michael Gatt, De Montfort University</p>
<p><strong>New Tools to Analyse Electroacoustic Music!</strong></p>
<p>We will develop an analysis toolbox to allow better understanding of compositions within the realm of electroacoustic music.</p>
<p>Within the field of electroacoustic music there exists no universal strategy for analysing it. This affects both the electroacoustic community and new listeners who want to gain a better understanding of the music.</p>
<p>Our solution is to create a toolbox of analytical tools that will help users create graphical scores for musical analysis. Current programs such as the Acousmographe only allow a user to create these scores, providing no help or guidance to the end user. Our program will have built-in aids, based on the toolbox, that will guide the end user by suggesting analytical methods implemented within the program.</p>
<hr />
<p><i>And our takes on the SoundSoftware.ac.uk project:</i></p>
<p>Chris Cannam, SoundSoftware.ac.uk</p>
<p><strong>Software skills for audio research</strong></p>
<p>The SoundSoftware project aims to help audio researchers by building their software skills.</p>
<p>Research students find it difficult to manage the software tools they need to produce and validate their work. We aim to teach the skills they need and to provide facilities they can use to make their lives easier and their research more sustainable.</p>
<hr />
<p>Luis Figueira, SoundSoftware.ac.uk</p>
<p><strong>Achieving Sustainability in Research</strong></p>
<p>The problem of almost nonexistent good practice for software development deeply affects the Audio and Music Research community, who currently have difficulty reusing other researchers' software or even reproducing their own results.</p>
<p>Our project aims to offer researchers the tools they need, either by teaching good practice in software development or by giving access to specific tools, such as code repositories or documentation tools.</p>
<p>This project's team has many years of experience in the area and has suffered from the same problems we're addressing, so we're highly motivated to improve the current situation!</p>
</div>
</div></div> <!-- /node-inner, /node -->
</div>
</div></div> <!-- /#content-inner, /#content -->
<div id="navbar"><div id="navbar-inner" class="clear-block region region-navbar">
<a name="navigation" id="navigation"></a>
<div id="primary" class="clear-block">
<!-- <ul class="links"><li class="menu-245 first"><a href="/resources" title="">Resources</a></li>
<li class="menu-117"><a href="/activities" title="Activities">Activities</a></li>
<li class="menu-116 last"><a href="/aboutus" title="">About</a></li>
</ul> -->
<ul class="menu"><li class="expanded first"><a href="resources.html" title="">Resources</a><ul class="menu"><li class="leaf first"><a href="tools.html" title="Tools and Facilities">Tools</a></li>
<li class="leaf"><a href="handouts-guides.html" title="Printable Handouts and Guides">Handouts/Guides</a></li>
<li class="leaf"><a href="videos.html" title="Videos and slide presentations">Videos/Slides</a></li>
<li class="leaf"><a href="programming-examples.html" title="">Code examples</a></li>
<li class="leaf last"><a href="archive.html" title="">Blog archive</a></li>
</ul></li>
<li class="expanded"><a href="activities.html" title="Activities">Activities</a><ul class="menu"><li class="leaf first"><a href="rr-prize.html" title="Reproducible Research Prizes">Reproducible Research Prizes</a></li>
<li class="leaf"><a href="soundsoftware2014.html" title="SoundSoftware 2014: Third Workshop on Software and Data for Audio and Music Research">SoundSoftware 2014</a></li>
<li class="leaf"><a href="soundsoftware2013.html" title="SoundSoftware 2013: Workshop on Software and Data for Audio and Music Research">SoundSoftware 2013</a></li>
<li class="leaf last"><a href="soundsoftware2012.html" title="SoundSoftware 2012: Workshop on Software and Data for Audio and Music Research">SoundSoftware 2012</a></li>
</ul></li>
<li class="expanded last"><a href="aboutus.html" title="">About</a><ul class="menu"><li class="leaf first"><a href="overview.html" title="Our aim">Our aim</a></li>
<li class="leaf"><a href="aboutus.html" title="">Who we are</a></li>
<li class="leaf last"><a href="contact.html" title="Contact us">Contact us</a></li>
</ul></li>
</ul> </div> <!-- /#primary -->
</div></div> <!-- /#navbar-inner, /#navbar -->
<div id="sidebar-left"><div id="sidebar-left-inner" class="region region-left">
<div id="block-block-9" class="block block-block region-odd odd region-count-1 count-1"><div class="block-inner">
<div class="content">
<p><a class="twitter-timeline" href="https://twitter.com/soundsoftwareuk" data-widget-id="368361225786126337" height="500" width="auto" data-chrome="nofooter noborders transparent" data-border-color="#3e442c" data-link-color="#be5600">Tweets by @soundsoftwareuk</a></p>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script> </div>
</div></div> <!-- /block-inner, /block -->
<div id="block-views-recent_stories-block_1" class="block block-views region-even even region-count-2 count-2"><div class="block-inner">
<h2 class="title">Recent notes</h2>
<div class="content">
<div class="view view-recent-stories view-id-recent_stories view-display-id-block_1 recentnotes view-dom-id-be64a291a69fdb2b52b18ff5e50056b3">
<div class="view-content">
<div class="views-row views-row-1 views-row-odd views-row-first">
<div class="views-field views-field-title"> <span class="field-content"><a href="rr-prize-mlsp-2014-winners.html">MLSP Prizes for Reproducibility: Winners announced!</a></span> </div>
<div class="views-field views-field-teaser"> <div class="field-content"><p>Announcing the winners of the MLSP 2014 and SoundSoftware.ac.uk Prizes for Reproducibility in Signal Processing, organised by SoundSoftware.ac.uk in conjuction with the IEEE Signal Processing Society for the 2014 IEEE International Workshop on Machine Learning for Signal Processing.</p>
</div> </div> </div>
<div class="views-row views-row-2 views-row-even">
<div class="views-field views-field-title"> <span class="field-content"><a href="soundsoftware2014-videos-available.html">SoundSoftware 2014: Videos now available!</a></span> </div>
<div class="views-field views-field-teaser"> <div class="field-content"><p>The SoundSoftware 2014 workshop, our third annual workshop on software and data in audio and music research, was just as enjoyable as the previous two. Because so much research in this field ends up being expressed through software, a software workshop turns out to be all about the means by which research becomes useful and relevant to people other than the original researchers—fertile ground for interesting and thought-provoking talks.</p>
<p>The workshop videos are now available online at <a href="soundsoftware2014.html" title="http://soundsoftware.ac.uk/soundsoftware2014">http://soundsoftware.ac.uk/soundsoftware2014</a>, so if you weren't able to make it in person, catch up here!</p>
</div> </div> </div>
<div class="views-row views-row-3 views-row-odd views-row-last">
<div class="views-field views-field-title"> <span class="field-content"><a href="soundsoftware2014-registernow.html">Register now for the SoundSoftware Third Workshop!</a></span> </div>
<div class="views-field views-field-teaser"> <div class="field-content"><p>Our third annual one-day workshop on Software and Data for Audio and Music Research takes place on the 8th of July 2014 at Queen Mary, University of London. The workshop includes talks on issues such as robust software development for audio and music research, reproducible research in general, management of research data, and open access. <a href="soundsoftware2014.html">Read more here</a>, clear your calendar, and register now!</p>
</div> </div> </div>
</div>
</div> </div>
</div></div> <!-- /block-inner, /block -->
</div></div> <!-- /#sidebar-left-inner, /#sidebar-left -->
<div id="sidebar-right"><div id="sidebar-right-inner" class="region region-right">
<div id="block-search-0" class="block block-search region-odd odd region-count-1 count-3"><div class="block-inner">
<div class="content">
<form action="http://soundsoftware.ac.uk/autumnschool2010attendees" accept-charset="UTF-8" method="post" id="search-block-form">
<div><div class="container-inline">
<div class="form-item" id="edit-search-block-form-1-wrapper">
<label for="edit-search-block-form-1">Search this site: </label>
<input type="text" maxlength="128" name="search_block_form" id="edit-search-block-form-1" size="15" value="" title="Enter the terms you wish to search for." class="form-text" />
</div>
<input type="submit" name="op" id="edit-submit" value="Search" class="form-submit" />
<input type="hidden" name="form_build_id" id="form-hOnifPtpWNOcWtc1hbRX5AZYhsoDWuu0TqRnHYzgAqU" value="form-hOnifPtpWNOcWtc1hbRX5AZYhsoDWuu0TqRnHYzgAqU" />
<input type="hidden" name="form_id" id="edit-search-block-form" value="search_block_form" />
</div>
</div></form>
</div>
</div></div> <!-- /block-inner, /block -->
<div id="block-block-8" class="block block-block region-even even region-count-2 count-4"><div class="block-inner">
<div class="content">
<p><a href="archive.html">Archive</a></p>
</div>
</div></div> <!-- /block-inner, /block -->
</div></div> <!-- /#sidebar-right-inner, /#sidebar-right -->
</div></div> <!-- /#main-inner, /#main -->
<div id="footer"><div id="footer-inner" class="region region-footer">
<div id="block-block-1" class="block block-block region-odd odd region-count-1 count-5"><div class="block-inner">
<div class="content">
<p><span class="f-left"><a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/" target="_blank"><img alt="Creative Commons Licence" style="margin-top: 2px; margin-right: 4px; border-width:0; float:left; " src="http://i.creativecommons.org/l/by-nc/3.0/88x31.png" /></a><span>This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons Attribution-NonCommercial 3.0 License</a>.<br /> © 2011 Queen Mary University of London </span></span><span class="f-right"><a rel="rss" href="rss.xml" target="_blank"><img alt="Subscribe to this site's RSS feed" style="margin-top: -2px; margin-right: 4px; border-width:0; float:left; " src="sites/all/themes/soundsoftware/socialnet_icons/rss_16.png" /></a><a rel="twitter" href="http://twitter.com/soundsoftwareuk" target="_blank"><img alt="Follow us on Twitter!" style="margin-top: -2px; margin-right: 4px; border-width:0; float:left; " src="sites/all/themes/soundsoftware/socialnet_icons/twitter_16.png" /></a><a rel="linkedin group" href="http://www.linkedin.com/groups?mostPopular=&gid=3472350" target="_blank"><img alt="Join or follow our LinkedIn group" style="margin-top: -2px; margin-right: 4px; border-width:0; float:left; " src="sites/all/themes/soundsoftware/socialnet_icons/linkedin_16.png" /></a></span></p>
</div>
</div></div> <!-- /block-inner, /block -->
</div></div> <!-- /#footer-inner, /#footer -->
</div></div> <!-- /#page-inner, /#page -->
</body>
</html>