<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="description" content="ReVISE">
<meta property="og:title" content="ReVISE"/>
<link rel="icon" type="image/x-icon" href="static/images/favicon.ico">
<title>ReVISE</title>
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro" rel="stylesheet">
<link rel="stylesheet" href="static/css/bulma.min.css">
<link rel="stylesheet" href="static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="static/css/bulma-slider.min.css">
<link rel="stylesheet" href="static/css/fontawesome.all.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="static/css/index.css">
<link rel="stylesheet" href="static/css/audio-table.css">
<link rel="stylesheet" type="text/css" href="static/css/dropdown_style.css">
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://documentcloud.adobe.com/view-sdk/main.js"></script>
<script defer src="static/js/fontawesome.all.min.js"></script>
<script src="static/js/bulma-carousel.min.js"></script>
<script src="static/js/bulma-slider.min.js"></script>
<script src="static/js/index.js"></script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">ReVISE: Self-Supervised Speech Resynthesis with Visual Input for Universal and Generalized Speech Enhancement</h1>
<div class="is-size-5 publication-authors">
<span class="author-block">
<a href="">Wei-Ning Hsu<sup>1</sup>,</a>
<a href="">Tal Remez<sup>1</sup>,</a>
<a href="">Bowen Shi<sup>1,3</sup>,</a>
<a href="">Jacob Donley<sup>2</sup>,</a>
<a href="">Yossi Adi<sup>1,4</sup></a>
<br>
<sup>1</sup>FAIR, Meta AI Research, <sup>2</sup>Meta Reality Labs Research, <br>
<sup>3</sup>Toyota Technological Institute at Chicago, <sup>4</sup>The Hebrew University of Jerusalem <br>
<tt>{wnhsu,talr,bshi,jdonley,adiyoss}@meta.com</tt>
<br>
<a href="">[paper]</a>
<a href="https://github.com/facebookresearch/av_hubert">[code]</a>
</span>
<div class="column has-text-centered">
<div class="publication-links">
<div class="grid-container">
<div class="grid-item">
<img src="static/images/model2.png"/>
</div>
<div class="grid-item">
<img src="static/images/illustration2.png" width="550"/>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<section class="section hero is-light">
<div class="container is-max-desktop">
<!-- Abstract. -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Prior works on improving speech quality with visual input typically study each type of auditory distortion separately (e.g., separation, inpainting, video-to-speech) and present tailored algorithms. This paper proposes to unify these subjects and study Generalized Speech Enhancement, where the goal is not to reconstruct the exact reference clean signal, but to focus on improving certain aspects of speech. In particular, this paper concerns intelligibility, quality, and video synchronization. We cast the problem as audio-visual speech resynthesis, which is composed of two steps: pseudo audio-visual speech recognition (P-AVSR) and pseudo text-to-speech synthesis (P-TTS). P-AVSR and P-TTS are connected by discrete units derived from a self-supervised speech model. Moreover, we utilize a self-supervised audio-visual speech model to initialize P-AVSR. The proposed model is coined ReVISE. ReVISE is the first high-quality model for in-the-wild video-to-speech synthesis and achieves superior performance on all LRS3 audio-visual enhancement tasks with a single model. To demonstrate its applicability in the real world, ReVISE is also evaluated on EasyCom, an audio-visual benchmark collected under challenging acoustic conditions with only 1.6 hours of training data. Similarly, ReVISE greatly suppresses noise and improves quality.
</p>
</div>
</div>
</div>
<!--/ Abstract. -->
</div>
</section>
<section class="hero">
<div class="hero-body">
<div class="container">
<h2 class="title is-3">Real-world noisy ego-centric recordings from the <a href="https://github.com/facebookresearch/EasyComDataset" target="_blank">EasyCom</a> dataset.</h2>
EasyCom contains ego-centric video samples recorded from glasses equipped with a microphone array and a camera.
The audio contains a significant amount of background noise and overlapping speech; hence, the enhancement task requires both denoising and separation.
The following samples are drawn from the ReVISE model trained on EasyCom.
<div class="grid-container">
<div class="grid-item">
Input video (distant mic)
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_12/Session_12_00-00-000_000004_distant_ch2.mp4" type="video/mp4">
</video>
Ref. video (close mic)
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_12/Session_12_00-00-000_000004_close.mp4" type="video/mp4">
</video>
Beamformed audio
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_12/Session_12_00-00-000_000004_distant_bf.mp4" type="video/mp4">
</video>
<b>Beamformed audio + ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_12/Session_12_00-00-000_000004_model_revise_bf.mp4" type="video/mp4">
</video>
</div>
<div class="grid-item">
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_01-00-293_000011_distant_ch2.mp4" type="video/mp4">
</video>
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_01-00-293_000011_close.mp4" type="video/mp4">
</video>
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_01-00-293_000011_distant_bf.mp4" type="video/mp4">
</video>
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_01-00-293_000011_model_revise_bf.mp4" type="video/mp4">
</video>
</div>
<div class="grid-item">
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_22-00-570_000021_distant_ch2.mp4" type="video/mp4">
</video>
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_22-00-570_000021_close.mp4" type="video/mp4">
</video>
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_22-00-570_000021_distant_bf.mp4" type="video/mp4">
</video>
<br>
<video width="300" height="300" controls>
<source src="static/videos/easycom/Session_4/Session_4_22-00-570_000021_model_revise_bf.mp4" type="video/mp4">
</video>
</div>
</div>
</div>
</div>
</section>
<section class="hero">
<div class="hero-body">
<div class="container">
<h2 class="title is-3">Video-to-speech synthesis with in-the-wild samples</h2>
We evaluate our universal ReVISE model trained on LRS3 with samples from the <a href="https://ai.facebook.com/blog/ai-that-understands-speech-by-looking-as-well-as-hearing/">AV-HuBERT blog</a> for video-to-speech synthesis.
We present samples for: (1) the input (silent) video; (2) the target audio; (3) the ReVISE model output.
The model generalizes well to samples not drawn from the training dataset.
<div class="grid-container">
<div class="grid-item">
Input video (silent)
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-empty/Scenario_1_Two_People_Clearly_Heard/Scenario_1_Clip_2_Two_People_Talking_Clearly_Heard.mp4" type="video/mp4">
</video>
Ref. video
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-raw/Scenario_1_Two_People_Clearly_Heard/Scenario_1_Clip_2_Two_People_Talking_Clearly_Heard.mp4" type="video/mp4">
</video>
<b>ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/output-empty/Scenario_1_Two_People_Clearly_Heard/Scenario_1_Clip_2_Two_People_Talking_Clearly_Heard.mp4" type="video/mp4">
</video>
</div>
<div class="grid-item">
Input video (silent)
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-empty/Scenario_2_One_person_speaking_and_noise/Scenario_2_Clip_3_one_person_speaking_and_noise_in_background__medium_level_guitar.mp4" type="video/mp4">
</video>
Ref. video
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-raw/Scenario_2_One_person_speaking_and_noise/Scenario_2_Clip_3_one_person_speaking_and_noise_in_background__medium_level_guitar.mp4" type="video/mp4">
</video>
<b>ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/output-empty/Scenario_2_One_person_speaking_and_noise/Scenario_2_Clip_3_one_person_speaking_and_noise_in_background__medium_level_guitar.mp4" type="video/mp4">
</video>
</div>
<div class="grid-item">
Input video (silent)
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-empty/Scenario_3__Interactive__talking_through_window/Scenario_3_Clip_2_project_due_2.mp4" type="video/mp4">
</video>
Ref. video
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-raw/Scenario_3__Interactive__talking_through_window/Scenario_3_Clip_2_project_due_2.mp4" type="video/mp4">
</video>
<b>ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/output-empty/Scenario_3__Interactive__talking_through_window/Scenario_3_Clip_2_project_due_2.mp4" type="video/mp4">
</video>
</div>
</div>
</div>
</div>
</section>
<section class="hero">
<div class="hero-body">
<div class="container">
<h2 class="title is-3">Audio-visual speech inpainting with in-the-wild samples</h2>
Similar to the section above, we evaluate our universal ReVISE model trained on LRS3 with samples from the <a href="https://ai.facebook.com/blog/ai-that-understands-speech-by-looking-as-well-as-hearing/">AV-HuBERT blog</a> for speech inpainting.
We present samples for: (1) the input video with 30%/50%/70% of frames dropped in the three columns from left to right; (2) the target audio; (3) the ReVISE model output.
The model generalizes well to samples not drawn from the training dataset.
<div class="grid-container">
<div class="grid-item">
Input video (30% frames dropped)
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-mask0.3/Scenario_1_Two_People_Clearly_Heard/Scenario_1_Clip_2_Two_People_Talking_Clearly_Heard.mp4" type="video/mp4">
</video>
Ref. video
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-raw/Scenario_1_Two_People_Clearly_Heard/Scenario_1_Clip_2_Two_People_Talking_Clearly_Heard.mp4" type="video/mp4">
</video>
<b>ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/output-mask0.3/Scenario_1_Two_People_Clearly_Heard/Scenario_1_Clip_2_Two_People_Talking_Clearly_Heard.mp4" type="video/mp4">
</video>
</div>
<div class="grid-item">
Input video (50% frames dropped)
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-mask0.5/Scenario_2_One_person_speaking_and_noise/Scenario_2_Clip_3_one_person_speaking_and_noise_in_background__medium_level_guitar.mp4" type="video/mp4">
</video>
Ref. video
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-raw/Scenario_2_One_person_speaking_and_noise/Scenario_2_Clip_3_one_person_speaking_and_noise_in_background__medium_level_guitar.mp4" type="video/mp4">
</video>
<b>ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/output-mask0.5/Scenario_2_One_person_speaking_and_noise/Scenario_2_Clip_3_one_person_speaking_and_noise_in_background__medium_level_guitar.mp4" type="video/mp4">
</video>
</div>
<div class="grid-item">
Input video (70% frames dropped)
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-mask0.7/Scenario_3__Interactive__talking_through_window/Scenario_3_Clip_2_project_due_2.mp4" type="video/mp4">
</video>
Ref. video
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/input-raw/Scenario_3__Interactive__talking_through_window/Scenario_3_Clip_2_project_due_2.mp4" type="video/mp4">
</video>
<b>ReVISE (ours)</b>
<video width="300" height="300" controls>
<source src="static/videos/avhubert_demo/output-mask0.7/Scenario_3__Interactive__talking_through_window/Scenario_3_Clip_2_project_due_2.mp4" type="video/mp4">
</video>
</div>
</div>
</div>
</div>
</section>
</body>
</html>