\section{Overview of The Attack}
\label{section:overview}
This section gives an overview of our attack system, which analyzes the user's fingertip movement to infer the locking pattern. The system takes as input a video segment recording the entire unlocking process and produces a small number of candidate patterns to be tested on the target device.
Figure~\ref{fig:fig2} depicts the five steps of our attack:
%\begin{enumerate}
\vspace{2mm}
\noindent \circled{1} \textbf{Filming and Video Preprocessing:} The attack begins with
filming how the pattern is drawn. The video footage can be filmed at a distance of
about 2 meters from the user using a mobile phone camera. After recording, the attacker
needs to cut out a video segment that contains the entire unlocking
process. We have shown that it is possible to automatically identify this video segment in some scenarios (Section~\ref{sec:identify}).
%After cutting out the video segment,
Then the attacker is asked to mark two areas of interest from a video frame: one area consists of
the fingertip used to draw the pattern, and the other consists of part of the device (see
Figure~\ref{fig:fig2} (b)).
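The two marked areas of interest can be represented as simple bounding boxes. The following minimal sketch (the coordinate values and helper name are illustrative assumptions, not part of our implementation) shows one way to store and query them:

```python
# Hypothetical sketch: each marked area of interest is stored as an
# axis-aligned bounding box (x, y, width, height) in pixel coordinates.
# The concrete pixel values below are made up for illustration.

fingertip_roi = (310, 540, 40, 40)   # small box around the fingertip
device_roi = (280, 200, 160, 90)     # box around part of the device

def contains(roi, point):
    """Check whether a pixel coordinate lies inside a (x, y, w, h) box."""
    x, y, w, h = roi
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

print(contains(fingertip_roi, (325, 560)))  # True
```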
\vspace{2mm}
\noindent \circled{2} \textbf{Track Fingertip Locations:} Once the areas of interest are highlighted, a computer vision algorithm is applied
to locate the fingertip in each video frame (Section~\ref{secction:shake}). The algorithm aggregates the successfully tracked fingertip locations to produce a fingertip movement trajectory.
This is illustrated in Figure~\ref{fig:fig2} (c). Note that at this stage the tracked trajectory is expressed from the camera's perspective.
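Tracking the device alongside the fingertip lets camera shake cancel out when the per-frame positions are aggregated. The sketch below illustrates this idea only; the function name and toy coordinates are assumptions, not our tracking algorithm:

```python
# Hypothetical sketch: aggregate per-frame fingertip positions into a
# trajectory expressed relative to a tracked device anchor point, so
# that camera motion common to both is removed.

def relative_trajectory(fingertip_pts, device_pts):
    """Subtract the device anchor from each fingertip position."""
    return [(fx - dx, fy - dy)
            for (fx, fy), (dx, dy) in zip(fingertip_pts, device_pts)]

# Toy example: the camera drifts right by 1 px per frame, but the
# fingertip does not move relative to the device.
fingertip = [(10, 20), (11, 20), (12, 20)]
device = [(0, 0), (1, 0), (2, 0)]
print(relative_trajectory(fingertip, device))  # [(10, 20), (10, 20), (10, 20)]
```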
\vspace{2mm}
\noindent \circled{3} \textbf{Filming Angle Transformation:} This step transforms the tracked fingertip locations from the camera's perspective to the user's.
We use an edge detection algorithm to automatically calculate the filming angle which is then used to perform the transformation (Section~\ref{sec:transformation}).
For example, Figure~\ref{fig:fig2} (c) will be transformed to Figure~\ref{fig:fig2} (d) to obtain a fingertip movement trajectory from the user's perspective.
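At its core, the transformation re-expresses the tracked points in the user's frame of reference once the filming angle is known. A minimal sketch, assuming the transformation reduces to a 2D rotation by the estimated angle (the function name, angle, and points are illustrative, not our actual edge-detection pipeline):

```python
import numpy as np

# Hypothetical sketch: rotate tracked points by the negative of the
# estimated filming angle so the trajectory is re-expressed from the
# user's (front-on) perspective. Input data is made up for illustration.

def to_user_perspective(points, filming_angle_deg):
    theta = np.deg2rad(-filming_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(points, dtype=float) @ rot.T

pts = [(1.0, 0.0), (0.0, 1.0)]
print(np.round(to_user_perspective(pts, 90.0), 6))
```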
\vspace{2mm}
\noindent \circled{4} \textbf{Identify and Rank Candidate Patterns:} In this step, our software automatically maps the tracked fingertip movement trajectory to a number of candidate patterns (Section~\ref{section:spea}).
We rank the candidate patterns based on a heuristic described in Section~\ref{section:identity}.
% For instance, the fingertip movement trajectory in Figure~\ref{fig:fig2} (d) could be mapped to a number of candidate patterns shown in Figure~\ref{fig:fig3}.
We show that our approach can reject most patterns, leaving no more than five candidate patterns to be tried on the target device.
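One simple way to picture the mapping step is snapping a trajectory's turning points onto the nearest dots of the 3$\times$3 pattern grid to form a candidate. The sketch below is a deliberately simplified stand-in, with made-up grid coordinates and input, and does not reflect the candidate-generation heuristic described later:

```python
# Hypothetical sketch: snap turning points of a transformed trajectory
# to the nearest dots of the 3x3 Android pattern grid, producing one
# candidate pattern as a sequence of dot numbers 1..9.

GRID = {(col, row): 3 * row + col + 1          # dots numbered 1..9
        for row in range(3) for col in range(3)}

def nearest_dot(x, y):
    col, row = min(GRID, key=lambda d: (d[0] - x) ** 2 + (d[1] - y) ** 2)
    return GRID[(col, row)]

def candidate_pattern(turning_points):
    pattern = []
    for x, y in turning_points:
        dot = nearest_dot(x, y)
        if not pattern or pattern[-1] != dot:   # drop consecutive repeats
            pattern.append(dot)
    return pattern

# An L-shaped swipe: down the left column, then across to the corner.
print(candidate_pattern([(0.1, -0.1), (0.0, 1.0), (0.1, 2.1), (1.9, 2.0)]))  # [1, 4, 7, 9]
```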
\vspace{2mm}
\noindent \circled{5} \textbf{Test Candidate Patterns:} In this final step, the attacker tests the candidate patterns on the target device.