\documentclass[conference]{IEEEtran}

\usepackage{listings} % Include the listings-package
\usepackage{color}
\usepackage{balance}
\usepackage{graphicx}
\usepackage{url}
\usepackage{tabularx,booktabs}
\usepackage{multirow}
\usepackage[normalem]{ulem}
\useunder{\uline}{\ul}{}

\definecolor{darkgreen}{rgb}{0,0.5,0}
\definecolor{mygreen}{rgb}{0,0.6,0}
\definecolor{mygray}{rgb}{0.5,0.5,0.5}
\definecolor{mymauve}{rgb}{0.58,0,0.82}
\lstset{ %
  backgroundcolor=\color{white},   % background color; requires \usepackage{color} or \usepackage{xcolor}
  basicstyle=\ttfamily\scriptsize, % size of the fonts used for the code
  breakatwhitespace=false,         % automatic breaks only at whitespace if true
  breaklines=true,                 % automatic line breaking
  captionpos=b,                    % caption position: bottom
  commentstyle=\color{mygreen},    % comment style
  deletekeywords={...},            % keywords to delete from the given language
  escapeinside={\%*}{*)},          % to embed LaTeX within the code
  extendedchars=true,              % allow non-ASCII characters (8-bit encodings only, not UTF-8)
  frame=single,                    % adds a frame around the code
  keepspaces=true,                 % keep spaces/indentation (may need columns=flexible)
  keywordstyle=\color{blue},       % keyword style
  % language=C,                    % the language of the code
  morecomment=[l]{--},
  morekeywords={property,set,is,type,constant,enumeration,end,applies,to,inherit,of,*,...}, % additional keywords
  numbers=left,                    % line-number placement (none, left, right)
  numbersep=5pt,                   % distance of line numbers from the code
  numberstyle=\tiny\color{mygray}, % style used for the line numbers
  rulecolor=\color{black},         % fixed frame color (otherwise line breaks in colored text recolor the frame)
  showspaces=false,                % do not mark spaces; overrides 'showstringspaces'
  showstringspaces=false,          % do not underline spaces within strings
  showtabs=false,                  % do not mark tabs within strings
  stepnumber=1,                    % number every line
  stringstyle=\color{mymauve},     % string literal style
  tabsize=2,                       % default tab size: 2 spaces
  title=\lstname                   % show filename of files included with \lstinputlisting
}

\providecommand{\keywords}[1]{\textbf{\textit{Index terms---}} #1}

\ifCLASSINFOpdf
% \usepackage[pdftex]{graphicx}
% declare the path(s) where your graphic files are
% \graphicspath{{../pdf/}{../jpeg/}}
% and their extensions so you won't have to specify these with
% every instance of \includegraphics
% \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
\else
% or other class option (dvipsone, dvipdf, if not using dvips). graphicx
% will default to the driver specified in the system graphics.cfg if no
% driver is specified.
% \usepackage[dvips]{graphicx}
% declare the path(s) where your graphic files are
% \graphicspath{{../eps/}}
% and their extensions so you won't have to specify these with
% every instance of \includegraphics
% \DeclareGraphicsExtensions{.eps}
\fi

\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{A Trace-Based Study of SMB Network File System Workloads in an Academic Enterprise}

\author{\IEEEauthorblockN{Paul Wortman and John Chandy}
\IEEEauthorblockA{Department of Electrical and Computer Engineering\\
University of Connecticut, USA\\
(paul.wortman, john.chandy)@uconn.edu
\\
+1-860-486-5047/+1-860-486-2447}}

% make the title area
\maketitle

\begin{abstract}
Storage system traces are important for examining real-world applications, studying potential bottlenecks, and driving benchmarks in the evaluation of new system designs.
While file system traces have been well studied in earlier work, it has been some time since the last examination of the SMB network file system.
The purpose of this work is to continue previous SMB studies to better understand the use of the protocol in a real-world production system in use at the University of Connecticut.
The main contribution of our work is the exploration of I/O behavior in modern file system workloads as well as new examinations of the inter-arrival times and run times for I/O events.
We further investigate whether the recent standard models for traffic remain accurate.
Our findings reveal interesting data relating to the number of read and write events. We notice that the number of read and write events is significantly smaller than the number of creates, and that the average number of bytes exchanged per I/O is much smaller than what has been seen in previous studies.
%the average of bytes transferred over the wire is much smaller than what has been seen in previous studies.
Furthermore, we find an increase in the use of metadata for overall network communication that can be taken advantage of through the use of smart storage devices.
\keywords{Server Message Block, Storage System Tracing, Network Benchmark, Storage Systems, Distributed I/O.}
\end{abstract}

\section{Introduction}
%Mention:
%\begin{itemize}
% \item Why is it important to re-examine the SMB protocol?
% \item Why does examination of network use matter?
% \item Need to ensure hash of data and not saving any of the original traffic packets.
%\end{itemize}
Over the last twenty years, data storage provisioning has been centralized through the use of network file systems. The architectures of these storage systems can vary from storage area networks (SAN), network attached storage (NAS), clustered file systems, hybrid storage, amongst others. However, the front-end client-facing network file system protocol in most enterprise IT settings tends to be, for the most part, solely SMB (Server Message Block) because of the preponderance of Microsoft (MS) Windows clients. While there are other network file systems such as the Network File System (NFS) and clustered file systems such as Ceph, PanFS, and OrangeFS, they tend to be used less extensively in most non-research networks.

In spite of the prevalence of SMB usage within most enterprise networks, there has been very little analysis of SMB workloads in prior academic research. The last major study of SMB was more than a decade ago~\cite{leung2008measurement}, and the nature of storage usage has changed dramatically over the last decade.
It is always important to revisit commonly used protocols to examine their use in comparison to the expected use case(s). This is doubly so for network communications, because the nuances of networked data exchange can greatly influence the effectiveness and efficiency of a chosen protocol. Since an SMB-based trace study has not been undertaken recently, we took a look at its current implementation and use in a large university network.
%Due to the sensitivity of the captured information, we ensure that all sensitive information is hashed and that the original network captures are not saved.

Our study is based on network packet traces collected on the University of Connecticut's centralized storage facility over a period of three weeks in May 2019. This trace-driven analysis can help in the design of future storage products as well as provide data for future performance benchmarks.
%Benchmarks are important for the purpose of developing technologies as well as taking accurate metrics. The reasoning behind this tracing capture work is to eventually better develop accurate benchmarks for network protocol evaluation.
Benchmarks allow for the stress testing of various aspects of a system (e.g. network, single system). Aggregate data analysis collected from traces can lead to the development of synthetic benchmarks. Traces can also expose system patterns that can then be reflected in synthetic benchmarks. Finally, the traces themselves can drive system simulations that can be used to evaluate prospective storage architectures.

We created a new tracing system to collect data from the university storage network system. The tracing system was built around the high-speed PF\_RING packet capture system~\cite{pfringWebsite} and required the use of proper hardware and software to handle the incoming data. We also created a new trace capture format based on the DataSeries structured data format developed by HP~\cite{DataSeries}.
% PF\_RING section
%The addition of PF\_RING lends to the tracing system by minimizing the copying of packets which, in turn, allows for more accurate timestamping of incoming traffic packets being captured~\cite{Orosz2013,skopko2012loss,pfringWebsite,PFRINGMan}.
PF\_RING acts as a kernel module that aids in minimizing packet loss and timestamping issues by not passing packets through the kernel data structures.
%The other reason PF\_RING is instrumental is that it functions with the 10Gb/s hardware that was installed into the Trace1 server; allowing for full throughput from the network tap on the UITS system.
% DataSeries + Code section
DataSeries was modified to filter for specific SMB protocol fields, and we wrote analysis tools to parse and dissect the captured packets. Specific fields were chosen as the interesting fields kept for analysis.
%It should be noted that this was done originally arbitrarily and changes/additions have been made as the value of certain fields were determined to be worth examining; e.g. multiple runs were required to refine the captured data for later analysis.
The DataSeries data format allowed us to create data analysis code that focuses on I/O events and ID tracking: e.g. Tree ID (TID) and User ID (UID). The future vision for this information is to combine ID tracking with the OpLock information in order to track resource sharing of the different clients on the network, as well as to use IP information to recreate communication in a larger network trace to establish a better benchmark.

%Focus should be about analysis and new traces
The contributions of this work are the new traces of SMB traffic over a large university network as well as new analysis of this traffic. Our new examination of the captured data reveals that despite the streamlining of the CIFS/SMB protocol to be less ``chatty'', the majority of SMB communication is still metadata-based I/O rather than actual data I/O. We found that read operations occur in greater numbers than writes and cause a larger overall number of bytes to pass over the network. Additionally, the average number of bytes transferred for each write I/O is smaller than that of the average read operation. We also find that the current standard for modeling network I/O holds for the majority of operations, while a more representative model needs to be developed for reads.
%\textcolor{red}{Add information about releasing the code?}

\section{Related Work}
\begin{table*}[h]
\centering
\begin{tabular}{|r|c|c|c|c|c|}
\hline
Study & Date of Traces & FS/Protocol & Network FS & Trace Approach & Workload \\ \hline
Ousterhout, \textit{et al.}~\cite{ousterhout1985trace} & 1985 & BSD & & Dynamic & Engineering \\ \hline
Ramakrishnan, \textit{et al.}~\cite{ramakrishnan1992analysis} & 1988-89 & VAX/VMS & x & Dynamic & Engineering, HPC, Corporate \\ \hline
Baker, \textit{et al.}~\cite{baker1991measurements} & 1991 & Sprite & x & Dynamic & Engineering \\ \hline
Gribble, \textit{et al.}~\cite{gribble1996self} & 1991-97 & Sprite, NFS, VxFS & x & Both & Engineering, Backup \\ \hline
Douceur and Bolosky~\cite{douceur1999large} & 1998 & FAT, FAT32, NTFS & & Snapshots & Engineering \\ \hline
Vogels~\cite{vogels1999file} & 1998 & FAT, NTFS & & Both & Engineering, HPC \\ \hline
Zhou and Smith~\cite{zhou1999analysis} & 1999 & VFAT & & Dynamic & PC \\ \hline
Roselli, \textit{et al.}~\cite{roselli2000comparison} & 1997-00 & VxFS, NTFS & & Dynamic & Engineering, Server \\ \hline
Malkani, \textit{et al.}~\cite{malkani2003passive} & 2001 & NFS & x & Dynamic & Engineering, Email \\ \hline
Agrawal, \textit{et al.}~\cite{agrawal2007five} & 2000-2004 & FAT, FAT32, NTFS & & Snapshots & Engineering \\ \hline
Leung, \textit{et al.}~\cite{leung2008measurement} & 2007 & CIFS & x & Dynamic & Corporate, Engineering \\ \hline
%Traeger, \textit{et al.}~\cite{traeger2008nine} & 2008 & FUSE & x & Snapshots & Backup \\ \hline
Vrable, \textit{et al.}~\cite{vrable2009cumulus} & 2009 & FUSE & x & Snapshots & Backup \\ \hline
Benson, \textit{et al.}~\cite{benson2010network} & 2010 & AFS, MapReduce, NCP, SMB & x & Dynamic & Academic, Corporate \\ \hline
Chen, \textit{et al.}~\cite{chen2012interactive} & 2012 & MapReduce & x & Dynamic & Corporate \\ \hline
This paper & 2020 & SMB & x & Dynamic & Academic, Engineering, Backup \\ \hline
\end{tabular}
\caption{Summary of major file system studies over the past decades. For each study the table shows the dates of the trace data, the file system or protocol studied, whether it involved network file systems, the trace methodology used, and the workloads studied. Dynamic trace studies are those that involve traces of live requests; snapshot studies involve snapshots of file system contents.}
\label{tbl:studySummary}
\vspace{-2em}
\end{table*}
\label{Previous Advances Due to Testing}
%In this section we discuss previous studies examining traces and testing that has advanced benchmark development.
We summarize major works in trace study in Table~\ref{tbl:studySummary}.
%In addition we examine issues that occur with traces and the assumptions in their study.
Trace collection and analysis in previous studies have provided important insights and lessons, such as observations of read/write event changes, overhead concerns originating in system implementation, bottlenecks in communication, and other revelations found in the traces.
Previous tracing work has shown that one of the largest and broadest hurdles to tackle is that traces (and benchmarks) must be tailored to the system being tested. There are always some generalizations taken into account, but these generalizations can also be a major source of error (e.g. timing, accuracy, resource usage)~\cite{vogels1999file,malkani2003passive,seltzer2003nfs,anderson2004buttress,Orosz2013,dabir2007bottleneck,skopko2012loss,traeger2008nine,ruemmler1992unix}.
To produce a benchmark with high fidelity one needs to understand not only the technology being used but how it is being implemented within the system~\cite{roselli2000comparison,traeger2008nine,ruemmler1992unix}. All of these aspects lend to the behavior of the system, from timing and resource elements to how the managing software governs actions~\cite{douceur1999large,malkani2003passive,seltzer2003nfs}. Furthermore, in pursuing this work one may find unexpected results and learn new things through examination~\cite{leung2008measurement,roselli2000comparison,seltzer2003nfs}.
These studies are required in order to evaluate the development of technologies and methodologies along with furthering knowledge of different system aspects and capabilities. As has been pointed out by past work, the design of systems is usually guided by an understanding of the file system workloads and user behavior.
%It is for that reason that new studies are constantly performed by the science community, from large scale studies to individual protocol studies~\cite{leung2008measurement,vogels1999file,roselli2000comparison,seltzer2003nfs,anderson2004buttress}. Even within these studies, the information gleaned is only as meaningful as the considerations of how the data is handled.

Leung et al.~\cite{leung2008measurement} found that over 67\% of files were never opened by more than one client and that read-write access patterns are much more frequent than in past studies.
%If files were shared it was rarely concurrently and usually as read-only; where 5\% of files were opened by multiple clients concurrently and 90\% of the file sharing was read only.
%Concerns of the accuracy achieved of the trace data was due to using standard system calls as well as errors in issuing I/Os leading to substantial I/O statistical errors.
% Anderson Paper
Anderson et al.~\cite{anderson2004buttress} found that a source of decreased precision came from the kernel overhead for providing timestamp resolution. This introduces substantial errors in the observed system metrics due to the use of inaccurate tools when benchmarking I/O systems. These errors in perceived I/O response times can range from +350\% to -15\%.
%I/O benchmarking widespread practice in storage industry and serves as basis for purchasing decisions, performance tuning studies and marketing campaigns.
Inaccuracies in scheduling I/O can result in as much as a factor of 3.5 difference in measured response time and a factor of 26 in measured queue sizes; these inaccuracies pose too much of an issue to ignore.

Orosz and Skopko~\cite{Orosz2013} examined the effect of the kernel on packet loss and showed that when taking network measurements, the precision of the timestamping of packets is a more important criterion than low clock offset, especially when measuring packet inter-arrival times and round-trip delays at a single point of the network. One solution for network capture is the tool Dumpcap. However, the concern with Dumpcap is that it is a single-threaded application and was suspected to be unable to handle newly arriving packets due to the small size of the kernel buffer. Work by Dabir and Matrawy~\cite{dabir2007bottleneck} attempted to overcome this limitation by using two semaphores to buffer incoming strings and improve the writing of packet information to disk.
%Narayan and Chandy examined the concerns of distributed I/O and the different models of parallel application I/O.
%There are five major models of parallel application I/O. (1) Single output file shared by multiple nodes. (2) Large sequential reads by a single node at the beginning of computation and large sequential writes by a single node at the end of computation. (3) Checkpointing of states. (4) Metadata and read intensive (e.g. small data I/O and frequent directory lookups for reads).
%Due to the striping of files across multiple nodes, this can cause any read or write to access all the nodes; which does not decrease the inter-arrival times (IATs) seen. As the number of I/O operations increases and the number of nodes increases, the IAT times decreased.
Skopk\'o~\cite{skopko2012loss} examined the concerns of software-based capture solutions and observed that software solutions relied heavily on OS packet processing mechanisms. Furthermore, depending on the mode of operation (e.g. interrupt or polling), the timestamping of packets would change.

As seen in previous trace work~\cite{leung2008measurement,roselli2000comparison,seltzer2003nfs}, understanding how computer systems are actually used, versus how they were expected to be used, has allowed for great strides in eliminating actual bottlenecks rather than spending unnecessary time working on imagined ones. Without illumination of these underlying actions (e.g. read-write ratios, file death rates, file access rates), these issues cannot be readily tackled.

\section{Background}
%\subsection{Server Message Block}
The Server Message Block (SMB) protocol is an application-layer network protocol mainly used for providing shared access to files, printers, and serial ports, for miscellaneous communications between nodes on the network, as well as for providing an authenticated inter-process communication mechanism.
%The majority of usage for the SMB protocol involves Microsoft Windows. Almost all implementations of SMB servers use NT Domain authentication to validate user-access to resources
The SMB 1.0 protocol~\cite{SMB1Spec} has been found to have a significant impact on performance due to latency issues. Monitoring revealed a high degree of ``chattiness'' and a disregard of network latency between hosts. Solutions to this problem were included in the updated SMB 2.0 protocol, which decreases ``chattiness'' by reducing commands and sub-commands from over a hundred to nineteen~\cite{SMB2Spec}. Additional changes, most significantly increased security, were implemented in the SMB 3.0 protocol (previously named SMB 2.2).
%\textcolor{red}{\textbf{Add information about SMB 2.X/3?}}

\begin{figure*}[ht!]
\includegraphics[width=\textwidth]{./images/packetcapturetopology.png}
\caption{Visualization of Packet Capturing System}
\label{fig:captureTopology}
\vspace{-1em}
\end{figure*}

The rough order of communication for SMB session file interaction contains five steps. First is a negotiation where a Microsoft SMB Protocol dialect is determined. Second, a session is established to determine the share-level security. Third, a tree connect determines the Tree ID (TID) for the share to be accessed. Fourth, a create request returns a file ID (FID) for the file requested by the client. Finally, I/O operations are performed using the FID obtained in the previous step.
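
To make this handshake concrete, the following minimal sketch (illustrative only, not part of our tracing code; all class, method, dialect, share, and file names are hypothetical) walks a single client session through the five steps:

\begin{lstlisting}[language=Python]
# Hypothetical sketch of the five-step SMB2 session establishment.
class SMB2Session:
    def __init__(self):
        self.dialect = None  # agreed protocol dialect
        self.uid = None      # User ID assigned at session setup
        self.tids = {}       # share path -> Tree ID (TID)
        self.fids = {}       # file name  -> File ID (FID)

    def negotiate(self, offered):          # step 1: pick a dialect
        self.dialect = max(offered)
        return self.dialect

    def session_setup(self, user):         # step 2: authenticate, get UID
        self.uid = abs(hash(user)) & 0xFFFF
        return self.uid

    def tree_connect(self, share):         # step 3: connect share, get TID
        self.tids[share] = len(self.tids) + 1
        return self.tids[share]

    def create(self, name):                # step 4: open/create file, get FID
        self.fids[name] = len(self.fids) + 1
        return self.fids[name]

    def read(self, fid, offset, length):   # step 5: I/O against the FID
        return ("READ", fid, offset, length)

s = SMB2Session()
s.negotiate(["2.0.2", "2.1"])
s.session_setup("student")
tid = s.tree_connect("\\\\server\\homes")
fid = s.create("report.docx")
s.read(fid, 0, 64)
\end{lstlisting}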

% Information relating to the capturing of SMB information
The only data that needs to be tracked from the SMB traces are the UID (User ID) and TID for each session. The SMB commands also include a MID (Multiplex ID) value that is used for tracking individual packets in each established session, and a PID (Process ID) that tracks the process running the command or series of commands on a host.
For the purposes of our tracing, we do not track the MID or PID information.
%
One nuance of SMB protocol I/O to note is that SMB/SMB2 write requests are the packets that push bytes over the wire, while for SMB/SMB2 read operations it is the response packets that carry the data.
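
A sketch of the byte-accounting rule this implies for trace analysis (the record type and field names here are our own illustrative choices, not the fields of the actual capture format):

\begin{lstlisting}[language=Python]
from dataclasses import dataclass

@dataclass
class SMBRecord:        # hypothetical parsed-record type
    command: str        # e.g. "read" or "write"
    is_request: bool    # request vs. response packet
    data_length: int    # payload bytes carried by this packet

def wire_bytes(r: SMBRecord) -> int:
    """File data (in bytes) that this packet moved over the wire."""
    if r.command == "write" and r.is_request:
        return r.data_length      # write requests push the bytes
    if r.command == "read" and not r.is_request:
        return r.data_length      # read responses return the bytes
    return 0                      # other packets carry no file data
\end{lstlisting}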

%\begin{itemize}
% \item SMB/SMB2 write request is the command that pushes bytes over the wire. \textbf{Note:} the response packet only confirms their arrival and use (e.g. writing).
% \item SMB/SMB2 read response is the command that pushes bytes over the wire. \textbf{Note:} The request packet only asks for the data.
%\end{itemize}
% Make sure to detail here how exactly IAT/RT are each calculated
%\textcolor{red}{Add writing about the type of packets used by SMB. Include information about the response time of R/W/C/General (to introduce them formally). Also can bring up the relation between close and other requests.}
%\textcolor{blue}{It is worth noting that for the SMB2 protocol, the close request packet is used by clients to close instances of a file that were opened with a previous create request packet.}

%\begin{figure}
% \includegraphics[width=0.5\textwidth]{./images/smbPacket.jpg}
% \caption{SMB Packet Header Format}
% \label{fig:smbPacket}
%\end{figure}

\section{Packet Capturing System}
%In this section, we describe the packet capturing system as well as decisions made that influence its capabilities. We illustrate the existing university network filesystem as well as our methods for ensuring high-speed packet capture. Then, we discuss the analysis code we developed for examining the captured data.
% and on the python dissection code we wrote for performing traffic analysis.

\subsection{University Storage System Overview}
We collected traces from the University of Connecticut Information Technology Services (ITS) centralized storage server, which consists of five Microsoft file server cluster nodes. These blade servers are used to host SMB file shares for various departments at the university as well as personal drive share space for faculty, staff and students, along with at least one small group of users. Each server is capable of handling 1~Gb/s of traffic in each direction (i.e. outbound and inbound traffic). Altogether, the five-blade server system can in theory handle 5~Gb/s of data traffic in each direction.
%Some of these blade servers have local storage but the majority do not have any.
The blade servers serve as SMB heads, but the actual storage is served by SAN storage nodes that sit behind them. This system does not currently implement load balancing. Instead, the servers are set up to spread the load with a static distribution across four of the active cluster nodes, while the passive fifth node takes over in the case of any other node going down.
The actual tracing was performed with a tracing server connected to a switch outfitted with a packet duplicating element, as shown in the topology diagram in Figure~\ref{fig:captureTopology}. A 10~Gb/s network tap was installed in the file server switch, allowing our storage server to obtain a copy of all network traffic going to the five file servers. The reason for using 10~Gb/s hardware is to help ensure that the system is able to capture information on the network at peak theoretical throughput.

\subsection{High-speed Packet Capture}
\label{Capture}
%The packet capturing aspect of the tracing system is fairly straightforward.
%On top of the previously mentioned alterations to the system (e.g. PF\_RING), the capture of packets is done through the use of \textit{tshark}, \textit{pcap2ds}, and \textit{inotify} programs.
%The broad strokes are that incoming SMB/CIFS information comes from the university's network. All packet and transaction information is passed through a duplicating switch that then allows for the tracing system to capture these packet transactions over a 10 Gb port. These packets are passed along to the \textit{tshark} packet collection program which records these packets into a cyclical capturing ring. A watchdog program (\textit{inotify}) watches the directory where all of these packet-capture (pcap) files are being stored. As a new pcap file is completed \textit{inotify} passes the file to \textit{pcap2ds} along with what protocol is being examined (i.e. SMB). The \textit{pcap2ds} program reads through the given pcap files,
In order to maximize our faithful capture of the constant rate of traffic, we implement on the tracing server an ntop~\cite{ntopWebsite} solution called PF\_RING~\cite{pfringWebsite} to dramatically improve the storage server's packet capture speed.
%A license was obtained for scholastic use of PF\_RING. PF\_RING implements a ring buffer to provide fast and efficient packet capturing. Having implemented PF\_RING, the next step was to
We had to tune an implementation of \texttt{tshark} (Wireshark's terminal-based packet capture tool) to maximize the packet capture rate.
\texttt{tshark} outputs \texttt{.pcap} files which capture all of the data present in packets on the network. We configure \texttt{tshark} so that it only captures SMB packets. Furthermore, to optimize this step, a capture ring buffer flag is used to minimize the amount of space used to write \texttt{.pcap} files, while optimizing the amount of time needed to filter data from the \texttt{.pcap} files. Each file in the capture ring buffer was 64,000~kB; once a file reaches this size, \texttt{tshark} switches to the next file.
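
As a rough illustration (the interface name and the port-based SMB capture filter below are assumptions on our part; only the 64,000~kB ring-buffer file size matches our configuration), the capture invocation looks approximately like:

\begin{lstlisting}[language=Python]
import subprocess

# Sketch of the ring-buffer capture; "eth1" and the SMB port filter
# are illustrative assumptions, not our exact production settings.
subprocess.run([
    "tshark",
    "-i", "eth1",                          # capture interface (assumed)
    "-f", "tcp port 445 or tcp port 139",  # keep only SMB/CIFS traffic
    "-b", "filesize:64000",                # switch files at 64,000 kB
    "-w", "smb_trace.pcap",                # base name of the ring files
], check=True)
\end{lstlisting}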

%To simplify this aspect of the capturing process, the entirety of the capturing, dissection, and permanent storage was all automated through watch-dog scripts.
The \texttt{.pcap} files from \texttt{tshark} do not lend themselves to easy data analysis, so we translate these files into \texttt{.ds} files using the DataSeries~\cite{DataSeries} format, an XML-based structured data format designed to be self-descriptive, storage and access efficient, and highly flexible.
%The system for taking captured \texttt{.pcap} files and writing them into the DataSeries format (i.e. \texttt{.ds}) does so by first creating a structure (based on a pre-written determination of the data desired to capture). Once the code builds this structure, it then reads through the captured traffic packets while dissecting and filling in the prepared structure with the desired information and format.
For our purposes, there is no need to track all data that is exchanged, only information that illuminates the behavior of the clients and servers that interact over the network (i.e. I/O transactions). It should also be noted that all sensitive information being captured by the tracing system is hashed to protect the privacy of the users of the storage system. Furthermore, the DataSeries file retains only the first 512 bytes of each SMB packet, enough to capture the SMB header information that contains the I/O information we seek, while the body of the SMB traffic is not retained in order to better ensure privacy. The reasoning for this limit was to allow for the capture of longer SMB AndX message chains due to the negotiated \textit{MaxBufferSize}. It is worth noting that in the case of larger SMB headers some information is lost; this is a trade-off made to provide, on average, a correctly sized SMB header, but it does lead to scenarios where some information may be captured incompletely. This scenario only occurs for large AndX chains in the SMB protocol, since the SMB2 header is fixed at 72 bytes. In those scenarios the AndX messages specify only a single SMB header, with the rest of the AndX chain attached in a series of block pairs.
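
A minimal sketch of this anonymization step, assuming a single sensitive string field per record (the real \texttt{pcap2ds} dissection handles many fields):

\begin{lstlisting}[language=Python]
import hashlib

HEADER_KEEP = 512  # bytes of each SMB packet retained in the .ds file

def anonymize(packet: bytes, filename: str):
    # Hash the sensitive field so no identifying text is stored, and
    # truncate the packet so only header-sized data is retained.
    hashed = hashlib.sha256(filename.encode("utf-8")).hexdigest()
    return hashed, packet[:HEADER_KEEP]
\end{lstlisting}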

\subsection{DataSeries Analysis}
Building upon existing code for the interpretation and dissection of the captured \texttt{.ds} files, we developed C/C++ code to examine the captured traffic information. From this analysis, we are able to capture read, write, create and general I/O information at both a global scale and an individual tracking ID (UID/TID) level. In addition, read and write buffer size information is tracked, as well as the inter-arrival and response times. Also included in this data are oplock information and IP addresses. The main contribution of this step is to aggregate observed data for later interpretation of the results.
This step also creates an easily digestible output that can be used to re-create all tuple information for SMB/SMB2 sessions that are witnessed over the entire time period.
Sessions are any communication where a valid UID and TID are used.
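
A simplified Python rendition of the per-(UID, TID) aggregation that the C/C++ analysis performs might look like the following (the record fields reuse the hypothetical \texttt{SMBRecord} layout sketched earlier, extended with \texttt{uid}, \texttt{tid}, and \texttt{timestamp}):

\begin{lstlisting}[language=Python]
from collections import defaultdict

# Per-(UID, TID) counters mirroring what the C/C++ analysis tracks;
# all field and key names here are illustrative.
stats = defaultdict(lambda: {
    "reads": 0, "writes": 0, "creates": 0, "general": 0,
    "read_bytes": 0, "write_bytes": 0,
    "last_seen": None, "iats": [],
})

def account(r):
    s = stats[(r.uid, r.tid)]
    if r.command == "read":
        s["reads"] += 1
        s["read_bytes"] += r.data_length
    elif r.command == "write":
        s["writes"] += 1
        s["write_bytes"] += r.data_length
    elif r.command == "create":
        s["creates"] += 1
    else:                        # everything else is a general operation
        s["general"] += 1
    if s["last_seen"] is not None:
        s["iats"].append(r.timestamp - s["last_seen"])  # inter-arrival time
    s["last_seen"] = r.timestamp
\end{lstlisting}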

%\textcolor{red}{Add information about if the code will be publicly shared?}

%\subsection{Python Dissection}
%The final step of our SMB/SMB2 traffic analysis system is the dissection of the \texttt{AnalysisModule} output using the pandas data analysis library~\cite{pandasPythonWebsite}. The pandas library is a python implementation similar to R. In this section of the analysis structure, the generated text file is tokenized and placed into specific DataFrames representing the data seen for each 15 minute period. The python code is used for the analysis and further dissection of the data. This is where the cumulative distribution frequency and graphing of collected data is performed. Basic analysis and aggregation is also performed in this part of the code. This analysis includes the summation of individual session I/O (e.g. reads, writes, creates) as well as the collection of inter arrival time data and response time data.

\section{Data Analysis}
\label{sec:data-analysis}

\begin{table}[]
\centering
\begin{tabular}{|l|l|}
\hline
% & Academic Engineering \\ \hline
%Maximum Tuples in 15-min Window & 36 \\ %\hline
%Total Tuples Seen & 2721 \\ \hline
%Maximum Sessions in 15-min Window & 35 \\ %\hline
%Maximum Non-Session in 15-min Window & 2 \\ \hline
Total Days & 21 \\ %\hline
Total Sessions & 2,413,589 \\ %\hline
%Total Non-Sessions & 279006484 \\ \hline
Number of SMB Operations & 281,419,686 (100\%)\\ %\hline
Number of General SMB Operations & 210,705,867 (74.87\%) \\ %\hline
Number of Creates & 54,486,043 (19.36\%) \\ %\hline
Number of Read I/Os & 8,355,557 (2.97\%) \\ %\hline
Number of Write I/Os & 7,872,219 (2.80\%) \\ %\hline
R:W I/O Ratio & 1.06 \\ \hline
Total Data Read (GB) & 0.97 \\ %\hline
Total Data Written (GB) & 0.6 \\ %\hline
Average Read Size (B) & 144 \\ %\hline
Average Write Size (B) & 63 \\ \hline
%Percentage of Read Bytes of Total Data & 99.4\% \\ %\hline
%Percentage of Written Bytes of Total Data & 0.6\% \\ %\hline
%Total R:W Byte Ratio & 166.446693744549 \\ %\hline
%Average R:W Byte Ratio & 0.253996031053668 \\ \hline
\end{tabular}
\caption{\label{tbl:TraceSummaryTotal}Summary of trace I/O statistics for the period of April 30th, 2019 to May 20th, 2019}
\vspace{-2em}
\end{table}
% NOTE: Not sure but this reference keeps referencing the WRONG table

Table~\ref{tbl:TraceSummaryTotal} shows a summary of the SMB traffic captured, statistics of the I/O operations, and the read/write data exchange observed for the network filesystem. This information is further detailed in Table~\ref{tbl:SMBCommands}, which illustrates that the majority of I/O operations are general (74.87\%). As shown in Table~\ref{tbl:SMBCommands2}, general I/O includes metadata commands such as connect, close, query info, etc.

Our examination of the collected network filesystem data revealed interesting patterns for the current use of CIFS/SMB in a large academic setting. The first is that there is a major shift away from read and write operations towards more metadata-based ones. This matches the last CIFS observations made by Leung et~al.~\cite{leung2008measurement} that files were being generated and accessed infrequently. The change in operations is due to a movement of use activity from reading and writing data to simply checking file and directory metadata. However, since the earlier study, SMB has transitioned to the SMB2 protocol, which was supposed to be less ``chatty''. As a result, we would expect fewer general SMB operations. Table~\ref{tbl:SMBCommands} shows a breakdown of SMB and SMB2 usage over the capture period. From this table, one can see that the SMB2 protocol makes up $99.14$\% of total network operations compared to just $0.86$\% for SMB, indicating that most clients have upgraded to SMB2. However, $74.66$\% of SMB2 I/O are still general operations. Contrary to the purpose of implementing the SMB2 protocol, there is still a large amount of general I/O.
%While CIFS/SMB protocol has less metadata operations, this is due to a depreciation of the SMB protocol commands, therefore we would expect to see less total operations (e.g. $0.04$\% of total operations).
%The infrequency of file activity is further strengthened by our finding that within a week long window of time there are no Read or Write inter arrival times that can be calculated.
%\textcolor{red}{XXX we are going to get questioned on this. its not likely that there are no IATs for reads and writes}
%General operations happen at very high frequency with inter arrival times that were found to be relatively short (1317$\mu$s on average), as shown in Table~\ref{tbl:PercentageTraceSummary}.
Taking a deeper look at the SMB2 operations, shown in Table~\ref{tbl:SMBCommands2}, we see that $9.06$\% of all SMB2 operations are negotiate commands. These are commands sent by the client to notify the server which dialects of the SMB2 protocol the client can understand. The three most common commands are close, tree connect, and query info.
The latter two relate to metadata information of the shares and files accessed. The close operations, however, correspond to the create operations; note that the create command is also used to open an existing file. Notice that the number of closes is greater than the total number of create operations by $9.35$\%. These extra close operations are most likely due to applications issuing close operations that do not need to be performed.

\begin{table}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
I/O Operation & SMB & SMB2 & Both \\ \hline
General Operations & 2,418,980 & 208,286,887 & 210,705,867 \\
General \% & 99.91\% & 74.66\% & 74.87\% \\ %\hline
Create Operations & 0 & 54,486,043 & 54,486,043 \\
Create \% & 0.00\% & 19.53\% & 19.36\% \\
Read Operations & 1,931 & 8,353,626 & 8,355,557 \\
Read \% & 0.08\% & 2.99\% & 2.97\% \\
Write Operations & 303 & 7,871,916 & 7,872,219 \\
Write \% & 0.01\% & 2.82\% & 2.80\% \\ \hline
Combined Protocol Operations & 2,421,214 & 278,998,472 & 281,419,686 \\
Combined Protocols \% & 0.86\% & 99.14\% & 100\% \\ \hline
\end{tabular}
\caption{\label{tbl:SMBCommands}Percentage of SMB and SMB2 protocol commands for the period of April 30th, 2019 to May 20th, 2019}
\vspace{-1em}
\end{table}

\begin{table}[]
\centering
\begin{tabular}{|l|c|c|c|}
\hline
SMB2 General Operation & \multicolumn{2}{|c|}{Occurrences} & Percentage of Total \\ \hline
Close & \multicolumn{2}{|c|}{80,114,256} & 28.71\% \\
Tree Connect & \multicolumn{2}{|c|}{48,414,491} & 17.35\% \\
Query Info & \multicolumn{2}{|c|}{27,155,528} & 9.73\% \\
Negotiate & \multicolumn{2}{|c|}{25,276,447} & 9.06\% \\
Tree Disconnect & \multicolumn{2}{|c|}{9,773,361} & 3.5\% \\
IOCtl & \multicolumn{2}{|c|}{4,475,494} & 1.6\% \\
Set Info & \multicolumn{2}{|c|}{4,447,218} & 1.59\% \\
Query Directory & \multicolumn{2}{|c|}{3,443,491} & 1.23\% \\
Session Setup & \multicolumn{2}{|c|}{2,041,208} & 0.73\% \\
Lock & \multicolumn{2}{|c|}{1,389,250} & 0.5\% \\
Flush & \multicolumn{2}{|c|}{972,790} & 0.35\% \\
Change Notify & \multicolumn{2}{|c|}{612,850} & 0.22\% \\
Logoff & \multicolumn{2}{|c|}{143,592} & 0.05\% \\
Oplock Break & \multicolumn{2}{|c|}{22,397} & 0.008\% \\
Echo & \multicolumn{2}{|c|}{4,715} & 0.002\% \\
Cancel & \multicolumn{2}{|c|}{0} & 0.00\% \\
\hline
\end{tabular}
\caption{\label{tbl:SMBCommands2}Breakdown of general operations for SMB2 from April 30th, 2019 to May 20th, 2019.}
\vspace{-2em}
\end{table}

\subsection{I/O Data Request Sizes}
482
%\textcolor{red}{Figures~\ref{fig:IO-All} and~\ref{fig:IO-R+W} show the amount of I/O in 15-minute periods during the week of March 12-18, 2017.
483
%The general I/O (GIO) value is representative of I/O that does not include read, write, or create actions. For the most part, these general I/O are mostly metadata operations. As one can see in Figure~\ref{fig:IO-All}, the general I/O dominates any of the read or write operations. Figure~\ref{fig:IO-R+W} is a magnification of the read and write I/O from Figure~\ref{fig:IO-All}. Here we see that the majority of I/O operations belong to reads. There are some spikes where more write I/O occur, but these events are in the minority. One should also notice that, as would be expected, the spikes of I/O activity occur around the center of the day (e.g. 8am to 8pm), and during the week (March 12 was a Sunday and March 18 was a Saturday).}
484
485
%\begin{figure}
486
% \includegraphics[width=0.5\textwidth]{./images/AIO.pdf}
487
% \caption{All I/O}
488
% \label{fig:IO-All}
489
%\end{figure}
490
%\begin{figure}
491
% \includegraphics[width=0.5\textwidth]{./images/RWIO-win.pdf}
492
% \caption{Read and Write I/O}
493
% \label{fig:IO-R+W}
494
%\end{figure}
Jan 16, 2020
495
Each SMB read and write command is associated with a data request size that indicates how many bytes are to be read or written as part of that command.
Figure~\ref{fig:SMB-Bytes-IO} shows the probability density function (PDF) of the different sizes of bytes transferred for read and write I/O operations. The most noticeable aspect of these graphs is that the majority of bytes transferred for read and write operations is around 64 bytes. It is worth noting that write I/Os also have a larger number of very small transfer amounts. This is unexpected in terms of the amount of data passed in a frame.
Part of the reason is a large number of long-running scripts that only require small but frequent updates; we observed several running scripts creating a large volume of files. A more significant reason is that we noticed Microsoft Word would perform a large number of small reads at ever-growing offsets. We interpreted this as Word loading the next few lines of text as a user viewing a document over the network scrolled down, causing ``loading times'' amid use. Finally, a large degree of small writes were observed to be related to application cookies or other such small data communications.
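
For reference, the PDF and CDF curves of Figure~\ref{fig:SMB-Bytes-IO} can be reproduced from the per-command transfer sizes along these lines (a sketch using numpy; \texttt{sizes} stands for whatever list of request sizes the analysis step emits):

\begin{lstlisting}[language=Python]
import numpy as np

def pdf_cdf(sizes, n_bins=100):
    # Empirical PDF/CDF over transfer sizes (bytes per R/W command).
    counts, edges = np.histogram(sizes, bins=n_bins)
    pdf = counts / counts.sum()   # fraction of commands in each bin
    cdf = np.cumsum(pdf)          # cumulative fraction up to each bin
    return edges[1:], pdf, cdf

# e.g.: bins, pdf, cdf = pdf_cdf(write_sizes)  # write_sizes: list of ints
\end{lstlisting}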
%This could also be attributed to simple reads relating to metadata
%\textcolor{blue}{Reviewing the SMB and SMB2 specifications leads to some confusion in understanding this behavior. According to the specification the default ``MaxBuffSize'' for reads and writes should be between 4,356 bytes and 16,644 bytes depending on the use of either a client version or server version of Windows, respectively. In the SMB2 protocol specification, specific versions of Windows (e.g. Vista SP1, Server 2008, 7, Server 2008 R2, 8, Server 2012, 8.1, Server 2012 R2) disconnect if the ``MaxReadSize''/``MaxWriteSize'' value is less than 4096. However, further examination of the specification states that for SMB2 the read length and write length can be zero. Thus, this seems to conflict with the requirement that the size be greater than 4096 while also allowing it to be zero. It is this allowance of zero in the protocol specification that supports the smaller read/write sizes seen in the captured traffic. Our assumption here is that the university's configuration allows for smaller traffic to be exchanged without the disconnection for sizes smaller than 4096.}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{./images/smb_2019_bytes_pdf.png}
\vspace{-2em}
\caption{PDF and CDF of Bytes Transferred for Read and Write I/O}
\label{fig:SMB-Bytes-IO}
\vspace{-1em}
\end{figure}
Figure~\ref{fig:SMB-Bytes-IO} also shows the cumulative distribution functions (CDF) for bytes read and bytes written. As can be seen, almost no read transfer sizes are less than 32 bytes, whereas 20\% of the writes are smaller than 32 bytes. Table~\ref{fig:transferSizes} shows a tabular view of this data. For reads, $34.97$\% are between 64 and 512 bytes, with another $28.86$\% at 64-byte request sizes. A negligible percentage of read requests are larger than 512 bytes.
The read sizes we observe are roughly a factor of four smaller than those observed by Leung et al.
Leung et al. showed that $60$-$70$\% of writes were less than 4K in size and $90$\% were less than 64K in size. In our data, however, we see that almost all writes are less than 1K in size. In fact, $11.16$\% of writes are less than 4 bytes, $52.41$\% are 64-byte requests, and $43.63$\% of requests are less than 64 bytes.
In the ten years since the last study, it is clear that writes have become significantly smaller. In our analysis of a subset of the writes, we found that a significant part of the write profile was writes to cookies, which are necessarily small files. The preponderance of web applications and the associated tracking is a major change in how computers and data storage are used compared to a decade ago. These small data reads and writes significantly alter the assumptions that most network storage systems are designed for.
%This may be explained by the fact that large files, and multiple files, are being written as standardized blocks more fitting to the frequent update of larger data-sets and disk space available. This could be an effort to improve the fidelity of data across the network, allow for better realtime data consistency between client and backup locations, or could just be due to a large number of scripts being run that create and update a series of relatively smaller documents.

\begin{table}[]
\centering
\begin{tabular}{|l|c|c|}
\hline
Transfer size (bytes) & Reads & Writes \\ \hline
$< 4$ & 0.098\% & 11.16\% \\
$= 4$ & 1.16\% & 4.13\% \\
$>4, < 64$ & 34.89\% & 28.14\% \\
$= 64$ & 28.86\% & 52.41\% \\
$>64, < 512$ & 34.97\% & 4.15\% \\
$= 512$ & 0.002\% & 2.54e-5\% \\
$= 1024$ & 1.22e-5\% & 3.81e-5\% \\ \hline
\end{tabular}
\caption{\label{fig:transferSizes}Percentage of transfer sizes for reads and writes}
\vspace{-2em}
\end{table}

In comparing the read, write, and create operations, we found that the vast majority of I/O operations are creates. Given that there are so many creates, it seems apparent that many applications create new files rather than updating existing files when files are modified. Furthermore, read operations account for the largest aggregate of bytes transferred over the network. However, the number of bytes transferred by write commands is not far behind, although, non-intuitively, it includes a larger number of standardized, relatively small writes. The most unexpected finding of the data is that all the reads and writes are performed using much smaller buffers than expected; about an order of magnitude smaller (e.g. bytes instead of kilobytes).
%The following is a list of data collected and why:
611
%\begin{itemize}
612
% \item TID-to-IP map: with the hashing, the only way to maintain mapping of `share-types' (i.e. share-paths) to TIDs is via IP (reverse DNS).
613
% \item FID Data: holds the number of reads, writes, and size of the FID (tracked) for which this information is tracked (per FID).
614
% \item Tuple Data: holds the reads and writes performed by a seen tuple (per tuple) along with by the tuple and FID's data.
615
% \item TID Data: holds the number of reads, writes, creates, and total I/O events along with the last time each/any command was seen. Maps are kept of the buffs seen, general IAT, read IAT, write IAT, create IATs.
616
% \item Tuple Info: Tracking the tuples seen along with a map to that tuple's (per tuple) data.
617
% \item Oplock Data: Tracks the different types of oplocks that are seen per 15 minutes.
618
% \item Read/Write Buff: Maps that are used to track the different sized buffers used for Read/Write commands.
619
% \item `filesizeMap': Used for track the different sized buffers to pass data along the network (generic and all inclusive; ie. tuple level data).
620
% \item I/O Events: Track the number of I/O events seen in 15 minute periods. I/Os include - read, write, create, general.
621
%\end{itemize}
622
623
\subsection{I/O Response Times}
%~!~ Addition since Chandy writing ~!~%
Most previous tracing work has not reported I/O response times or command latency. Response time is generally proportional to the size of the data request, but under load it also gives an indication of server load. In
Table~\ref{tbl:PercentageTraceSummary} we show a summary of the response times for read, write, create, and general commands. We note that general (metadata) operations occur at high frequency but run relatively slowly.
We also observe that the number of writes is very close to the number of reads. The response time for write operations is very small, most likely because the storage server caches the write without actually committing it to disk. Reads, on the other hand, are in most cases unlikely to hit in the cache and require an actual read from the storage media. Although read operations are only a small percentage of all operations, they have the highest average response time. Creates happen more frequently than writes, but have a slightly higher response time because of the extra metadata operations required for a create as opposed to a simple write.
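As a concrete illustration of how such statistics can be derived, the following sketch pairs SMB2 requests with their responses (which share a MessageId) to compute response times, and tracks request spacing for inter-arrival times. It is a minimal sketch over hypothetical, pre-parsed trace records; the field names (\texttt{ts}, \texttt{msg\_id}, \texttt{is\_response}) are illustrative and not taken from our tracing code.
\begin{lstlisting}[language=Python]
def response_and_interarrival_times(events):
    # events: hypothetical, time-ordered list of dicts with keys
    # 'ts' (seconds), 'msg_id' (SMB2 MessageId), and 'is_response'.
    pending = {}          # msg_id -> request timestamp
    rts, iats = [], []
    last_req_ts = None
    for ev in events:
        if ev['is_response']:
            t0 = pending.pop(ev['msg_id'], None)
            if t0 is not None:
                rts.append(ev['ts'] - t0)            # response time
        else:
            if last_req_ts is not None:
                iats.append(ev['ts'] - last_req_ts)  # inter-arrival time
            last_req_ts = ev['ts']
            pending[ev['msg_id']] = ev['ts']
    return rts, iats
\end{lstlisting}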

% Note: RT + IAT time CDFs exist in data output

% IAT information

%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_general_iats_cdf.png}
% \caption{CDF of Inter Arrival Time for General I/O}
% \label{fig:CDF-IAT-General}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_general_iats_pdf.png}
% \caption{PDF of Inter Arrival Time for General I/O}
% \label{fig:PDF-IAT-General}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_general_rts_cdf.png}
% \caption{CDF of Response Time for General I/O}
% \label{fig:CDF-RT-General}
% \vspace{-2em}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_general_rts_pdf.png}
% \caption{PDF of Response Time for General I/O}
% \label{fig:PDF-RT-General}
% \vspace{-2em}
%\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{./images/smb_2019_iats_cdf.png}
\caption{CDF of Inter-Arrival Time for SMB I/O}
\label{fig:CDF-IAT-SMB}
%\vspace{-2em}
\end{figure}

\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{./images/smb_2019_iats_pdf.png}
\caption{PDF of Inter-Arrival Time for SMB I/O}
\label{fig:PDF-IAT-SMB}
\vspace{-1em}
\end{figure}

\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{./images/smb_2019_rts_cdf.png}
\caption{CDF of Response Time for SMB I/O}
\label{fig:CDF-RT-SMB}
%\vspace{-2em}
\end{figure}

\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{./images/smb_2019_rts_pdf.png}
\caption{PDF of Response Time for SMB I/O}
\label{fig:PDF-RT-SMB}
\vspace{-1em}
\end{figure}

\begin{table}[]
\centering
\begin{tabular}{|l|r|r|r|r|}
\hline
 & Reads & Writes & Creates & General \\ \hline
I/O \% & 2.97 & 2.80 & 19.36 & 74.87 \\ \hline
Avg RT ($\mu$s) & 59,819.7 & 519.7 & 698.1 & 7,013.4 \\ \hline
Avg IAT ($\mu$s) & 33,220.8 & 35,260.4 & 5,094.5 & 1,317.4 \\ \hline
%\hline
%Total RT (s) & 224248 & 41100 & 342251 & 131495 \\ \hline
%\% Total RT & 30.34\% & 5.56\% & 46.3\% & 17.79\% \\ \hline
\end{tabular}
\caption{Summary of Trace Statistics: Average Response Time (RT) and Inter-Arrival Time (IAT)}
\label{tbl:PercentageTraceSummary}
\vspace{-2em}
\end{table}

%\begin{table}[]
%\centering
%\begin{tabular}{|l|l|l|l|l|l|}
%\hline
% & Reads & Writes & Creates & General R-W \\ \hline
%Total RT (ms) & 224248442 & \multicolumn{1}{l|}{41100075} & \multicolumn{1}{l|}{342251439} & \multicolumn{1}{l|}{131495153} & \multicolumn{1}{l|}{258573201} \\ \hline
%\% Total RT & 30.34\% & \multicolumn{1}{l|}{5.56\%} & \multicolumn{1}{l|}{46.3\%} & \multicolumn{1}{l|}{17.79\%} & \multicolumn{1}{l|}{34.99\%} \\ \hline
%\end{tabular}
%\caption{Summary of Response Time (RT) Statistics: Total RT and Percentage RT per Operation}
%\label{tbl:PercentageRTSummary}
%\end{table}

%\textcolor{red}{To get an indication of how much of an effect these general commands take on overall latency, we also calculated the total aggregate response time for read, write, create, and general operations. We see that even though general commands account for $74.87$\% of all commands, they only account for only $17.8$\% of the total response time. Thus, while the volume of general operations does not present an extraordinary burden on server load, reducing these operations can present a clear performance benefit. We also see that creates take the most amount of time ($46.3$\%) of the total response time for all operations. As seen in Table~\ref{tbl:SMBCommands}, the majority of general operations are negotiations while $28.71$\% are closes; which relate to create operations.
%This shows that while creates are only $5.08$\% on March 15th (and $2.5$\% of the week's operations shown in Table~\ref{tbl:PercentageTraceSummary}) of the total operations performed, they are responsible for $46.3$\% of the time spent performing network I/O.}
%\textbf{Do we need this above data piece?}
%
%% Not Needed to Say Since we have no data
%%One key observation is that there were no inter arrival time calculations for read, write, or create operations. We interpret this data to reflect the observations of Leung et.~al.~that noticed that files are interacted with only a few times and then not interacted with again. Extrapolating this concept, we interpret the data to illustrate that files may be read or written once, but then are not examined or interacted with again.
%%\textcolor{blue}{This was entirely unexpected and was discovered as a result of our original assumptions made based on what scope we believed to be the best interpretation of user activity on the network filesystem.}
%
%%\begin{table}[]
%%\centering
%%\begin{tabular}{|l|l|}
%%\hline
%% & Count \\ \hline
%%Sessions & 122 \\ \hline
%%Non-Sessions & 2 \\ \hline
%%\end{tabular}
%%\caption{Summary of Maximum Session and Non-Session Seen}
%%\label{tbl:Counts}
%%\end{table}
%%
%%\textcolor{red}{Not sure if presenting a count of the number of sessions seen is important or worth showing.}
%
%%\begin{table}[]
%%\centering
%%\begin{tabular}{|l|l|l|}
%%\hline
%% & Reads & Writes \\ \hline
%%Average & 27167.76 B & 106961.36 B \\ \hline
%%Percentage & 99.4\% & 0.6\% \\ \hline
%%\end{tabular}
%%\caption{Summary of Bytes Transferred Over the Network}
%%\label{tbl:Bytes}
%%\end{table}
%
%%\textcolor{red}{Reference the large single table instead}
%%Table~\ref{tbl:TraceSummary} shows our findings relating to the total number of bytes transferred over the network due to Read and Write operations. Mimicing the findings from Figure~\ref{fig:Agg-AvgBytes}, the table shows that while the percentage of total bytes passed over the network is dominated by Read operations the average bytes pushed by Write operations is of a magnitude greater.
%
%%Tables to be included:
%%\begin{enumerate}
%% \item Return Times:
%% \begin{itemize}
%% \item General
%% \item Read
%% \item Write
%% \item Create
%% \item Read+Write
%% \end{itemize}
%% \item Inter Arrival Times
%% \begin{itemize}
%% \item General
%% \item Read
%% \item Write
%% \item Create
%% \item Read+Write
%% \end{itemize}
%% \item Bytes per Request (Bytes Over Network)
%% \begin{itemize}
%% \item Read
%% \item Write
%% \item Read+Write
%% \end{itemize}
%%\end{enumerate}
%%Modeling to include:
%%\begin{enumerate}
%% \item Inter Arrival Time CDF
%% \begin{itemize}
%% \item Read
%% \item Write
%% \item Read+Write
%% \end{itemize}
%%\end{enumerate}
%
Figures~\ref{fig:CDF-IAT-SMB} and~\ref{fig:PDF-IAT-SMB} show the CDF and PDF of the SMB command inter-arrival times. As can be seen, SMB commands arrive very frequently: $85$\% of commands are issued less than 1000~$\mu s$ apart. As mentioned above, SMB is known to be very chatty, and it is clear that servers must spend a significant amount of time dealing with these commands. Most of these commands are also serviced fairly quickly, as seen in Figures~\ref{fig:CDF-RT-SMB} and~\ref{fig:PDF-RT-SMB}. Interestingly, the response time for the general metadata operations follows a similar curve to the inter-arrival times.
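The CDFs themselves are straightforward to reproduce from the raw timing samples. A minimal sketch, assuming an array of inter-arrival (or response) times in microseconds, using NumPy and Matplotlib:
\begin{lstlisting}[language=Python]
import numpy as np
import matplotlib.pyplot as plt

def plot_cdf(samples_us, label):
    # Empirical CDF: sort the samples and plot the cumulative
    # fraction of samples at or below each value.
    x = np.sort(np.asarray(samples_us))
    y = np.arange(1, len(x) + 1) / len(x)
    plt.step(x, y, where='post', label=label)
    plt.xscale('log')  # IATs and RTs span several decades
    plt.xlabel(r'Time ($\mu$s)')
    plt.ylabel('Cumulative fraction')
    plt.legend()
\end{lstlisting}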
%Next we examine the response time (RT) of the read, write, and create I/O operations that occur over the SMB network filesystem.
The response time for write operations (shown in Figure~\ref{fig:CDF-RT-SMB}) does not follow a step function like the bytes-written CDF in Figure~\ref{fig:SMB-Bytes-IO}. This is understandable, as the response time for a write is a more standardized action and not necessarily proportional to the number of bytes written. The read response time, however, is smoother than the bytes-read CDF (Figure~\ref{fig:SMB-Bytes-IO}). This is most likely because some of the reads are satisfied by server caches, eliminating some long access times to persistent storage.
One should notice, though, that the response time of read operations grows at a rate similar to that of write operations. This again suggests a form of standardization in the communication patterns, although some read I/Os take far longer because larger amounts of read data are sent over several standard-sized packets.
%While the RT for Write operations are not included (due to their step function behavior) Figure~\ref{fig:CDF-RT-Read} and Figure~\ref{fig:CDF-RT-RW} show the response times for Read and Read+Write operations respectively. T
%\textcolor{red}{The write I/O step function behavior is somewhat visible in the CDF of both reads and writes in Figures~\ref{fig:CDF-RT-Read}~and~\ref{fig:CDF-RT-Write}. Moreover, this shows that the majority ($80$\%) of read (and write) operations occur within 2~$ms$, the average access time for enterprise storage disks. As would be expected, this is still an order of magnitude greater than the general I/O.}

%\begin{figure}[tp!]
% \includegraphics[width=0.5\textwidth]{./images/smb_read_iats_cdf.png}
% \vspace{-2em}
% \caption{CDF of Inter Arrival Time for Read I/O}
% \label{fig:CDF-IAT-Read}
%\end{figure}
%
%\begin{figure}[tp!]
% \includegraphics[width=0.5\textwidth]{./images/smb_read_iats_pdf.png}
% \vspace{-2em}
% \caption{PDF of Inter Arrival Time for Read I/O}
% \label{fig:PDF-IAT-Read}
%\end{figure}
%
%\begin{figure}[tp!]
% \includegraphics[width=0.5\textwidth]{./images/smb_read_rts_cdf.png}
% \vspace{-2em}
% \caption{CDF of Response Time for Read I/O}
% \label{fig:CDF-RT-Read}
%% \vspace{-2em}
%\end{figure}
%
%\begin{figure}[tp!]
% \includegraphics[width=0.5\textwidth]{./images/smb_read_rts_pdf.png}
% \vspace{-2em}
% \caption{PDF of Response Time for Read I/O}
% \label{fig:PDF-RT-Read}
%% \vspace{-2em}
%\end{figure}
% RTs information
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_write_iats_cdf.png}
% \vspace{-2em}
% \caption{CDF of Inter Arrival Time for Write I/O}
% \label{fig:CDF-IAT-Write}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_write_iats_pdf.png}
% \vspace{-2em}
% \caption{PDF of Inter Arrival Time for Write I/O}
% \label{fig:PDF-IAT-Write}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_write_rts_cdf.png}
% \vspace{-2em}
% \caption{CDF of Return Time for Write IO}
% \label{fig:CDF-RT-Write}
%% \vspace{-2em}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_write_rts_pdf.png}
% \vspace{-2em}
% \caption{PDF of Return Time for Write IO}
% \label{fig:PDF-RT-Write}
%% \vspace{-2em}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_create_iats_cdf.png}
% \caption{CDF of Inter Arrival Time for Create I/O}
% \vspace{-2em}
% \label{fig:CDF-IAT-Create}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_create_iats_pdf.png}
% \vspace{-2em}
% \caption{PDF of Inter Arrival Time for Create I/O}
% \label{fig:PDF-IAT-Create}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_create_rts_cdf.png}
% \vspace{-2em}
% \caption{CDF of Response Time for Create I/O}
% \label{fig:CDF-RT-Create}
%% \vspace{-2em}
%\end{figure}
%
%\begin{figure}[t!]
% \includegraphics[width=0.5\textwidth]{./images/smb_create_rts_pdf.png}
% \vspace{-2em}
% \caption{PDF of Response Time for Create I/O}
% \label{fig:PDF-RT-Create}
%% \vspace{-2em}
%\end{figure}

%\begin{figure}
% \includegraphics[width=0.5\textwidth]{./images/CDF-ioRT-win.pdf}
% \caption{CDF of Response Time for Read+Write I/ O}
% \label{fig:CDF-RT-RW}
%\end{figure}

%\begin{figure}
% \includegraphics[width=0.5\textwidth]{./images/CDF-rBuff-win.pdf}
% \caption{CDF of Bytes Transferred for Read IO}
% \label{fig:CDF-Bytes-Read}
%\end{figure}

%\begin{figure}
% \includegraphics[width=0.5\textwidth]{./images/CDF-wBuff-win.pdf}
% \caption{CDF of Bytes Transferred for Write IO}
% \label{fig:CDF-Bytes-Write}
%\end{figure}

%\begin{figure}
% \includegraphics[width=0.5\textwidth]{./images/CDF-ioBuff-win.pdf}
% \caption{CDF of Bytes Transferred for Read+Write IO}
% \label{fig:CDF-Bytes-RW}
%\end{figure}

\subsection{File Extensions}
Tables~\ref{tab:top10SMB2FileExts} and~\ref{tab:commonSMB2FileExts} summarize the file extensions seen in the SMB2 traffic during the three-week capture period, as extracted from the \textit{smb2.filename} field. The easier to interpret is Table~\ref{tab:commonSMB2FileExts}, which shows the counts of common file extensions (e.g., doc, ppt, xls, pdf) found in the data.
%The greatest point of note is that the highest percentage is ``.xml'' with $0.54$\%, which is found to be surprising result.
Originally, we expected these common file extensions to account for a much larger share of the traffic. However, as seen in Table~\ref{tab:commonSMB2FileExts}, they represent less than $2$\% of the files seen. The top ten extensions (Table~\ref{tab:top10SMB2FileExts}) comprise approximately $84$\% of the total.
Furthermore, the majority of extensions are not readily identified. Upon closer examination of the tracing system, it was determined that
%these file extensions are an artifact of how Windows interprets file extensions. The Windows operating system merely guesses the file type based on the assumed extension (e.g. whatever characters follow after the final `.').
many files simply do not have a valid extension. These range from Linux library files and manual pages to odd naming schemes used by scripts and backup files, as well as date-times and IP addresses used as file names. There are undoubtedly more variations, but an exhaustive determination of them all is out of scope for this work.
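A simple way to reproduce these tallies is to treat, as Windows does, whatever follows the final `.' as the extension. A minimal sketch follows; the \texttt{COMMON} set and the input list are illustrative assumptions, not our actual classification:
\begin{lstlisting}[language=Python]
import os
from collections import Counter

# Illustrative set of "common" office-style extensions; the actual
# list behind the common-extension table may differ.
COMMON = {'doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'pdf', 'txt'}

def tally_extensions(filenames):
    # Mimic Windows: the apparent extension is whatever follows the
    # final '.'; backup names, timestamps, and IP addresses in file
    # names therefore yield spurious "extensions".
    tally = Counter()
    for name in filenames:
        ext = os.path.splitext(name)[1].lstrip('.')
        tally[ext if ext else '<No Extension>'] += 1
    return tally

# Example usage with a hypothetical list of smb2.filename values:
# t = tally_extensions(smb2_filenames)
# pct_common = 100.0 * sum(t[e] for e in COMMON) / sum(t.values())
\end{lstlisting}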
%\textcolor{red}{Add in information stating that the type of OS in use in the university environment range from Windows, Unix, BSD, as well as other odd operating systems used by the engineering department.}
\begin{table}[]
\centering
\begin{tabular}{|l|l|l|}
\hline
SMB2 Filename Extension & Occurrences & Percentage of Total (\%) \\ \hline
-Travel & 33,396,147 & 15.26 \\
o & 28,670,784 & 13.1 \\
e & 28,606,421 & 13.07 \\
N & 27,639,457 & 12.63 \\
one & 27,615,505 & 12.62 \\
\textless{}No Extension\textgreater{} & 27,613,845 & 12.62 \\
d & 2,799,799 & 1.28 \\
l & 2,321,338 & 1.06 \\
x & 2,108,279 & 0.96 \\
h & 2,019,714 & 0.92 \\ \hline
\end{tabular}
\caption{Top 10 File Extensions Seen Over Three-Week Period}
\label{tab:top10SMB2FileExts}