
added text about NetApp storage nodes

joc02012 committed Jul 1, 2020
1 parent aeeb66b commit be20fa50eefbcc084a8a094a4037a9d7c504b5c8
Showing with 3 additions and 3 deletions.
  1. +3 −3 trackingPaper.tex
@@ -293,7 +293,7 @@ \section{Background}
%\textcolor{red}{\textbf{Add information about SMB 2.X/3?}}

\begin{figure*}[ht!]
-\includegraphics[width=\textwidth]{./images/packetcapturetopology.png}
+\includegraphics[width=.92\textwidth]{./images/packetcapturetopology.png}
\caption{Visualization of Packet Capturing System}
\label{fig:captureTopology}
\vspace{-1em}
@@ -350,7 +350,7 @@ \subsection{University Storage System Overview}
%\textcolor{red}{UConn}
as well as personal drive share space for faculty, staff, and students, along with at least one small group of users. Each server is capable of handling 1~Gb/s of traffic in each direction (i.e., outbound and inbound traffic). Altogether, the five-blade server system can in theory handle 5~Gb/s of data traffic in each direction.
%Some of these blade servers have local storage but the majority do not have any.
-The blade servers serve as SMB heads, but the actual storage is served by SAN storage nodes that sit behind them. This system does not currently implement load balancing. Instead, the servers are set up to spread the load with a static distribution across four of the active cluster nodes while the passive fifth node takes over in the case of any other nodes going down.% (e.g. become inoperable or crash).
+The blade servers serve as SMB heads, but the actual storage is served by a pair of NetApp SAN appliance nodes that sit behind the SMB heads. The NetApp storage provides 588~TB of usable disk storage fronted by 4~TB of flash cache. This system does not currently implement load balancing. Instead, the SMB servers are set up to spread the load with a static distribution across four of the active cluster nodes while the passive fifth node takes over in the case of any other nodes going down.% (e.g. become inoperable or crash).

The actual tracing was performed with a tracing server connected to a switch outfitted with a packet duplicating element as shown in the topology diagram in Figure~\ref{fig:captureTopology}. A 10~Gbps network tap was installed in the file server switch, allowing our storage server to obtain a copy of all network traffic going to the 5 file servers. The reason for using 10~Gbps hardware is to help ensure that the system is able to capture information on the network at peak theoretical throughput.
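The headroom argument in the paragraph above can be checked with simple arithmetic. The figures come from the text; the variable names below are illustrative only:

```python
# Back-of-the-envelope check of capture headroom, assuming the figures
# stated in the text: five blade servers at 1 Gb/s per direction, and a
# 10 Gbps tap in the file server switch.
NUM_SERVERS = 5          # blade servers behind the tap
PER_SERVER_GBPS = 1      # rated throughput per server, each direction
TAP_GBPS = 10            # capacity of the installed network tap

# Theoretical peak aggregate in one direction: 5 x 1 Gb/s = 5 Gb/s.
peak_aggregate_gbps = NUM_SERVERS * PER_SERVER_GBPS

# The 10 Gbps tap thus covers even both directions at theoretical peak.
assert TAP_GBPS >= 2 * peak_aggregate_gbps
print(peak_aggregate_gbps)  # 5
```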

@@ -567,7 +567,7 @@ \subsection{I/O Data Request Sizes}
%\end{figure}
%Figure~\ref{fig:SMB-Bytes-IO} %and~\ref{fig:CDF-Bytes-Write}
%shows cumulative distribution functions (CDF) for bytes read and bytes written.
-Additionally almost no read transfer sizes are less than 32 bytes, whereas 20\% of the writes are smaller than 32 bytes. Table~\ref{fig:transferSizes} shows a tabular view of this data. For reads, $34.97$\% are between 64 and 512 bytes, with another $28.86$\% at 64-byte request sizes. There are a negligible percentage of read requests larger than 512.
+Additionally, almost no read transfer sizes are less than 32 bytes, whereas 20\% of the writes are smaller than 32 bytes. Table~\ref{fig:transferSizes} shows a tabular view of this data. For reads, $34.97$\% are between 64 and 512 bytes, with another $28.86$\% at 64-byte request sizes. There is a negligible percentage of read requests larger than 512 bytes.
The read sizes observed here are roughly a factor of four smaller than those reported by Leung et al.
%This read data is similar to what was observed by Leung et al, however at an order of magnitude smaller.
%Writes observed also differ from previous inspection of the protocol's usage. % are very different.
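A per-bucket breakdown like the one in Table~\ref{fig:transferSizes} can be computed from a list of observed request sizes. The following is a minimal sketch, not the paper's actual analysis tooling; the function name and bucket edges are illustrative:

```python
# Sketch: percentage of request sizes falling into half-open buckets
# [0, e0), [e0, e1), ..., [e_last, inf), given sorted edges.
from bisect import bisect_right

def bucket_percentages(sizes, edges):
    """Return the percentage of `sizes` in each bucket defined by `edges`."""
    counts = [0] * (len(edges) + 1)
    for s in sizes:
        counts[bisect_right(edges, s)] += 1
    return [100.0 * c / len(sizes) for c in counts]

# Hypothetical sample: edges chosen to mirror the 32/64/512-byte cutoffs.
sample = [16, 32, 64, 100, 600]
print(bucket_percentages(sample, [32, 64, 512]))  # [20.0, 20.0, 40.0, 20.0]
```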
