
Commit

added text about NetApp storage nodes
joc02012 committed Jul 1, 2020
1 parent aeeb66b commit be20fa5
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions trackingPaper.tex
@@ -293,7 +293,7 @@ The SMB 1.0 protocol~\cite{SMB1Spec} has been found to have high/significant imp
%\textcolor{red}{\textbf{Add information about SMB 2.X/3?}}


\begin{figure*}[ht!]
-\includegraphics[width=\textwidth]{./images/packetcapturetopology.png}
+\includegraphics[width=.92\textwidth]{./images/packetcapturetopology.png}
\caption{Visualization of Packet Capturing System}
\label{fig:captureTopology}
\vspace{-1em}
@@ -350,7 +350,7 @@ the university
%\textcolor{red}{UConn}
as well as personal drive share space for faculty, staff and students, along with at least one small group of users. Each server is capable of handling 1~Gb/s of traffic in each direction (e.g. outbound and inbound traffic). Altogether, the five-blade server system can in theory handle 5~Gb/s of data traffic in each direction.
%Some of these blade servers have local storage but the majority do not have any.
-The blade servers serve as SMB heads, but the actual storage is served by SAN storage nodes that sit behind them. This system does not currently implement load balancing. Instead, the servers are set up to spread the load with a static distribution across four of the active cluster nodes while the passive fifth node takes over in the case of any other nodes going down.% (e.g. become inoperable or crash).
+The blade servers serve as SMB heads, but the actual storage is served by a pair of NetApp SAN appliance nodes that sit behind the SMB heads. The NetApp storage provides 588~TB of usable disk storage fronted by 4~TB of flash cache. This system does not currently implement load balancing. Instead, the SMB servers are set up to spread the load with a static distribution across four of the active cluster nodes while the passive fifth node takes over in the case of any other nodes going down.% (e.g. become inoperable or crash).


The actual tracing was performed with a tracing server connected to a switch outfitted with a packet duplicating element as shown in the topology diagram in Figure~\ref{fig:captureTopology}. A 10~Gbps network tap was installed in the file server switch, allowing our storage server to obtain a copy of all network traffic going to the 5 file servers. The reason for using 10~Gbps hardware is to help ensure that the system is able to capture information on the network at peak theoretical throughput.
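A rough capacity check makes the 10~Gbps choice concrete; this is a sketch under the assumption (not stated in the commit) that the duplicating element mirrors both directions of all five 1~Gb/s blade links onto the single tap port:

% Back-of-the-envelope tap sizing (assumption: both directions of all
% five blade links are mirrored to one port at full line rate).
\[
  5 \times (1\,\mathrm{Gb/s} + 1\,\mathrm{Gb/s}) = 10\,\mathrm{Gb/s}
\]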


@@ -567,7 +567,7 @@ running scripts creating a large volume of files. A more significant reason was
%\end{figure}
%Figure~\ref{fig:SMB-Bytes-IO} %and~\ref{fig:CDF-Bytes-Write}
%shows cumulative distribution functions (CDF) for bytes read and bytes written.
-Additionally almost no read transfer sizes are less than 32 bytes, whereas 20\% of the writes are smaller than 32 bytes. Table~\ref{fig:transferSizes} shows a tabular view of this data. For reads, $34.97$\% are between 64 and 512 bytes, with another $28.86$\% at 64-byte request sizes. There are a negligible percentage of read requests larger than 512.
+Additionally, almost no read transfer sizes are less than 32 bytes, whereas 20\% of the writes are smaller than 32 bytes. Table~\ref{fig:transferSizes} shows a tabular view of this data. For reads, $34.97$\% are between 64 and 512 bytes, with another $28.86$\% at 64-byte request sizes. There are a negligible percentage of read requests larger than 512.
This read data differs from the size of reads observed by Leung et al. by a factor of four smaller.
%This read data is similar to what was observed by Leung et al, however at an order of magnitude smaller.
%Writes observed also differ from previous inspection of the protocol's usage. % are very different.
