I have the following code:

using (TcpClient client = new TcpClient())
{
   client.Connect(host, port);

   using (SslStream stream = new SslStream(client.GetStream(), true))
   {
      stream.AuthenticateAsClient(host);

      stream.Write(System.Text.Encoding.ASCII.GetBytes(dataToSend));

      int byteRead = 0;
      byte[] buffer = new byte[1000];

      do
      {
         byteRead = stream.Read(buffer, 0, buffer.Length);
         response += System.Text.Encoding.ASCII.GetString(buffer, 0, byteRead);
      }
      while (byteRead > 0);
   }
}

I send a string to a server, and then wait for the response.

Is this the proper way to do it?

If the server takes some time to process what I sent, will it still work or will stream.Read return 0 and exit the loop? Or if some packets from the response are lost and need to be resent, will it still work?

A: 
public string Method()
{
  m_Client = new TcpClient();
  m_Client.Connect(m_Server, m_Port);
  m_Stream = m_Client.GetStream();
  m_Writer = new StreamWriter(m_Stream);
  m_Reader = new StreamReader(m_Stream);
  m_Writer.WriteLine(request);
  m_Writer.Flush();

  return m_Reader.ReadToEnd();
}
Zote
I'd give you a -1 for this if I could. Not closing the TcpClient and the NetworkStream retrieved with `m_Client.GetStream()` is just begging for a resource leak. The original poster did it correctly with `using` statements.
Brandon
It's just an example. As the variable names suggest, they are class members; to post here I just combined two or more methods. I'm not using the code above in production.
Zote
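
Taking the comment thread into account, the same approach with deterministic cleanup could be sketched like this (host, port, and request are passed as parameters here rather than kept as class members; the method name is illustrative):

```csharp
using System.IO;
using System.Net.Sockets;

public static class Client
{
    // Sends one line to the server and reads the reply until the server
    // closes the connection.  The stacked using statements guarantee the
    // reader, writer, stream, and socket are all disposed, even on error.
    public static string SendRequest(string host, int port, string request)
    {
        using (TcpClient client = new TcpClient(host, port))
        using (NetworkStream stream = client.GetStream())
        using (StreamWriter writer = new StreamWriter(stream))
        using (StreamReader reader = new StreamReader(stream))
        {
            writer.WriteLine(request);
            writer.Flush();
            return reader.ReadToEnd();  // blocks until the remote side closes
        }
    }
}
```

Note that `ReadToEnd` only returns once the server closes the connection, so this shape fits one-shot request/response protocols, not connections that stay open.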
+2  A: 

The overall structure of your code looks right.

byteRead = stream.Read(buffer, 0, 1000); will block until at least one byte of response data arrives from the server, so a slow server will not make it return prematurely. It returns 0 only once the remote end has shut down the connection (graceful close, server-side timeout, etc.).

See the Remarks section of the SslStream.Read documentation.

Lost or reordered packets are retransmitted and reassembled by TCP itself; by the time Read returns, the data you receive is complete and in order, so you don't need to worry about them.
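
The blocking behaviour described above is easy to check against a local listener. The sketch below uses a plain NetworkStream so it runs without a certificate; SslStream wraps the same transport and its Read behaves the same way. The 500 ms sleep stands in for a slow server:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class ReadDemo
{
    public static string Run()
    {
        TcpListener listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        // Server side: pretend to be slow, then answer and close.
        Task server = Task.Run(() =>
        {
            using (TcpClient conn = listener.AcceptTcpClient())
            using (NetworkStream s = conn.GetStream())
            {
                Thread.Sleep(500);  // simulate slow processing on the server
                byte[] reply = Encoding.ASCII.GetBytes("done");
                s.Write(reply, 0, reply.Length);
            }   // Dispose closes the socket, which makes Read return 0
        });

        string response = "";
        using (TcpClient client = new TcpClient("127.0.0.1", port))
        using (NetworkStream stream = client.GetStream())
        {
            byte[] buffer = new byte[1000];
            int byteRead;
            do
            {
                // Blocks while the server is "processing"; returns 0 only
                // after the server has closed its end of the connection.
                byteRead = stream.Read(buffer, 0, buffer.Length);
                response += Encoding.ASCII.GetString(buffer, 0, byteRead);
            } while (byteRead > 0);
        }

        server.Wait();
        listener.Stop();
        return response;  // the delayed reply, collected in full
    }

    public static void Main()
    {
        Console.WriteLine(ReadDemo.Run());
    }
}
```

The client's read loop is the same one as in the question; the delay on the server side simply makes Read block longer, it never makes it return 0 early.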

Brandon