I'm working on a program that creates 2000 directories and puts a file in each (just a 10KB or so file). I am using mkdir to make the dirs and ofstream (I tried fopen as well) to write the files to a solid-state drive (I'm doing speed tests for comparison).

When I run the code the directories are created fine, but the files stop writing after 1000 or so have been written. I've tried putting a delay before each write in case it was some kind of overload, and also tried using fopen instead of ofstream, but it always stops writing the files around the 1000th-file mark.

This is the code that writes the files and exits, telling me which file it failed on.

fsWriteFiles.open(path, ios::app);
if (!fsWriteFiles.is_open())
{
  cout << "Fail at point: " << filecount << endl;
  return 1;
}
fsWriteFiles << filecontent;
fsWriteFiles.close();

Has anyone had any experience of this, or any theories?

Here's the full code: it creates a 2-digit hex directory from a random number, then a 4-digit hex directory from another random number, and then stores a file in that directory. It exits with a 'fail at point' (a cout I've added) after writing roughly 1000 files. This indicates that it cannot create a file, but it should have already checked that the file does not exist. Sometimes it fails at count 0 after hitting the second-from-bottom line (the else clause for a file that already exists). Any help appreciated; I suspect it is to do with the files I'm trying to create already existing but somehow slipping past my file-existence check. Is there a way to get an error message for a failed file creation attempt?

#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <iostream>
#include <fstream>
#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>

using namespace std;

int main()
{
  char charpart1[3] = "";
  char charpart3[5] = "";

  char path[35] = "";
  int randomStore = 0;

  //Initialize random seed
  srand(time(NULL));
  struct stat buffer ;

  //Create output file streams
  ofstream fsWriteFiles;    
  ifstream checkforfile;

  //Loop X times
  int dircount = 0;
  while(dircount < 2000)
  {
    path[0] = '\0'; //reset the char array that holds the path

    randomStore = rand() % 255;
    sprintf(charpart1, "%.2x", randomStore);
    randomStore = rand() % 65535;
    sprintf(charpart3, "%.4x", randomStore);

    //Check if top level dir exists, create if not
    strcat(path, "fastdirs/");
    strcat(path, charpart1);
    DIR *pdir=opendir(path);

    //If the dir does not exist create it with read/write/search permissions for owner 
    // and group, and with read/search permissions for others
    if(!pdir)
      mkdir(path,  S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);

    //Check if 3rd level dir exists, create if not
    strcat(path, "/");
    strcat(path, charpart3);
    DIR *pdir3=opendir(path);

    if(!pdir3)
      mkdir(path,  S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);

    strcat(path, "/");
    strcat(path, charpart3);
    strcat(path, ".txt");
    //Write the file if it does not already exist
    checkforfile.open(path, fstream::in);

    if (checkforfile.is_open() != true)
    {
      fsWriteFiles.open(path, ios::app); 
      if(!fsWriteFiles.is_open()) 
      {
        cout << "Fail at point: " << dircount << "\n" << endl;
        return 1;
      }
      fsWriteFiles << "test";
      fsWriteFiles.flush();
      fsWriteFiles.close();

      dircount ++; //increment the file counter
    }
    else
    {
      cout << "ex";
      checkforfile.close();
    }
  } 
}
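
(For reference, the closest I can get to an actual error message after a failed open is errno/strerror - a minimal sketch, assuming a POSIX-style system; writeOrReport is just an illustrative helper, and the C++ standard doesn't guarantee that a failed ofstream::open sets errno, though in practice it usually does:)

#include <cerrno>
#include <cstring>
#include <fstream>
#include <iostream>

using namespace std;

// Illustrative helper: try to append 'content' to 'path' and report the
// OS-level reason if the open fails.
bool writeOrReport(const char *path, const char *content)
{
  ofstream out(path, ios::app);
  if (!out.is_open())
  {
    // errno here reflects the underlying open() on most POSIX systems,
    // but this is common practice rather than standard-guaranteed behaviour
    cout << "open failed for " << path << ": " << strerror(errno) << endl;
    return false;
  }
  out << content;
  return true;   // the stream is closed by the destructor (RAII)
}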
+1  A: 

Do you close the files? Sounds like you hit the file descriptor limit, which is usually 1024. For a simple check, try "ulimit -n 4096" and execute your program again.
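
If you can't use the ulimit command, something along these lines lets you inspect (and, up to the hard limit, raise) the descriptor limit from inside the program - a minimal sketch using the POSIX getrlimit/setrlimit calls:

#include <sys/resource.h>
#include <cstdio>
#include <iostream>

int main()
{
  struct rlimit rl;
  if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
  {
    perror("getrlimit");
    return 1;
  }
  std::cout << "soft limit: " << rl.rlim_cur
            << ", hard limit: " << rl.rlim_max << std::endl;

  rl.rlim_cur = rl.rlim_max;          // raise the soft limit as far as allowed
  if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
    perror("setrlimit");
  return 0;
}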

drhirsch
From the posted code: fsWriteFiles.close();
anon
That sounds like a possibility. Unfortunately I'm not permitted to use the ulimit command. Also, I am closing the file; I thought it might be that the files weren't closing quickly enough, but I tried a wait before each open and it didn't help. Thanks for your suggestions.
Columbo
@Columbo You don't need ulimit. To test this hypothesis, open 100 files but DON'T close them BEFORE your existing code. Then see if the failure occurs earlier or not.
anon
Tested. Didn't close the first 100 files and no change; it still starts failing at around 1000 files written.
Columbo
That doesn't really do the test. You need to open, but not close, 100 files that are not part of your loop before you start the loop, then see how many you can open in the loop with normal closing.
anon
I've now added a loop, before the one you can see in the code I posted, that creates 100 new files and doesn't close them. Still not working (I'm editing my original post to show all the code).
Columbo
+1  A: 

I can't see anything obviously wrong with the code as posted, but you haven't shown enough context for anyone to help much.

However, I'd be happier if you were using RAII to ensure the files were closed, by using the ofstream constructor/destructor to perform the open/close:-

for (int i = 0; i != MAX_FILES; ++i)
{
  ofstream fsWriteFiles(getFilePath(i), ios::app);

  if (!fsWriteFiles.is_open())
  {
    cout << "Fail at point: " << i << endl;
    return 1;
  }

  fsWriteFiles << filecontent;
}

Yes, you're 'constructing' ofstream objects every time round the loop, but I'd find this code a lot more maintainable.

Roddy
From his code, which doesn't show the loop, that's exactly what he's doing, with an extra (harmless) call to close().
anon
@Neil - possibly - but I wouldn't be surprised if his ofstream is outside the loop and "re-used" N times. Starting with a 'fresh' ofstream each time just feels slightly cleaner. We're also making the assumption that it's an ofstream rather than a derived class with a broken "close" method.
Roddy
I've now added the ofstream creation statement directly above the attempted open - still failure at the same 1000-file figure: if(!checkforfile.is_open()) { ofstream fsWriteFiles; fsWriteFiles.open(path, ios::app); ...
Columbo
@Columbo: PLEASE **EDIT** the original post to show more of your REAL code. We can't second-guess what you've written.
Roddy
I've added the full code with some more info.
Columbo
+5  A: 

You open directories with opendir() but never close them with closedir() - I suspect there is a resource limit there too.
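
For illustration, the existence check could be wrapped so the handle is always released (ensureDir is just a sketch of the idea; alternatively you can skip opendir entirely and call mkdir unconditionally, treating an EEXIST error as "already there"):

#include <sys/types.h>
#include <sys/stat.h>
#include <dirent.h>

// Create 'path' if it doesn't exist, without leaking a directory handle.
void ensureDir(const char *path)
{
  DIR *pdir = opendir(path);
  if (!pdir)
    mkdir(path, S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);
  else
    closedir(pdir);   // the missing call - each leaked handle uses up a file descriptor
}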

anon
I spent so much time reformatting the blasted code so I could read it that you beat me to it!
Roddy
That's it. It's working. Cheers, Neil.
Columbo
I think this shows the value of posting ACTUAL CODE when asking a question on SO.
anon
It does. In future I'll post it all rather than just the bit I think is wrong. Thanks again.
Columbo
Please post only the bit you think is wrong at first, so I don't waste my time when you were right.
Dustin Getz
+1  A: 

Just want to make sure you're aware of boost::filesystem. It might simplify your code considerably.

#include "boost/filesystem.hpp"   // includes all needed Boost.Filesystem declarations
#include <iostream>               // for std::cout
using boost::filesystem;          // for ease of tutorial presentation;
                                  //  a namespace alias is preferred practice in real code

path my_path( "some_dir/file.txt" );

remove_all( "foobar" );
create_directory( "foobar" );
ofstream file( "foobar/cheeze" );
file << "tastes good!\n";
file.close();
if ( !exists( "foobar/cheeze" ) )
  std::cout << "Something is rotten in foobar\n";
Dustin Getz
This looks very interesting, I'm going to check it out. Ta, Dustin.
Columbo
Boost is C++ and thus can use dtors. That will save you on many occasions, and would probably have saved you here.
MSalters
A: 

You have the following:

char charpart1[3] = "";
randomStore = rand() % 255;
sprintf(charpart1, "%.2x", randomStore);

Did you mean to use the "%.2f" format instead of x? "%.2x" still prints "3ff3c083", which overflows your buffer. Also, you probably want charpart1 to be of size 5, not 3, if you switch to "%.2f", because that writes [0][.][2][3][\0] (for example).

Gabe