While programming a little tool that analyses log files, I noticed that read-line is a bit slower than one would expect. Looking at its code, it appeared to me that reading the stream char-by-char could be the cause, so I modified it to use fgets instead:
Code:
char * readStreamLine(STREAM * stream, FILE * inStream)
{
    char buf[MAX_STRING];
    size_t l;

    openStrStream(stream, MAX_STRING, 1);
    if(fgets(buf, MAX_STRING, inStream) != NULL)
        {
        l = strlen(buf);
        /* strip a trailing LF, and the CR before it on DOS line endings;
           the l > 0 guards avoid reading buf[-1] on degenerate input */
        if(l > 0 && buf[l - 1] == 0x0A)
            {
            buf[--l] = 0;
            if(l > 0 && buf[l - 1] == 0x0D) buf[--l] = 0;
            }
        writeStreamStr(stream, buf, l);
        return(stream->buffer);
        }
    else
        {
        /* at end of file, reset the flag so the FILE stays usable */
        if(feof(inStream)) clearerr(inStream);
        return NULL;
        }
}
However, it doesn't strictly respect the original semantics of read-line with regard to newline characters. To be honest, I don't understand why they are the way they are, in particular why a newline at the end of the file has to be erased.
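For reference, the char-by-char approach being replaced looks roughly like the sketch below. This is my reconstruction, not the verbatim newLISP source (readStreamLineCharwise is just my name for it), but it shows why the original is slow: one libc call and one stream append per character.
Code:
char * readStreamLineCharwise(STREAM * stream, FILE * inStream)
{
    int chr;
    char c;
    size_t count = 0;

    openStrStream(stream, MAX_STRING, 1);
    while((chr = fgetc(inStream)) != EOF && chr != 0x0A)
        {
        if(chr == 0x0D) continue;       /* drop CR from DOS line endings */
        c = (char)chr;
        writeStreamStr(stream, &c, 1);  /* one append per character read */
        count++;
        }
    if(chr == EOF && count == 0)
        {
        /* nothing read at all: end of input */
        if(feof(inStream)) clearerr(inStream);
        return NULL;
        }
    return(stream->buffer);
}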
Also, the TRU64-specific part is missing: I don't know whether fgets handles EINTR correctly by itself on that platform. I've worked with systems plagued by a similar illness before, and unfortunately the FILE library (which was not standard, IIRC) didn't handle it very well.
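If fgets turns out not to handle EINTR on that platform, one could wrap it in a retry loop. A minimal sketch, assuming stdio buffers no partial line across the interruption (which the standard does not guarantee):
Code:
#include <errno.h>
#include <stdio.h>

/* Retry fgets when the underlying read is interrupted by a signal.
   CAVEAT: any characters fgets stored in buf before the interruption
   are overwritten on retry; that is an assumption of this sketch. */
char * fgets_eintr(char * buf, int size, FILE * fp)
{
    char * result;
    for(;;)
        {
        errno = 0;
        result = fgets(buf, size, fp);
        if(result == NULL && ferror(fp) && errno == EINTR)
            {
            clearerr(fp);   /* clear the error flag and try again */
            continue;
            }
        return result;
        }
}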
On the performance side, timings drop from 250 ms to 50 ms.
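If anyone wants to reproduce the comparison on their own logs, here is a small standalone benchmark (my own sketch, not the actual tool): it counts lines with fgetc versus fgets and times both passes with clock.
Code:
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE 2048   /* hypothetical stand-in for MAX_STRING */

/* count lines one character at a time, as the original read-line does */
static long lines_fgetc(FILE * fp)
{
    long n = 0;
    int chr;
    while((chr = fgetc(fp)) != EOF)
        if(chr == 0x0A) n++;
    return n;
}

/* count lines a buffer at a time, as the fgets version does */
static long lines_fgets(FILE * fp)
{
    long n = 0;
    char buf[BUF_SIZE];
    while(fgets(buf, BUF_SIZE, fp) != NULL)
        if(strchr(buf, 0x0A) != NULL) n++;
    return n;
}

int main(int argc, char * argv[])
{
    FILE * fp;
    clock_t t0, t1, t2;
    long a, b;

    if(argc < 2 || (fp = fopen(argv[1], "r")) == NULL) return(1);

    t0 = clock();
    a = lines_fgetc(fp);
    t1 = clock();
    rewind(fp);
    b = lines_fgets(fp);
    t2 = clock();

    printf("fgetc: %ld lines in %.0f ms\n", a, (t1 - t0) * 1000.0 / CLOCKS_PER_SEC);
    printf("fgets: %ld lines in %.0f ms\n", b, (t2 - t1) * 1000.0 / CLOCKS_PER_SEC);
    fclose(fp);
    return(0);
}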