views: 33
answers: 2

Can somebody please explain why we need this error at all? It is really strange: when you use one compiler the program works perfectly, but with another it crashes. Who invented this thing?

A: 

Generally, I would say that a lack of a newline at the end of a source file means something went wrong in the editor or source code control client and not all of the code in the buffer got flushed. While this would likely result in other errors, knowing that something probably went wrong in the editor/SCM and that code may be missing is a pretty useful bit of knowledge. Certainly something I would want to check; a quick way to do so is sketched below.
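
As an illustration, here is a minimal C sketch of such a check (the helper name and the reliance on fseek with SEEK_END are my own assumptions, not anything from a particular editor or SCM):

    #include <stdio.h>

    /* Hypothetical helper: returns 1 if the file ends with '\n',
       0 if not, -1 on error or empty file. Seeking to one byte
       before the end via SEEK_END works on POSIX systems. */
    int ends_with_newline(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        if (fseek(f, -1L, SEEK_END) != 0) {  /* empty file or seek error */
            fclose(f);
            return -1;
        }
        int last = fgetc(f);
        fclose(f);
        return last == '\n';
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            printf("%s ends with newline: %d\n",
                   argv[1], ends_with_newline(argv[1]));
        return 0;
    }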

tvanfosson
Technically, such an error should be detected/prevented by the SCM, though. And since with source code the chance that a truncation happens to land exactly on a line end isn't *that* small, it wouldn't be a particularly useful indicator anyway. In any case, I'd not trust an SCM that routinely mangles my files.
Joey
+1  A: 

Historically, at least in the Unix world, "newline", or rather U+000A LINE FEED, was a line terminator: every line, including the last, ends with one. This stands in stark contrast to the practice on Windows, for example, where CR+LF is a line separator.

A naïve way of reading every line in a file is to append characters to a buffer until an LF is encountered. Done carelessly, this ignores the last line in the file if it isn't terminated by an LF, as the sketch below shows.
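
To make the failure mode concrete, here is a minimal sketch of such a naive reader in C (the input file name is hypothetical; this is not code from any particular tool):

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("input.txt", "r");  /* hypothetical input file */
        if (!f)
            return 1;

        char buf[256];
        size_t len = 0;
        int c;

        while ((c = fgetc(f)) != EOF) {
            if (c == '\n') {
                buf[len] = '\0';
                printf("line: %s\n", buf);  /* "process" the finished line */
                len = 0;
            } else if (len < sizeof buf - 1) {
                buf[len++] = (char)c;
            }
        }
        /* The bug: if len > 0 here, a final line without a trailing LF
           was read into buf but never processed. A robust reader would
           flush the buffer after hitting EOF. */
        fclose(f);
        return 0;
    }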

Another thing to consider are macro systems that allow including files. A line such as

%include "foo.inc"

might be replaced by the contents of the named file; if that file's last line doesn't end with an LF, it gets merged with the line following the %include. And yes, I've seen this behavior with a particular macro assembler for an embedded platform. The example below illustrates it.
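
To show the mechanism (hypothetical file names and instructions, just for illustration), suppose foo.inc ends without a trailing LF:

foo.inc (no LF after the last line):

    mov ax, 1

main source:

    %include "foo.inc"
    mov bx, 2

A macro processor that does a naive textual replacement then sees the merged line

    mov ax, 1mov bx, 2

which is a syntax error if you're lucky, and a silently wrong program if you're not.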

Nowadays I firmly believe that (a) it's a relic of ancient times and (b) modern software handles a missing final newline just fine; yet we still carry around numerous editors on Unix-like systems that helpfully put a byte more than needed at the end of a file.

Joey