I'm using a one-string-per-line array. Arrays are often as fast as or faster to update than linked lists because they have much better cache locality, and in the time of a single L1 cache miss you can already move a few dozen pointer items in an array. I would say for anything less than 10,000 items, just use an array of pointers.
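To make that concrete, here is a minimal sketch (names are my own, not from any particular editor) of a one-string-per-line buffer backed by a plain array; inserting or deleting a line just shifts the tail of pointers, which is exactly the cheap contiguous move described above:

```python
class LineBuffer:
    """One string per line, stored in a contiguous array of references."""

    def __init__(self, text=""):
        self.lines = text.split("\n")

    def insert_line(self, index, text):
        # shifts the tail of the pointer array; cheap for small files
        self.lines.insert(index, text)

    def delete_line(self, index):
        del self.lines[index]

    def text(self):
        return "\n".join(self.lines)

buf = LineBuffer("first\nsecond")
buf.insert_line(1, "inserted")
buf.delete_line(0)
```

For the file sizes a source editor usually sees, the O(n) shift is far below the cost of redrawing the screen.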
If the texts you edit are small (like hand-written source code files), it's the way to go. A lot of the meta information you need for state-of-the-art text editing can be anchored very naturally to "beginning of a line" points, most importantly syntax highlighting.
Error lines, breakpoints, profiling info, code coverage markers: all of them work per line. The line is the native structure of source code, and in some cases also of prose (if you write a word processor rather than a source code editor), but there you need to take a paragraph as the unit.
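The per-line anchoring could look something like this (field names are hypothetical, just to illustrate the idea): each line record carries its text plus all the markers that naturally live at line granularity:

```python
from dataclasses import dataclass

@dataclass
class Line:
    text: str
    syntax_state: int = 0       # lexer state at start of line, for incremental highlighting
    has_breakpoint: bool = False
    has_error: bool = False
    coverage_hits: int = 0      # profiling / code-coverage marker

# a tiny three-line buffer with a breakpoint on the middle line
lines = [Line("int main() {"), Line("    return 0;"), Line("}")]
lines[1].has_breakpoint = True
```

When a line is edited, only that record's markers need to be touched; the highlighter can restart from the stored lexer state instead of re-scanning the whole file.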
If I ever find the time to redesign my editor, I will add different buffer implementations, because on larger texts the overhead of all the line info data (about 80 bytes per line on a 32-bit system) is significant. There a gap buffer model is better; it's also much better if you don't have lines at all, for example when displaying binary files in hex mode.
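A gap buffer keeps one flat character array with a movable hole at the edit point, so there are no per-line records at all. A rough sketch (my own simplified version, not production editor code):

```python
class GapBuffer:
    """Flat character buffer with a gap at the edit point."""

    def __init__(self, text="", gap_size=16):
        self.buf = list(text) + [None] * gap_size
        self.gap_start = len(text)    # first slot of the gap
        self.gap_end = len(self.buf)  # one past the last gap slot

    def move_gap(self, pos):
        # slide the gap so it starts at character position pos;
        # cheap when consecutive edits are near each other, as they usually are
        while self.gap_start > pos:
            self.gap_start -= 1
            self.gap_end -= 1
            self.buf[self.gap_end] = self.buf[self.gap_start]
        while self.gap_start < pos:
            self.buf[self.gap_start] = self.buf[self.gap_end]
            self.gap_start += 1
            self.gap_end += 1

    def insert(self, pos, ch):
        if self.gap_start == self.gap_end:  # gap exhausted: grow it
            self.buf[self.gap_end:self.gap_end] = [None] * 16
            self.gap_end += 16
        self.move_gap(pos)
        self.buf[self.gap_start] = ch
        self.gap_start += 1

    def text(self):
        return "".join(self.buf[:self.gap_start] + self.buf[self.gap_end:])
```

Because the storage is one flat array, it works just as well for byte-oriented views like hex mode, where "lines" are only a display convention.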
And finally a third buffer model is required if you allow the user to open really large files. It's funny to see marketing bullshit (free open source projects are surprisingly worse here) about unlimited file size editing, and then once you open a 400 MB log file, the whole system becomes unresponsive. What you need here is a buffer model that doesn't load the whole file into memory first.
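One way to sketch that third model, assuming a read-mostly viewer such as a log browser: memory-map the file and materialize only the windows the view actually touches, so the OS pages data in on demand instead of the editor slurping 400 MB up front. (This is my own minimal illustration, not the author's design; a real editor also needs a write path, e.g. a piece table over the mapping.)

```python
import mmap
import os
import tempfile

class PagedFileView:
    """Read-only view that never copies the whole file into the process."""

    def __init__(self, path):
        self.f = open(path, "rb")
        self.map = mmap.mmap(self.f.fileno(), 0, access=mmap.ACCESS_READ)

    def window(self, offset, size):
        # slicing the mmap touches only the pages covering this range
        return self.map[offset:offset + size]

    def close(self):
        self.map.close()
        self.f.close()

# demo: create a file and peek at one window without reading it all
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"line one\nline two\n" * 1000)
view = PagedFileView(path)
chunk = view.window(0, 8)
view.close()
os.remove(path)
```

The important property is that opening the file is O(1) regardless of its size; cost is paid only for the regions you actually scroll to.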