I've written a jQuery function to check the total number of characters in a given textbox and pop up a warning if it exceeds the maximum (user-defined) length.
Example (greatly simplified):
function checkLength(txt, maxlength) {
    // Warn as soon as the text reaches the user-defined maximum length
    if (txt.length >= maxlength) {
        alert('Too much text entered');
        return false;
    }
    return true;
}
This works as expected, but like a good little developer I also perform server-side validation against the length of the field the entered text is assigned to (on my domain object).
It was there that I noticed that the following string:
12<RETURN>34
(where <RETURN> is the return character) has a length of 5 in JavaScript but 6 in .NET (String.Length).
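For illustration, the difference is easy to reproduce in a browser console (a minimal sketch; my assumption is that the posted form value reaches .NET with Windows "\r\n" line endings, so each return counts as two characters there):

var s = "12\n34";                              // what JavaScript sees in the textbox
console.log(s.length);                         // 5: the return is a single "\n"
console.log(s.replace(/\n/g, "\r\n").length);  // 6: "\r\n" is two characters, matching .NET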
Am I missing something here? Or will I be forced to count each return in the JavaScript string as two characters to get the 'correct' .NET count? Any pointers on how to handle this?
UPDATE: I can indeed get the correct length by doing the replace as suggested below.
function GetCurrentLength(txt) {
    // Expand each bare "\n" to the two-character "\r\n" so the
    // count matches what .NET's String.Length will report
    return txt.replace(/\n/g, "\r\n").length;
}
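With that in place, the client-side check presumably only needs the adjusted count instead of the raw one (hypothetical wiring, reusing the names from the snippets above):

if (GetCurrentLength(self.val()) >= maxlength) {
    alert('Too much text entered');
}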
However, there is more to the question than detecting the character count: once the system detects that too many characters have been entered, it should strip all excess characters away client-side (possibly more than one at a time, because users might copy-paste too much text into the textbox).
Something like:
if (textLength > maxlength) {
    self.val(text.substr(0, maxlength));
}
This, of course, gets me into trouble if I count returns double, because the wrong number of characters risks being deleted. How can I reliably delete the right number of characters?
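The best I have come up with so far is to cut at the raw maxlength and then trim one character at a time until the adjusted count fits (a sketch building on GetCurrentLength above; trimToLength is a hypothetical helper name, and this is untested):

function trimToLength(text, maxlength) {
    // The raw cut may still be too long once returns are counted
    // double, so keep trimming until the .NET-style length fits.
    var result = text.substr(0, maxlength);
    while (GetCurrentLength(result) > maxlength) {
        result = result.substr(0, result.length - 1);
    }
    return result;
}

// e.g. self.val(trimToLength(self.val(), maxlength));

Is there a cleaner way to do this?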