I am working with a SQL Server database in order to store a very long Unicode string. The field is of type 'ntext', which theoretically should be limited to 2^30 Unicode characters.
From MSDN documentation:
ntext
Variable-length Unicode data with a maximum string length of 2^30 - 1 (1,073,741,823) bytes. Storage size, in bytes, is two times the string length that is entered. The ISO synonym for ntext is national text.
I made this test:
1. Generate a 50,000-character string (one way to do this in T-SQL is sketched after these steps).
2. Run an UPDATE SQL statement:
UPDATE [table] SET Response='... 50,000 character string...' WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
3. Check the result: what is actually stored in the field at the end.
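For reference, here is one way such a string can be generated in T-SQL (just an illustrative sketch; note that REPLICATE silently caps its result at 4,000 Unicode characters unless its first argument is a max type):

-- Illustrative: build a 50,000-character Unicode string.
-- The CAST to NVARCHAR(MAX) matters: without it, REPLICATE truncates at 4,000 characters.
DECLARE @s NVARCHAR(MAX) = REPLICATE(CAST(N'a' AS NVARCHAR(MAX)), 50000);
SELECT LEN(@s) AS generated_length;  -- 50000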
The result was that the field [Response] contained only 43,679 characters. All the characters at the end of the string were thrown away.
Why does this happen? How can I fix it?
If this is really the capacity limit of this data type (ntext), which other data type can store a longer Unicode string?
The NTEXT datatype is deprecated; you should use NVARCHAR(MAX).
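If you can change the schema, the conversion is straightforward. A minimal sketch, assuming the table and column names from the question:

-- Convert the legacy ntext column in place to nvarchar(max).
ALTER TABLE [table] ALTER COLUMN Response NVARCHAR(MAX) NULL;

-- Optional: rewriting the values moves existing data out of the legacy
-- text/image page structures into the regular nvarchar(max) storage format.
UPDATE [table] SET Response = Response;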
I see two possible explanations:
1. The ODBC driver you use to connect to the database truncates the parameter value when it is too long (try running the statement from SSMS instead).
2. You write that you generate your input string. I suspect you generate CHAR(0), which is the null literal.
If the second case applies to you, make sure you cannot generate the \0 character (a query to check for it is sketched after this list).
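To test the second explanation against the data you already stored, you can search the value for an embedded null character. A sketch using the table and column from the question; the CAST is needed because CHARINDEX does not accept ntext directly, and the binary collation is used because some collations ignore NCHAR(0) in string comparisons:

-- Returns the 1-based position of the first embedded \0, or 0 if there is none.
SELECT CHARINDEX(NCHAR(0),
                 CAST(Response AS NVARCHAR(MAX)) COLLATE Latin1_General_BIN2)
       AS first_null_position
FROM [table]
WHERE ID = '593BCBC0-EC1E-4850-93B0-3A9A9EB83123'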
EDIT:
I don't know how you check the length, but keep in mind that LEN does not count trailing whitespace:
SELECT LEN('aa     ') AS length           -- 2
      ,DATALENGTH('aa     ') AS datalength -- 7
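Also note that DATALENGTH returns bytes, not characters; for Unicode (N'...') data every character counts twice, which matters when checking an ntext column:

SELECT LEN(N'aa     ') AS length            -- 2: trailing spaces still ignored
      ,DATALENGTH(N'aa     ') AS datalength -- 14: 7 characters * 2 bytes each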
The last possibility I see is that you do something like:
SELECT 'aa      aaaa'
-- result in SSMS: `aa aaaa`, so when you count the displayed characters you lose all the multiple whitespaces
Check whether the query below returns 100,000: DATALENGTH counts bytes, and ntext stores two bytes per character, so a fully stored 50,000-character string takes 100,000 bytes.
SELECT DATALENGTH(Response)
FROM [table]
WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'
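If you also want the character count from the same row, note that LEN does not accept ntext directly, so cast first; a sketch with the same names:

SELECT LEN(CAST(Response AS NVARCHAR(MAX))) AS stored_characters  -- expected: 50000
FROM [table]
WHERE ID='593BCBC0-EC1E-4850-93B0-3A9A9EB83123'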