Today I needed a stored procedure to return an XMLA string that dictates which partitions in the cube to process (based on the newest set of data that was loaded).
So I wrote my stored procedure to return a single output column: the actual XMLA query. The procedure ran perfectly on its own, but when I tried to hook it up to an SSIS Execute SQL Task, I realized there was an issue.
SSIS doesn’t see a VARCHAR(MAX)/NVARCHAR(MAX) as a string; it maps them to the text-stream data types (DT_TEXT/DT_NTEXT), which can’t be assigned to a String variable.
At first I thought maybe I could load it into an Object variable that would hold the string, but to no avail.
Luckily, the procedure I was running wasn’t very complicated, and there was only one key component I really needed the database for. So, instead of using a stored procedure to build up the XMLA query, I ended up duplicating the logic from my stored procedure in a Data Flow task and went on my merry way (a string built in memory works fine even when it is longer than 8,000 characters).
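To illustrate the workaround, here is a sketch (in Python, just to show the idea) of assembling the XMLA processing batch in memory instead of in T-SQL. The database, cube, measure group, and partition IDs are placeholders, not from the original post, and the `<Batch>`/`<Process>` structure is the standard Analysis Services processing command; adapt it to your own cube.

```python
# Sketch: build the XMLA processing command as an in-memory string.
# All IDs below (SalesDB, SalesCube, FactSales, Sales_2013_NN) are
# hypothetical placeholders. An in-memory string has no 8000-character
# ceiling, unlike careless VARCHAR concatenation on the database side.

def build_process_xmla(database_id, cube_id, measure_group_id, partition_ids):
    """Return an XMLA Batch that fully processes the given partitions."""
    process_blocks = []
    for partition_id in partition_ids:
        process_blocks.append(
            "  <Process>\n"
            "    <Object>\n"
            f"      <DatabaseID>{database_id}</DatabaseID>\n"
            f"      <CubeID>{cube_id}</CubeID>\n"
            f"      <MeasureGroupID>{measure_group_id}</MeasureGroupID>\n"
            f"      <PartitionID>{partition_id}</PartitionID>\n"
            "    </Object>\n"
            "    <Type>ProcessFull</Type>\n"
            "  </Process>"
        )
    return (
        '<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">\n'
        + "\n".join(process_blocks)
        + "\n</Batch>"
    )

xmla = build_process_xmla(
    "SalesDB", "SalesCube", "FactSales",
    [f"Sales_2013_{m:02d}" for m in range(1, 13)],
)
```

With enough partitions in the list, the resulting string sails right past 8,000 characters without any of the truncation you have to guard against in T-SQL.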
I ran into a weird issue this morning while trying to load some segment files.
The Data File Preview returned the 58 rows I expected.
The Data Viewer returned 57 rows, which was not expected.
I tried redirecting any error rows to a flat file, but no success: no rows were being emitted to it.
I checked in the warnings and saw the following:
[File Connection ] Warning: There is a partial row at the end of the file.
I double- and triple-checked that every row had all of its columns (or rather, commas) present. They were all there (why wouldn’t they be? Excel makes valid CSV files).
When I tried adding an empty row (a row with 13 commas), it was appended to the last column of the previous row, which seemed very odd to me. However, when I gave that row some text, it was captured as a new row (which then vanished, since it had become the last row).
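The behavior makes sense once you consider how a flat-file reader treats the row delimiter: a row only counts once its delimiter is seen, so whatever trails the last delimiter is a “partial row” and never gets emitted. A toy illustration (not SSIS itself, and assuming a {CR}{LF} delimiter):

```python
# Toy flat-file reader: rows are only emitted once terminated by the
# configured row delimiter. If the final line has no trailing delimiter
# (or the delimiter setting doesn't match the file at all), the last
# chunk is left over as a "partial row" and silently dropped.

def read_rows(data, row_delimiter="\r\n"):
    complete, _, leftover = data.rpartition(row_delimiter)
    rows = complete.split(row_delimiter) if complete else []
    return rows, leftover  # leftover is the partial row, never emitted

# Three lines of data, but the last one is missing its delimiter:
rows, partial = read_rows("a,1\r\nb,2\r\nc,3")
# rows == ["a,1", "b,2"], partial == "c,3" -- one row "randomly" vanishes
```

That is exactly the symptom above: 58 lines in the file, 57 complete rows out, and no error row to redirect because nothing technically failed.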
It turns out the row delimiter was messed up: it said “_x003C_none_x003E_” instead of the normal “<none>”.
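That strange value is actually “<none>” with the angle brackets escaped as `_xHHHH_` (the four-digit hex Unicode code point), the encoding SQL Server and SSIS use for characters that aren’t legal in XML names. A small sketch of the decoding, to show they really are the same value:

```python
import re

# Decode the _xHHHH_ escape (hex Unicode code point) that SSIS / SQL Server
# use for characters that are not allowed in XML names.
def decode_xml_name(name):
    return re.sub(
        r"_x([0-9A-Fa-f]{4})_",
        lambda m: chr(int(m.group(1), 16)),
        name,
    )

print(decode_xml_name("_x003C_none_x003E_"))  # -> <none>
```

So the connection manager hadn’t picked up a new delimiter; the old one had simply been saved in its escaped form and no longer matched anything.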
After changing it back to “” it worked as expected. Very weird.
Hopefully this will save you from scratching your head and wondering why a single row is being dropped from your dataset without rhyme or reason.