Deploying SSIS packages in Integration Services Catalog

Just noting the process for deploy/redeploy


I’m starting from a redeployment, so I’m selecting ‘import package’ on the assumption that it will just overwrite whatever is already there – my master version of the packages lives in a Visual Studio solution.




Changes to WSDL structures in SSIS

I haven’t yet managed to find a clever way to associate my SSIS tasks with a WSDL integration – in this case Salesforce – so here are some notes on what I have to update each time I need to change the structure…

The SSIS script task I run utilises the sample login routines to create access to the service, and I have generated a class library from the WSDL file I obtained through my enterprise subscription with Salesforce.

Once logged in, I need to pull full extracts of various objects, so I create a SOQL query that I pass through the querySample (sample code I found), using the SforceService.query method. I get the SOQL for each object from the Apex Data Loader desktop client, which lets you copy and paste it straight out.
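The pattern from the samples looks roughly like this – a sketch only, assuming the SforceService proxy generated from the enterprise WSDL by wsdl.exe (member names follow the Salesforce sample code; credentials and the SOQL are placeholders):

```csharp
// Sketch: log in, redirect the proxy to the returned server URL, attach the
// session header, then page through results with query/queryMore.
SforceService binding = new SforceService();
LoginResult lr = binding.login(username, password);

binding.Url = lr.serverUrl;
binding.SessionHeaderValue = new SessionHeader();
binding.SessionHeaderValue.sessionId = lr.sessionId;

// SOQL copied out of the Apex Data Loader
QueryResult qr = binding.query("SELECT Id, Name FROM Account");
while (true)
{
    foreach (sObject record in qr.records)
    {
        // write each record out to the CSV here
    }
    if (qr.done) break;
    qr = binding.queryMore(qr.queryLocator);
}
```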

The result is then consumed (in this case) into a CSV which, after some transformations, is loaded into a SQL Server database later on.

The issue, obviously, is that I created the .cs through wsdl.exe and it’s a point-in-time reference. As time goes on I need to add in new fields, and if you just update the query, it crashes when the variables aren’t described in the WSDL file. So you still need to update the query reference in the code, but I also then have to recreate the .cs file – for some reason I’m unable to generate a fresh one and reference it directly. I’m sure this can be fixed, but in the meantime I create a new one and copy and paste its content over the existing file; the build succeeds and the new object structure is recognised.



Windows 10 User rights assignment

Since upgrading to Windows 10 I’ve been plagued with problems of processes and systems failing because of insufficient user rights – including the inability to access folders and files (for example, some WSDL files I had squirreled away on my C drive; when running in SSIS I needed to copy these to an ‘accessible’ location on my desktop, which is not really sustainable).

Looking into these issues people are hacking about with all sorts of fixes, elevating user privileges on the files and folders – but this only lasts for the user session and disappears on a restart, if you can get it to work at all.

My user login is in the local admin group and had always worked before for some batch jobs being run – these had started failing, and I was able to fix it with the following steps:

  • Type ‘Local Security Policy’ into Windows search
  • Run the program as administrator
  • Choose User Rights Assignment
  • Select ‘Log on as a batch job’
  • Add my normal user
  • Save
  • Close and restart

And that fixed it. To be honest I had no idea these tools existed, and I’m convinced the answers to my other security issues are in here somewhere; I just need to find some time to look.


Cross domain tracking – some notes

Implementing cross domain tracking should be relatively straightforward; however, this note records one issue that arose when the second domain is held within an iframe embedded in the primary site.

Cross domain tracking via an auto link in GTM basically detects links to a second domain and then decorates them in such a way as to transfer the appropriate cookie values for user ids, associating the two together.

This doesn’t seem to happen automatically when you have the second domain in an iframe; you need to generate the iframe code yourself so that the decoration is included in the iframe URL:


List of other resources:

https://support.google.com/tagmanager/answer/6106951 (Cross domain in GTM)

https://support.google.com/analytics/answer/1034342?hl=en#GTM (GA setting up Cross Domain)

https://support.google.com/analytics/answer/1009612?hl=en (Setting up ecommerce tracking)

https://support.google.com/tagmanager/answer/6107169 (Ecommerce in GTM)

The debugging route I took from this (although not exhaustive) was in Chrome, using the GA debug extension: switch on the JavaScript console (Ctrl+Shift+J), load the page, and check through the GA variables passed for each of the accounts being managed (once, of course, both sites were confirmed to have the appropriate settings noted in the various Google knowledge base articles).

As the page loads, keep a log of the various cid or visitor id values for each of the properties and make sure they match across the domains.

SQL Selecting top 2 people from each company

It used to be a nightmarish situation when data was briefed, usually for a DM campaign, and they wanted you to select the ‘top 2’ from each company. Based on… ?? Well, the top 2 most senior, obviously.

I’ve now started asking for clarification so I can build it into the data pull automatically – we have clients where 25 marketing directors are all registered at the same place, so differentiating them makes sense in order to pull the data.

By using Rank over Partition in SQL, I now just get clarification on the other variables they want to order by – last log in date, number of downloads, subscription level, that sort of thing – and then run the script:

Rank() over (Partition by <GROUP BY FIELD> Order by <ORDERING FIELD> asc, <ORDERING FIELD> desc) as 'Ranking'

Then you can just stick it all in Excel and filter the Ranking column for values 1 and 2. It can then be left to the team to decide who to chop and change.
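The same top-2-per-group logic can also be sketched in C# with LINQ, should it ever need doing outside the database (the company and field names here are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Contact
{
    public string Company;
    public string Name;
    public DateTime LastLogin;
}

class TopTwoDemo
{
    static void Main()
    {
        var contacts = new List<Contact>
        {
            new Contact { Company = "Acme", Name = "A", LastLogin = new DateTime(2016, 1, 1) },
            new Contact { Company = "Acme", Name = "B", LastLogin = new DateTime(2016, 3, 1) },
            new Contact { Company = "Acme", Name = "C", LastLogin = new DateTime(2016, 2, 1) },
            new Contact { Company = "Globex", Name = "D", LastLogin = new DateTime(2016, 1, 1) },
        };

        // Equivalent of RANK() OVER (PARTITION BY Company ORDER BY LastLogin DESC)
        // filtered to ranks 1 and 2: group by company, order within each group,
        // keep the top two.
        var topTwo = contacts
            .GroupBy(c => c.Company)
            .SelectMany(g => g.OrderByDescending(c => c.LastLogin).Take(2));

        foreach (var c in topTwo)
            Console.WriteLine(c.Company + " " + c.Name);  // Acme B, Acme C, Globex D
    }
}
```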

User defined table types, stored procedure parameters and C#

Capturing references to creating user defined table types in SQL Server and then using them to write data out of API calls in C# (via SSIS custom components):

"In SQL Server 2008, a user-defined table type is a user-defined type that represents the definition of a table structure. You can use a user-defined table type to declare table-valued parameters for stored procedures or functions, or to declare table variables that you want to use in a batch or in the body of a stored procedure or function."


I’ve been using it to pass variables as parameters into stored procedures – there are some queries in forums about getting these to work, so here is my experience:

Firstly create the table type in SQL (use your own table type name and appropriate fields and data types):

CREATE TYPE dbo.TableTypeName AS TABLE
([field1] int, [field2] nvarchar(500), [field3] datetime)

And create a stored procedure that processes the data as required, using this table type as a parameter:

CREATE PROCEDURE <stored proc name>

@ctttype dbo.TableTypeName READONLY

AS
BEGIN

INSERT INTO Table_on_db ([field1],[field2],[field3])

SELECT [field1],[field2],[field3] FROM @ctttype

END

The parameter must be set to READONLY

Then create the datatable in the c# code:

DataTable dt = new DataTable();

dt.Columns.Add("field1", typeof(int));

dt.Columns.Add("field2", typeof(string));

dt.Columns.Add("field3", typeof(DateTime));

After trial and error it was easier to create the columns with typeof(variable type), as datetimes proved problematic – unless you explicitly declared a column as DateTime, it would error later trying to convert strings into datetimes. Columns default to string (or similar) if you do not state otherwise.
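A quick standalone check of that default (nothing here is specific to my component – it is just the DataTable API):

```csharp
using System;
using System.Data;

class ColumnTypeDemo
{
    static void Main()
    {
        DataTable dt = new DataTable();
        dt.Columns.Add("untyped");                  // no type supplied
        dt.Columns.Add("typed", typeof(DateTime));  // explicit DateTime

        // Columns added without a type default to string, which is what
        // caused the string-to-datetime conversion errors later on.
        Console.WriteLine(dt.Columns["untyped"].DataType);  // System.String
        Console.WriteLine(dt.Columns["typed"].DataType);    // System.DateTime
    }
}
```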

I’d then use a foreach loop to run through the object returned from my API call and write the responses into the datatable:

foreach (APIResponseObject response in responses)
{

DataRow row = dt.NewRow();

row["field1"] = response.field1;

row["field2"] = response.field2;

row["field3"] = response.field3;

dt.Rows.Add(row);

}


And then you can pass the data table through a parameterised SQL command:

SqlConnection destination = new SqlConnection(connectionStringDest);

destination.Open();

SqlCommand insertCommand = new SqlCommand(<stored proc name>, destination);

insertCommand.CommandType = CommandType.StoredProcedure;

SqlParameter tvpParam = insertCommand.Parameters.AddWithValue(<tabletype variable name from stored proc>, dt);

tvpParam.SqlDbType = SqlDbType.Structured;

tvpParam.TypeName = <table type name from db>;

insertCommand.ExecuteNonQuery();



I realise the wording for a number of these aspects is loose, as these are more notes for when I need to revisit this, but hopefully it demonstrates the process reasonably well.

SSIS Custom Components

There is a staggeringly large volume of data available on the topic of using c# to create custom components for SSIS. What surprised me was the lack of suitability of that content *in one place* to deal with the issues that arose during my latest implementation.

This is probably more to do with me not being a very proficient developer, overly reliant on Google cut and paste. However, here is a summary of my experience for my own future reference.

The plan was to create (initially) a custom data flow component that would connect to a web service and pass over the contents of a CSV, outputting the result. The CSV is effectively a data sheet of profile information relating to an email address that I need to populate into our email service provider’s service. They publish a web service (SOAP) and a number of sample API calls in C#, which I had previously downloaded and successfully run through Visual Studio.

My first source of information was to review the MSDN documentation around the objects involved, the methods that need to be used etc. in order to generate a pipeline component (it was through googling this that I found out it needed to be a pipeline component).

These resources include:





I then found a very useful blog article where someone had created a pipeline component, so I nabbed the code and ran through it as an example to test whether I could do it – it worked.



There were some initial findings along the way – such as building the class, finding the PipelineComponents folder, and installing the DLLs into the GAC. I’m inexperienced at this sort of thing, so it was a good learning curve.

The working credit card number validator had the bones of what I wanted to do – it accepted a file (in this case Excel, but I found out it could also be a CSV) and processed it. I discovered during this process that the data flow converts data into a buffer object and then passes this buffer through the components, so it made no difference what the file type was; it was the incoming buffer that needed manipulating within the component.

The one thing this sample did not do was connect to a web service, and this is where I hit the first real problems. Essentially I started to create a mashup of samples – taking the number validator as the base code and then lifting the references and methods from the API samples for what I wanted to do. This built fine, but when transferred into SSIS it would fail (by this time I had discovered build events to copy the files to the appropriate directories and install into the GAC, closing and reopening SSIS and refreshing the toolbox between each test, as apparently the toolbox elements are cached on load). It could not find the endpoint. Checking back through, the endpoints were all available in the web service reference files, and the tests in Visual Studio worked. It turns out SSIS components ignore app.config and other settings files. I’m sure this is something to do with it being a closed off component, but the build in VS generated a .dll.config file that was being ignored by the transfer. I found a very useful explanation as to what was happening and how to fix it:
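The upshot is to build the binding and endpoint in code rather than relying on config – a minimal sketch, assuming a WCF-style service reference (the client class name and URL are hypothetical):

```csharp
using System;
using System.ServiceModel;

class EndpointDemo
{
    static void Main()
    {
        // Because the SSIS-hosted component never reads app.config or the
        // generated .dll.config, construct the endpoint details in code.
        BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        EndpointAddress endpoint = new EndpointAddress("https://api.example.com/soap/service"); // hypothetical URL

        // A generated WCF client exposes a (Binding, EndpointAddress) constructor:
        // EmailServiceClient client = new EmailServiceClient(binding, endpoint);

        Console.WriteLine(endpoint.Uri);
    }
}
```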


I’m sure if/when my work moves towards some form of production I’ll need to revisit this when it starts failing again…

This then meant I was moving forwards with just adapting the sample code and plugging in the API examples as and when required.

All was fine using test data, but when I moved on to some more realistic tests it started failing again, this time on the volume of data I was trying to extract into the buffer. It turns out there are limits – again set by bindings in settings files. This is obviously a problem, as SSIS components ignore settings files, so I needed to understand which bindings required changing and then code them into the component directly so they took effect. Doing this then meant working through issues around security etc.
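For the size limits, the values that would normally sit in the bindings section of a config file can be set on the binding object directly – again a sketch, and the limit values themselves are arbitrary:

```csharp
using System;
using System.ServiceModel;

class BindingLimitsDemo
{
    static void Main()
    {
        // Raise the message-size limits in code, since the config file that
        // would normally carry them is ignored inside SSIS.
        BasicHttpBinding binding = new BasicHttpBinding
        {
            MaxReceivedMessageSize = 64 * 1024 * 1024,  // 64 MB
            MaxBufferSize = 64 * 1024 * 1024,           // must match for buffered transfer
            SendTimeout = TimeSpan.FromMinutes(10)
        };
        binding.ReaderQuotas.MaxStringContentLength = 16 * 1024 * 1024;
        binding.ReaderQuotas.MaxArrayLength = 64 * 1024 * 1024;

        Console.WriteLine(binding.MaxReceivedMessageSize);  // 67108864
    }
}
```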





In the end, all works fine – however, it feels a little Heath Robinson, with little planning for errors and proper organisation and coding practice, and some elements I’d like to think more on if I had the time. For example, the web service accepts the CSV file as a byte array, and I found myself hacking a routine that converted the buffer stream (which was originally a CSV file) back into a string structured like a CSV, so I could convert it into a byte array and pass it to the API. Clearly there are easier ways to do this, but this is what is in there at the moment.
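The hack in question looks roughly like this – a simplified sketch in which a string array stands in for the rows read from the pipeline buffer:

```csharp
using System;
using System.Text;

class CsvBytesDemo
{
    static void Main()
    {
        // Stand-in for the rows read out of the SSIS pipeline buffer.
        string[][] rows =
        {
            new[] { "email", "firstname" },
            new[] { "a@example.com", "Alice" },
        };

        // Rebuild a CSV-shaped string from the rows...
        StringBuilder csv = new StringBuilder();
        foreach (string[] row in rows)
            csv.Append(string.Join(",", row)).Append('\n');

        // ...then convert it to the byte array the web service expects.
        byte[] payload = Encoding.UTF8.GetBytes(csv.ToString());

        Console.WriteLine(payload.Length);  // 36
    }
}
```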

Also, dealing with the buffer in general is something I need to familiarise myself with more – there’s a lot to get my head around. As a reference, one aspect I managed was:


For reference, custom tasks are obviously different; here are some references around this:





Why does my analytics not work

Just putting together some references that may form a blog post later: