Tuesday, September 1, 2009

How to write a trigger in Salesforce.com

Often in salesforce.com you want to run some custom code whenever data is inserted into, updated in, or deleted from the database. Luckily, this is easy to achieve with a trigger; the syntax for declaring one is fairly trivial:


trigger {TriggerName} on {SObject} ( {events} ) {
}


- Specify a name for the trigger
- Specify which SObject this trigger will run against (e.g. Account, Opportunity, CustomObject__c, etc...)
- Specify when the trigger will fire. It can be:
  • before insert
  • after insert
  • before update
  • after update
  • before delete
  • after delete

Before we look into the syntax of creating a robust salesforce.com trigger, there's an important concept to understand - that of Governor Limits. Essentially, salesforce is multi-tenanted; that is to say, many different customers' databases run on the same servers. Salesforce cannot risk some rogue script running away with the CPU and grinding everyone to a screeching halt, so they impose governor limits. You physically cannot write an infinite loop; you cannot get more than 1000 rows from a query; you cannot run more than 20 queries in a trigger; and so the list goes on...
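As an aside, Apex has a built-in Limits class you can use to see how close you are sailing to the wind at runtime - handy when debugging. A minimal sketch (the exact numbers returned depend on the context your code runs in):


System.debug('SOQL queries used so far: ' + Limits.getQueries());
System.debug('SOQL queries allowed here: ' + Limits.getLimitQueries());
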
Now, the next thing to understand is that your trigger code will run when you, say, insert a single record from a page; but the very same code will also run when you perform a batch insert (e.g. via the Apex Data Loader), so you may well find a collection of 200 records being passed into your trigger...
So let's say we have a fairly simple scenario where we want to update an Account record with some Contact data (where 1 Account can have many Contacts). The Account objects being updated are passed into the trigger via a handy property called Trigger.new, so we will write something like:


trigger UpdateAccountFromContacts on Account (before insert, before update) {
  for (Account a : Trigger.new) {
    // Loop through each Account being inserted / updated and do work here...
  }
}


So let's say we are trying to populate a field like 'Most recent contact' on the Account, where we record the name of one of its Contacts (for simplicity's sake, let's take the Contact record that was most recently modified).
As we loop through the Accounts, we interrogate the variable 'a' and see that it has a property called Contacts, of type Contact[]. Now, in an ideal world the system would implement lazy-loading and we could simply use a.Contacts in our code, and the system would go and get the data on an as-needed basis; unfortunately this is not so - we will have to query the fields ourselves...
So we need to write a sub-query to get the Contact Name from the Contacts collection in the Account... it will look like this:


SELECT (SELECT Name FROM Contacts ORDER BY LastModifiedDate DESC LIMIT 1) FROM Account


Now the temptation is to stuff that into our for-loop and call it job done; we would have:


trigger UpdateAccountFromContacts on Account (before insert, before update) {
  for (Account a : Trigger.new) {
    // One query per Account - this is the anti-pattern!
    Account accFromDb = [SELECT (SELECT Name FROM Contacts ORDER BY LastModifiedDate DESC LIMIT 1) FROM Account WHERE Id = :a.Id];
    if (accFromDb.Contacts.size() > 0) {
      a.MostRecentContact__c = accFromDb.Contacts[0].Name;
    }
  }
}



But the problem with this is that if we are loading 200 Accounts from some batch loading process like the Apex Data Loader, then we are going to attempt 200 select statements, and the trigger is going to throw a limit exception - because we can only hit the database 20 times in our trigger, remember?
So to get round this, instead of issuing 200 selects WHERE Id = :a.Id, we use the 'IN' keyword so we can issue a single select that gets the records WHERE Id IN :someCollection. It will look like this:


trigger UpdateAccountFromContacts on Account (before insert, before update) {
  // Use a Set, as its add() method will only add an item if it is not already present, hence no duplicates...
  Set<Id> accountIds = new Set<Id>();
  for (Account a : Trigger.new) {
    accountIds.add(a.Id);
  }
  List<Account> accounts = [SELECT (SELECT Name FROM Contacts ORDER BY LastModifiedDate DESC LIMIT 1) FROM Account WHERE Id IN :accountIds];
}


So all that remains is to dump the accounts (complete with Contact names) into a map keyed by Account Id, then loop through the Trigger.new Account list again, setting each Account being inserted/updated with the freshly-queried data from the db...
Our final trigger reads:


trigger UpdateAccountFromContacts on Account (before insert, before update) {
  // Use a Set, as its add() method will only add an item if it is not already present, hence no duplicates...
  Set<Id> accountIds = new Set<Id>();
  for (Account a : Trigger.new) {
    // On insert the records have no Id yet (and a brand new Account has no Contacts anyway)
    if (a.Id != null) {
      accountIds.add(a.Id);
    }
  }
  Map<Id, Account> accountMap = new Map<Id, Account>([SELECT (SELECT Name FROM Contacts ORDER BY LastModifiedDate DESC LIMIT 1) FROM Account WHERE Id IN :accountIds]);
  for (Account a : Trigger.new) {
    Account accFromDb = accountMap.get(a.Id);
    if (accFromDb != null && !accFromDb.Contacts.isEmpty()) {
      a.MostRecentContact__c = accFromDb.Contacts[0].Name;
    }
  }
}


So yes, it is slightly convoluted, and yes there is a CPU performance penalty, because we need to loop through the same collection *twice*.

These days in C#, we are using LINQ and lambda expressions to minimise our usage of costly foreach loops, and here we are in Apex doing the exact same loop twice!!

But the gain, of course, is that we have removed (up to) 199 calls to the database, which from a performance perspective more than makes up for it (and also allows us to play nicely on the multi-tenanted servers over in San Francisco).
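One last thing worth knowing: salesforce won't let you deploy Apex to a production org without test coverage, so a trigger like this needs a test class to go with it. A minimal sketch (the class and method names are mine, and it assumes the MostRecentContact__c field from above):


@isTest
private class UpdateAccountFromContactsTest {
  static testMethod void setsMostRecentContactOnUpdate() {
    // Illustrative test data: an Account with a single Contact...
    Account acc = new Account(Name = 'Test Account');
    insert acc;
    insert new Contact(LastName = 'Smith', AccountId = acc.Id);

    // ...then update the Account to fire the before update trigger
    update acc;

    acc = [SELECT MostRecentContact__c FROM Account WHERE Id = :acc.Id];
    System.assertEquals('Smith', acc.MostRecentContact__c);
  }
}
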

Thursday, August 6, 2009

Entourage is sick

I just love the HBO show Entourage... It's superb for so many reasons; here are just a few that I noticed:

- The partnerships. In many good shows there will be a pair of characters whose chemistry really makes the show fizz; they may not even be the main characters. Think Joey and Chandler in Friends; think Ted and Dougal in Father Ted. Entourage is full of great double-acts: Vinny and E, who stick together through thick and thin; Turtle and Drama, who love nothing more than to bicker and annoy each other; and the finest partnership of the show: bullish, uber-arrogant, fast-thinking, fast-talking, foul-mother-fucking-mouthed Ari Gold and his subservient, hard-working assistant Lloyd, who takes more crap than is reasonable for any one man, but somehow comes through with his dignity intact.
- The continuity. 6 seasons in and *all* the main protagonists are still there, and have been all the way through.
- The writing. As they drag you through the roller-coaster ride of Vince's career, you find yourself desperately wanting him to succeed. It matters whether he does well, because you like him. Making the audience give a shit about what happens next is an art - and Entourage has it down pat.

My favourite episode is still from Season 3 - Episode 9 "Vegas, Baby, Vegas!" which has a punch-up involving the boys taking on Seth Green's crew, Ari completely losing the plot at the Roulette table and best of all: Johnny Drama accidentally wooing his male (heterosexual) masseur into bed... What else could you ask of a comedy?



Wednesday, July 22, 2009

Binding to an ObservableCollection in a WPF ListBox

Bind your items control to an ObservableCollection of objects that implement INotifyPropertyChanged, and raise PropertyChanged within the setter of each property you want to see change in your UI.

the wrong way:
- Declare a private instance of ObservableCollection, and a public getter that returns this instance
- Bind to the property (which is still null)
- Do work to get data (triggered by some event, like a button click)
- Set the ObservableCollection to be a new instance and populate it with data
- Wonder why no changes are propagating to the UI
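
In code, the wrong way boils down to something like this (a minimal sketch; class and property names are illustrative):


using System.Collections.ObjectModel;

public class PeopleViewModel
{
    // Bound by the view while still null...
    public ObservableCollection<string> People { get; private set; }

    public void LoadData()
    {
        // ...then replaced with a new instance. The view is still holding the old
        // (null) reference and hears nothing about the swap, so the UI stays empty.
        People = new ObservableCollection<string> { "Alice", "Bob" };
    }
}
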


the right way:
- Instantiate the private ObservableCollection in the *constructor* of the ViewModel
- Bind to it (an empty *but non-null* collection)
- Do work to get data (triggered by some event, like a button click)
- Populate the (already instantiated and bound) collection with data
- Bask in glory as the UI fills with data goodness
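
And the right way, again as a minimal sketch with illustrative names (the ListBox is bound with <ListBox ItemsSource="{Binding People}" /> and this ViewModel as its DataContext):


using System.Collections.ObjectModel;

public class PeopleViewModel
{
    // Instantiated once, in the constructor, so the binding target is never null
    public ObservableCollection<string> People { get; private set; }

    public PeopleViewModel()
    {
        People = new ObservableCollection<string>();
    }

    // Called later, e.g. from a button-click command
    public void LoadData()
    {
        // Populate the existing, already-bound collection - don't replace it;
        // ObservableCollection raises CollectionChanged for each change, so the UI updates
        People.Clear();
        People.Add("Alice");
        People.Add("Bob");
    }
}
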


obvious when you think about it...

Sunday, June 14, 2009

Development as a Service


** As per usual - this is just me thinking aloud, and is not intended in any way to represent my employer's position **


I used to be fairly sure what my job description was, but as time marches on and things change, the clarity crumbles... Safe to say, I'm a software guy. My current role is at a large NZ insurance company, helping them implement a system on salesforce.com, which, as you are probably aware, is one of the world's leading Software as a Service (SaaS) providers. Like most systems, the core product is good for 80% of what you need, but won't cover the other 20%, and we must close the gap (between what you need and what the product provides) using a combination of techniques: customisation, configuration, and (where the result is still not close enough) writing code.

But as a solution provider, how should you allow your partners/customers to place their own code into your code-base? Salesforce.com have exposed a complete development 'layer', which allows us (the developers) to create html pages in a mark-up language called Visualforce, and to control the behaviour of those pages by linking them to code written in a language called Apex, which has an OO, java-like syntax. The whole development platform is hosted in the cloud... changes to the code files get pushed up, and if they don't compile, they won't save... everything's hosted, hence 'Development as a Service'.


I have been working with DaaS for 10 months now, and I am aching - longing - yearning - to write code on my PC again... Writing code in the cloud can be monumentally frustrating, especially when one is constrained by NZ's notoriously poor infrastructure.


I was recently asked by someone 'what about offering a hosted platform - is there a need for it?', and whilst I completely failed to answer the question, I did have the following thoughts:

From a customer's perspective, here's what's really great about consuming SaaS, or more specifically *Solutions* as a Service:
- When the underlying OS/db server needs patching (Microsoft, Oracle, Linux, whatever) then I (the consumer) do not need to worry about implementing the patch or regression testing the base solution.
- I do not need to purchase hardware or software (boxes, OS, database servers, etc).
- Therefore, I do not need to hire professional expertise to manage the hardware/software/system.
- The software is already implemented and being served. All I need is a login and I'm away. I'll still need to apply config changes, but I'd need to do those on top of a local system implementation anyway...

And here's a few of the reasons why you just shouldn't bother:
- I (still the consumer) do not have control over my own servers. If the service provider rolls out a patch, then I *have* to regression test my customisations and implement any fixes as needed, whether I currently have the capacity to manage this or not (I might be tight up against a project deadline, with no spare resource just to make sure that what was working last week is still working). If I choose not to take the patch, then I run the risk of drifting into the dreaded 'unsupported' territory...
- I will need to ensure I have thumping quick internet from my ISP, which may mean additional (hidden) costs, and
- I will need to ensure I have good proxy servers that can route the traffic quickly and robustly. Typically, companies have proxy servers that deal with people browsing for bits of this or that to do their job, and maybe looking at the Herald/TradeMe during lunch... now, they will have (n) users hammering the internet getting their data up and down the wire all day every day. Again, this could well mean expensive upgrades and extra costs...
- The two points mentioned above (the ISP and the proxy servers) have become additional points of failure in the uptime of my system, not to mention the actual application servers (the physical boxes located over in San Francisco* for Salesforce.com). There is no evidence to suggest that hosting solutions on internal servers gives greater uptime than hosting in the cloud, incidentally, but I can tell you from experience that it is *substantially* more annoying when they go down and it wasn't your fault, and there's nothing you can do to fix it...

* Production salesforce.com servers are located in Asia for the Asia-Pacific (and hence NZ) market, but we still need to push our code all the way to San Fran and back to hit the development servers (or 'sandboxes', as they are quaintly named)...

- You may or may not have thoughts on the security of storing your data in servers located overseas. Often, companies keep their (confidential, sensitive) data in servers that are physically located in locked server rooms...


The bottom line, as far as I can see, is that it's important for SaaS providers to understand what is so awful for an average company about running applications on servers... and then remove that pain:

The provision of cloud-based services is a good basic premise, as long as the provider can offer a more compelling option - browsing to an application using just the web browser of your choice - than creating, hosting, distributing and supporting your own LOB applications. By more compelling, I mean:
- Better quality software / solutions
- Smaller overall TCO (Total Cost of Ownership), remembering that the boxes themselves are just about as cheap as chips these days.

There are always pros and cons to any system, of course. But consumers need to carefully ensure that the pros outweigh the cons before jumping on the SaaS bandwagon.
From what I can see, Development as a Service still has far too many cons to outweigh the pros...

Friday, May 22, 2009

public string bar { get; set; }

In the old days, I used to write code in VC++ 6.0, and my properties would look like this:

public class foo
{
    private string bar;

    public string getBar()
    {
        return bar;
    }

    public void setBar(string value)
    {
        bar = value;
    }
}

Then, about 6 million years ago, .Net gave us auto-implemented properties, and the code would look like this:

public string bar { get; set; }

Much nicer... cleaner and easier to read, less tedious to write... everyone's a winner, baby.


Now, when I write code in Apex - salesforce.com's very own development language (not quite java, not quite .Net, not quite good enough) - I instinctively use the latter syntax, especially given that elsewhere in the code base, usage of the property is fully supported,

e.g.
foo.bar = 'Beyond All Recognition';
String b = foo.bar;

Now, in a Visualforce page, one binds to values in a controller class using the following syntax:

<apex:page controller="foo">
  <apex:outputText value="{!bar}" />
</apex:page>


So the page needs to be able to see 'bar' from the controller, foo. But we've exposed bar through a public getter, so we'll be fine, right?
WRONG!

No, we need to explicitly write a public method that returns bar as a string. 
Always? Nope, not always - sometimes the getter property will be fine. 
And what will happen if the page can't see the property? Anything... literally anything... I wasted 8 hours last week chasing one of these, because the string I was trying to show on the page was not exposed explicitly by a public string getBar() method, and the page, when rendered, helpfully told me 'This URL no longer exists... your bookmark may be out of date or you may be a half-wit. Thanks for using Salesforce!' (or similar)
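
For the record, the belt-and-braces version that keeps the page happy looks something like this - a minimal sketch, and (as above) whether you actually need the explicit method seems to vary:


public class foo {
  private String bar;

  // Explicit accessor - the Visualforce expression {!bar} resolves to getBar()
  public String getBar() {
    return bar;
  }

  public void setBar(String value) {
    bar = value;
  }
}
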

Coincidentally, on another 'URRGGHH!! IT SHOULD WORK!!!' problem I was having last week, I had written a page that loaded a few defaults, then allowed the user to tweak the values they wanted and press 'submit' when they were happy... But I foolishly didn't write a getBar() method, and the 'submit' button was happily reloading the defaults immediately before sending the values off to the database...


So Monday morning beckons, and I know it will be filled with refactoring, retesting, refactoring, and more retesting...

Wednesday, March 18, 2009

Duathlon. Part 2

I did my event at the weekend... I was pleased that I'd got through it, but a few things niggled me:
1. The warden misdirected me (and about half the field!) so I didn't run over the electronic sensors at the first interchange, and didn't get a split time for my first run or the bike... I just know that those first 2 legs took 1:09. I think the 4k run was about 24 mins, which means the 13k cycle must have been about 45. I do know the last 2k run took me just over 14 mins, which means someone walking briskly could've kept pace with me!! (But at least I achieved my pre-race goal of not stopping to walk, and not getting off the bike to push). However...
2. I had woefully under-prepared for the cycle leg. I'd convinced myself that the course would be flat and that I could rest on the bike if I was tired from running... nope! There were a couple of challenging hills to climb, and at one point (I kid you not) I was going so slowly at the top of a hill, just levelling out onto the flat, that a young girl with a pretty basket on the front of her bike came zooming past me!!! (Well, she had just pushed her bike up the hill, you see)
3. I didn't win a spot-prize.

All up, though, it was a good experience, and having taken Monday off to rest, yesterday I started training for the next one: an event at Mission Bay, about 8 weeks away. I think the short-course is 20k cycle + 5k run, so I need to train hard. Well, at least that one *will* be flat...

Wednesday, February 25, 2009

Duathlon.

Without ever consciously deciding to set New Year's Resolutions, I found myself, a couple of months ago, just a little bit too large around the middle, and for the first time I actually cared.
With some gentle prods from home (The Wife) and work (Gareth, the super-fit Project Manager who just did a half-ironman!), I joined a gym, and at around the same time I caught up with my mate Jules, who told me he was training for a duathlon in March.
Buoyed with a new-found lust for life (and a couple of Steinlager Pures), I heard myself saying 'ooh, I'll do that - it'll be a larf'.

So now I'm a couple of weeks away from actually competing and starting to wonder what I've let myself in for... The 'open' age group is 17-39 and I was tempted to enter the 'vets' and pass myself off as 40...

The event itself is a 'short course'. My perspective on what counts as a short distance in running/cycling is rapidly changing, but a 4km run + 12km cycle + 2km run doesn't SEEM short to me...
Incidentally, I'm not doing the 400m swim on account of how I do not wish to drown.

Training is going OK-ish (although I have no frame of reference...), but I feel I should be doing more *real* running/cycling. I keep finding good excuses for staying in the gym: I can set the pace on the treadmill; I can stay out of the sun (pasty-white-Englishman syndrome); it's too hilly round where I live... but with only a couple of weeks to go, I guess I'm going to have to harden up and put a couple of training runs in before the real event creeps up on me!
Maybe I'll slap some sunny on tomorrow and see if Gareth wants a gentle trot round Takapuna lake...