
Integrating Knowledge Based Content Inside of Salesforce

Tim is a Senior Consultant at ThreeWill. He has 15 years of consulting experience designing and developing browser-based solutions using Microsoft technologies. His experience over the last 8 years has focused on the design and implementation of SharePoint Intranets, Extranets, and Public Sites.
Danny Ryan:Hello, and welcome to the ThreeWill Podcast. This is your host, Danny Ryan, and I have Tim here with me today, Tim Coalson. How are you doing, Tim Coalson?

 

Tim Coalson:I’m doing well, doing well.

 

Danny Ryan:Excellent. Good to see you.

 

Tim Coalson:Thank you. Good to be here.

 

Danny Ryan:Thanks. We’ll get our little quarterly get-together here.

 

Tim Coalson:Yes.

 

Danny Ryan:You’ve been busy. You’ve been very busy lately.

 

Tim Coalson:Yeah, we’ve been doing some integration of a support site with Salesforce. Our first integration was with cases: Salesforce has case objects that employees at this particular company use to manage cases on the backend, and we display those through a public support site so that customers can log in and see the status of any ongoing cases or view a closed case to understand what the resolution was.

 

Danny Ryan:Nice.

 

Tim Coalson:This time, we are integrating with the knowledge base content inside of Salesforce; so basically on the Salesforce side, they customized the knowledge base to have a particular version that’s specific to this company, with various attributes that this company cared about. Then on the support site, we are consuming that content.

 

Danny Ryan:Okay.

 

Tim Coalson:What was interesting is, normally we hook into an index, like with a product like Solr that indexes content, and we interface with that index. With Salesforce, they don’t really give you immediate access to that index, so all of our search capability for knowledge base content goes through Salesforce itself. A customer logs in, they have questions around a certain product, and they can either type in a keyword search or click on a product, and we’ll go pull knowledge base content based upon that product. As for our integration with Salesforce in this case … In our earlier integration, we went through an enterprise service bus. In this case, we’re actually going directly against Salesforce using their APIs.

 

Danny Ryan:Very nice.

 

Tim Coalson:For search capability, you use a language called SOSL, S-O-S-L, as opposed to the other query language inside of Salesforce, SOQL, S-O-Q-L. SOQL is a query language, while SOSL is a search language. It has a little bit different syntax, but so far it seems to be very powerful, so it’s just interesting learning all the ins and outs of, “How do I get the most out of the Salesforce search through a REST interface?”
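
For readers who want to see the shape of such a call, here is a minimal sketch of a SOSL search through the Salesforce REST API; the API version, token handling, and object names are illustrative assumptions, not the project's code:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SoslSearchExample
{
    static async Task Main()
    {
        string instanceUrl = "https://example.my.salesforce.com"; // hypothetical org
        string accessToken = "<OAuth access token>";              // obtained elsewhere

        // SOSL finds a term across objects; compare with SOQL, which queries one
        // object, e.g. SELECT Id, Title FROM KnowledgeArticleVersion WHERE ...
        string sosl = "FIND {printer setup} IN ALL FIELDS " +
                      "RETURNING KnowledgeArticleVersion(Id, Title, Summary)";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", accessToken);

            string url = instanceUrl + "/services/data/v37.0/search/?q=" +
                         Uri.EscapeDataString(sosl);
            string json = await client.GetStringAsync(url); // JSON list of matching records
            Console.WriteLine(json);
        }
    }
}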

 

Danny Ryan:Let me get this straight. What are you using, again? What’s the front-end technologies?

 

Tim Coalson:The front end is a .NET technology.

 

Danny Ryan:Okay, and then, so you’re using that for the UI and all that good stuff.

 

Tim Coalson:Right, so we’re using .NET and all of the modern plugins too: .NET, and CSS, Bootstrap, and other things to provide …

 

Danny Ryan:It’s a responsive site that …

 

Tim Coalson:Provides some of the responsive, and also some other plugins to provide some other features on the site, like breaking news, scrollable-type content, things like that.

 

Danny Ryan:Nice.

 

Tim Coalson:Then …

 

Danny Ryan:Then-

 

Tim Coalson:Yeah, so in this case, from the .NET side, we’re making REST calls into the … In this case, it’s actually a custom REST interface that we created, because not only did we want to get results, we also wanted facet values: certain values like product names, types of forms, and other things that are specific to this industry. We wanted customers to be able to drill into their search results based upon these different facets. So not only is this service retrieving the content using the SOSL language, we’re actually building the SOSL query on the fly based upon whatever text they input or whatever products or other facets they select. We get back the results, and then, as part of the result set, we actually do counts of the content so we can provide various facet values on the left that will allow them to further drill into this content.
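
A hedged sketch of how such a service might compose the SOSL string on the fly; the object and field names here (KnowledgeArticleVersion, a Product__c custom field) are illustrative assumptions, not the client's actual schema:

using System.Collections.Generic;
using System.Linq;

static class SoslBuilder
{
    // Compose a SOSL query from the user's search text plus any selected facets.
    public static string Build(string searchText, IList<string> selectedProducts)
    {
        // Escape SOSL's reserved braces in the user's text (simplified).
        string term = searchText.Replace("{", "\\{").Replace("}", "\\}");

        var filters = new List<string> { "PublishStatus = 'Online'" };
        if (selectedProducts.Count > 0)
        {
            var quoted = selectedProducts.Select(p => "'" + p.Replace("'", "\\'") + "'");
            filters.Add("Product__c IN (" + string.Join(", ", quoted) + ")");
        }

        return "FIND {" + term + "} IN ALL FIELDS " +
               "RETURNING KnowledgeArticleVersion(Id, Title, Summary, Product__c " +
               "WHERE " + string.Join(" AND ", filters) + ")";
    }
}

The facet counts Tim mentions would then be tallied over the returned result set before it is handed back to the UI.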

 

Danny Ryan:Very nice. Very nice. Then you were mentioning, with the search that they, who is, what’s indexing that content … Is that something that you’re using from Salesforce, as far as the search, or is the search something that you’ve built, or what is it?

 

Tim Coalson:No, the search is, it’s part of the Salesforce cloud platform. Now, what I’ve been told is, behind the scenes, Salesforce also uses Solr.

 

Danny Ryan:Okay.

 

Tim Coalson:I don’t know if that’s true or not, and for us, it really doesn’t matter, because we’re accessing all the content through the API, through the SOSL language, so …

 

Danny Ryan:Solr is this, is like an open source search engine, or something along the lines, something like that?

 

Tim Coalson:Yeah, it’s … I don’t know if it’s open source, I think it’s maybe a …

 

Danny Ryan:It’s a popular search engine, yeah.

 

Tim Coalson:Right, it’s a pretty popular one. In fact, this customer was already using it to crawl their content. One of the challenges with our search results is we really have two sources of knowledge base content: we have help files, which are still indexed by Solr, and then we have Salesforce content. Because they’re two separate search engines, there’s really no common factor between the two result sets to be able to merge those in any kind of meaningful way. Right now, we’re just alternating our responses between knowledge base content and help content until one or the other runs out, but …

 

Danny Ryan:Okay.

 

Tim Coalson:In the future, as we learn more, we may find there’s a better way to integrate the two results, but for now, we’re just kind of interweaving those together.
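
In code, the interleaving Tim describes might look something like this minimal sketch (the result type is a generic placeholder):

using System;
using System.Collections.Generic;

static class ResultMerger
{
    // Alternate items from the knowledge base results and the help-file results
    // until both lists run out.
    public static List<T> Interleave<T>(IList<T> kbResults, IList<T> helpResults)
    {
        var merged = new List<T>(kbResults.Count + helpResults.Count);
        int max = Math.Max(kbResults.Count, helpResults.Count);
        for (int i = 0; i < max; i++)
        {
            if (i < kbResults.Count) merged.Add(kbResults[i]);
            if (i < helpResults.Count) merged.Add(helpResults[i]);
        }
        return merged;
    }
}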

 

Danny Ryan:Got you, got you. This is … This is, you’re working on an upcoming release of the site, I’d imagine? Something in the next couple of months or so, or …

 

Tim Coalson:Yeah, we’ve got different … This, we’ve actually gone live with, and now we’re making some changes as we’ve learned more and seen customers interact with it. We’re making further refinements to the knowledge base search, so that’s kind of what we’re working on now. We’ve actually got some future releases where we’ll be integrating with some other backend systems, and this time, we’ll probably be going through the enterprise service bus to actually retrieve customer, or sales representative information; so when a customer wants to identify, “Who do I talk to, to get further information about a certain product,” this will be a new system that will … A new functionality that we’ll add that will allow the customer to go through a wizard, and if they already have an account, they can specify their account number, and we’ll pull the appropriate information.

 

Otherwise, we’ll take them through a series of questions that will help identify what particular type of company they are, how big their company is, and where they’re located. Using all that criteria, we’ll make a call to this enterprise service bus, which will reach out to several backend systems and determine, based on that information, who this person’s sales representative is. Then we’ll give the customer an option: They can either call the sales rep based upon the information returned, or we’ll allow them to click a button, enter in some minimal information, and actually have the sales representative contact them. From that perspective, we’ll be creating a lead inside of Salesforce, so just some further integration with Salesforce.

 

Danny Ryan:Is there any online chat or anything like that that’s for real time?

 

Tim Coalson:They do have online chat. We haven’t worked on that piece; that was a piece that already existed …

 

Danny Ryan:Okay.

 

Tim Coalson:… so I’m not really familiar, too much, with what technologies or what company they’re using for them, but they do have that capability.

 

Danny Ryan:After these, integrating with a knowledge base, and, what, is there something … Is that, sort of, the end of the next version, and moving on to new things after that, or what is that?

 

Tim Coalson:Yeah, we’ve got some other things coming up. We’re going to enhance the customer’s ability to make various types of payments to their account, so we’ll give them more flexible payment arrangements.

 

Danny Ryan:Nice.

 

Tim Coalson:That’s coming up in a future release as well, so … Just trying to give the customer as much capability as they can online. First of all, it helps the customer. Most people don’t want to have to call a company and deal with a person if they can do it online, so that’s … Excuse me, one advantage.

 

Danny Ryan:Self-service is always good.

 

Tim Coalson:Exactly. Yeah, and for the company, obviously. Fewer calls means fewer people necessary to support it, which is also a benefit for the company, so …

 

Danny Ryan:Absolutely. Well, cool, I appreciate you doing this little update. It sounds like you’re a little hungry, your tummy’s been growling over this whole podcast. People were listening …

 

Tim Coalson:I think that’s the thunder outside, so …

 

Danny Ryan:Oh, yeah, that … That’s what it is. If you’re listening in, sorry, we had a little action going on outside with a thunderstorm, so you probably heard that every once in a while. It’s not Tim’s stomach. It’s not my stomach, but, yeah, a little rain coming down in Georgia today. Well, I appreciate you doing this. It sounds like you’re doing great work on the project. Thanks for all your hard work there.

 

Tim Coalson:All right. Thank you, Danny.

 

Danny Ryan:Awesome. Thanks so much, Tim. Thanks, everybody, for listening. Take care. Bye bye.

 


Syncing SharePoint User Profiles

John Underwood is a Technical Evangelist at ThreeWill. He has nearly thirty years of software development experience. He is an experienced technical instructor with superior presentation skills and is proficient in delivering standard curriculum as well as developing and delivering custom curriculum.

How Can You Be In Two Places At Once?

We often joke about needing to be in two places at once in the real world, but in the software world we often have the opposite problem: data about persons in our enterprises frequently appear in multiple places, with the resulting challenges about keeping the information up to date. This can even be true within the confines of the SharePoint world, especially in those companies that are implementing hybrid environments between on-premise SharePoint 2013 and Office 365/SharePoint Online. Specifically, there is the challenge of managing the user profiles between the two environments. Unfortunately (as of this writing) Microsoft acknowledges that there is no out-of-the box solution for solving this problem, so until such time as Microsoft provides an answer we’re stuck with solving this ourselves.

Recently, we had the opportunity to help a customer with this very challenge. They decided to implement a hybrid environment, but wanted consistent data between on-premise and cloud user profiles as well as correct and relevant results concerning people search.

Who’s in charge here?

If we have a hybrid environment, then users that will visit on-premise and cloud-based environments will have two user profiles. Which one will be authoritative? How do users know which one to modify? How do we keep them from changing the wrong profile? How do we sync changes between the two environments? Here’s an overview of how we solved this problem:

1. The cloud-based user profile is authoritative

For this customer, certain users will only access the cloud, so the cloud-based profile must be the profile of record. However, the other reason to favor this approach is that we have greater customization options on-premise when it comes to “hiding” the user profile. Which leads to our next point…

2. The on-premise user profile needs to be “hidden from view” as much as possible

The URL for on-premise “My Sites” is configured to a local SharePoint Web Application that contains no content. At the root of this SharePoint Web Application is a custom ASPX page that intelligently handles redirection to and from the cloud environment.

Therefore, any attempt on the user’s part to navigate to the local profile will simply send them to the cloud.

One might ask why this custom code is necessary; after all, we could simply change the on-premise “My Sites” to point to the cloud and be done with it. However, there were two reasons why this wouldn’t suffice (a sketch of the redirect logic follows the list):

  • There was a high likelihood that someone has bookmarked profile or “My Sites” information in their browser favorites at a previous point in time
  • Some intelligence was needed to make sure that redirection occurred correctly when users navigate “back” from the cloud (i.e. pressing the “back” button in the browser).
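
A minimal sketch of what such a redirect page might look like; the URLs, the referrer check, and the class name are illustrative assumptions rather than the actual implementation:

using System;
using Microsoft.SharePoint.WebControls;

public class ProfileRedirectPage : UnsecuredLayoutsPageBase
{
    // Hypothetical URLs; the real values would come from configuration.
    private const string CloudMySiteUrl = "https://contoso-my.sharepoint.com/";
    private const string OnPremHomeUrl = "http://intranet/";

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // If the user is navigating "back" from the cloud, don't bounce them
        // straight back again; send them to the on-premise home page instead.
        Uri referrer = Request.UrlReferrer;
        if (referrer != null &&
            referrer.AbsoluteUri.StartsWith(CloudMySiteUrl, StringComparison.OrdinalIgnoreCase))
        {
            Response.Redirect(OnPremHomeUrl, true);
        }
        else
        {
            Response.Redirect(CloudMySiteUrl, true);
        }
    }
}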

3. The on-premise data needs to be uploaded to the cloud

In order to have a proper starting point, the on-premise profile data must be duplicated to the cloud. This is a one-time conversion involving custom code that reads the local profile data and inserts it into the cloud-based profile.
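
One plausible shape for that conversion, reading each profile with the server object model and writing it to SharePoint Online through the user profile CSOM; the property names and the account-mapping rule are assumptions for illustration:

using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.UserProfiles;

static class ProfileUploader
{
    // onPremSite: any site in the on-premise farm; cloudContext: a ClientContext
    // already authenticated against SharePoint Online.
    public static void UploadProfiles(SPSite onPremSite, ClientContext cloudContext)
    {
        SPServiceContext serviceContext = SPServiceContext.GetContext(onPremSite);
        var localProfiles = new UserProfileManager(serviceContext);
        var cloudPeople = new PeopleManager(cloudContext);

        foreach (Microsoft.Office.Server.UserProfiles.UserProfile profile in localProfiles)
        {
            // Hypothetical mapping from the on-premise account to the cloud account claim.
            string cloudAccount = "i:0#.f|membership|" + profile["WorkEmail"].Value;

            cloudPeople.SetSingleValueProfileProperty(
                cloudAccount, "AboutMe", profile["AboutMe"].Value as string);
        }
        cloudContext.ExecuteQuery();
    }
}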

4. Cloud-based changes must be synchronized back to on-premise profile

Why do we need to sync data back from cloud to on-premise if we’re hiding the on-premise profile? Well, try as we might, it is simply impossible to fully hide the local profile information. As an example… any on-premise list that has a people field will draw picture and other information about the person from the on-premise profile. In addition, on-premise people search results will reflect data from the on-premise user profile. Therefore, any changes made in the cloud must be reflected on-premise.

We solved this problem with custom code that runs in a timed batch job. The code reads profile changes from the cloud and updates the local on-premise profile to match.
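
A sketch of how such a timed batch job might be packaged as a SharePoint timer job; the ProfileSyncHelper class is a hypothetical stand-in for the sync code described above:

using System;
using Microsoft.SharePoint.Administration;

public class ProfileSyncJob : SPJobDefinition
{
    public ProfileSyncJob() : base() { }

    public ProfileSyncJob(string name, SPWebApplication webApp)
        : base(name, webApp, null, SPJobLockType.Job) { }

    public override void Execute(Guid targetInstanceId)
    {
        // Read profile changes from the cloud and update the matching
        // on-premise profiles (hypothetical helper, not shown here).
        ProfileSyncHelper.SyncCloudChangesToLocalProfiles();
    }
}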

Learning more about Hybrid Environments

Want to learn more about Hybrid Environments, including challenges when integrating SharePoint Search? Check out the video replay of our recent webinar on the topic.


SharePoint Perceptions – A New Way of Looking at Things

John Underwood is a Technical Evangelist at ThreeWill. He has nearly thirty years of software development experience. He is an experienced technical instructor with superior presentation skills and is proficient in delivering standard curriculum as well as developing and delivering custom curriculum.

Many of us are familiar with the picture above or similar pictures that play tricks with our mind. We stare intently and see one thing, only to have another thing emerge upon further review. Studies show that the effect is even stronger when we are preconditioned to believe we are going to see a certain picture. In the book The 7 Habits of Highly Effective People, a similar picture is used to illustrate how strongly our preconceived notions can influence what we see. Two groups of people are shown the same picture. One group has been told they will see a picture of a young woman; the other has been told they’ll see a picture of an elderly woman.

When the picture is finally shown to each group they easily identify the young or old woman based on what they’ve been told to expect. Each group is convinced that they are right and have trouble seeing the other group’s perspective. Only when the two groups begin to communicate and review the picture together can they see each other’s view.

I have experienced a similar effect recently in my professional life. For years I’ve used SharePoint as a way to manage the content for the programming courses that I’ve written. I’ve used it to create document libraries for slide decks and student manuals. I’ve used lists to track questions asked by students. I’ve set up team sites to share information with students that have attended my classes. I’ve written surveys to track customer satisfaction. I feel as though I’ve employed SharePoint enough in my daily work to have a pretty good grasp of what it is and what it does. And yet, in the last 3 weeks I’ve been confronted with the notion that, perhaps, SharePoint is something more than what I perceived it to be.

In early August I started a new job at ThreeWill as a Technical Evangelist. This new job will entail presenting SharePoint to everyone from executives to end users to developers. I was informed in my very first week that a big part of that message will be “SharePoint 2010 as a Product Platform.” Clearly I knew that it was possible to customize SharePoint through application pages, Web Parts, and custom workflows. I knew that we could use these tools to tweak SharePoint and make it behave in a way that more closely fits the way a certain team or department does business. But the notion that it could be used as a foundation for building entire applications – even applications that ISV’s would sell as a shrink-wrapped product – is something that I had never considered.

ISV’s or corporate developers may choose to engage SharePoint in the following ways:

  • Connect: building software that will connect SharePoint with other applications. As an example, enabling SharePoint search results to include data from a company’s existing database applications.
  • Extend: creating Web Parts, Workflows, and employing other out-of-the-box SharePoint components to create a highly-customized application for meeting a business need.
  • Build-on: SharePoint becomes a library of functionality that can be consumed from a custom Windows, ASP.NET, or Silverlight application. Each of these applications uses the Client Object Model library to call into SharePoint and consume its functionality (see the sketch after this list).
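
To make the “build-on” option concrete, here is a minimal Client Object Model sketch (the site URL is illustrative):

using System;
using Microsoft.SharePoint.Client;

class ClientOmExample
{
    static void Main()
    {
        using (var context = new ClientContext("http://intranet/sites/team")) // hypothetical site
        {
            Web web = context.Web;
            context.Load(web, w => w.Title);
            context.ExecuteQuery(); // single round trip to the server

            Console.WriteLine("Site title: " + web.Title);
        }
    }
}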

As a developer I think about the features of SharePoint and how they map to applications that I’ve written in my career. I can remember building an application that had to capture and track medical images. There was a significant amount of effort exerted in building a subsystem to manage and track the image files. Today a SharePoint document library can readily provide that functionality with little or no programming required. Likewise, I’ve worked on an application that implemented a massive custom workflow engine. 75% or more of that application could have easily been constructed using SharePoint workflows. As a developer I found it intellectually stimulating to work on these complicated subsystems; but the reality for most companies is that it’s more profitable for their developers to write business logic than it is to write complicated, plumbing-oriented code.

Some have used the phrase “Business Operating System” (video on Steve Ballmer’s take on the subject at the SharePoint 2010 Conference) to describe the set of features presented to developers by SharePoint. It is a vast set of capabilities to allow developers to quickly build sophisticated applications while at the same time permitting a support model that will fit nicely with the IT infrastructure of a company that has already adopted SharePoint for more “traditional” purposes. For ISV’s and corporate developers that choose to use SharePoint as a foundation, they will also find that it opens up new opportunities inside companies that have already invested in SharePoint and are eager to maximize the return on that investment.

So when it comes to something as simple as a picture – or as complicated as a software development platform – don’t be afraid to step outside of preconceived notions and look at something in a new way.


Google for Geocoding

Eric Bowden has over 19 years of software development experience around enterprise and departmental business productivity applications.

I recently used the Google maps API in a list event handler to automatically populate the longitude and latitude values for items in a list.

If you too need geocoding, follow these easy steps:

  1. Get your own Google Maps key by signing up at http://code.google.com/apis/maps/signup.html
  2. Read up on how to access the Maps API directly using HTTP requests at http://code.google.com/apis/maps/documentation/services.html
  3. Copy/paste the class below into your own .cs file (I was just kidding about step #2, you don't actually have to read)
  4. Be sure to include using System.Xml and using System.IO

There were a few twists and turns necessary in my case, but that's the core of simple geocoding with Google.

public class GoogleMaps
{
    public static string Status_ok = "200";

    public static string Status_UnknownZipcode = "Unknown zipcode";

    public static string GetGeoCode(string zipcode, string country, string key,
        out string url, out double longitude, out double latitude)
    {
        string ret = "";

        url = "";
        longitude = 0;
        latitude = 0;

        try
        {
            // Return out the url for debugging
            url = string.Format("http://maps.google.com/maps/geo?q={0}&output=xml&key={1}",
                zipcode + "," + country, System.Web.HttpUtility.UrlEncode(key));

            // Create a web request against the Google Maps site
            System.Net.WebRequest webReq = System.Net.HttpWebRequest.Create(url);

            // Get the response stream so we can read the reply to a string
            StreamReader rdr = new StreamReader(webReq.GetResponse().GetResponseStream());
            string xmlString = rdr.ReadToEnd();
            rdr.Close();

            XmlDocument topNode = new XmlDocument();
            topNode.LoadXml(xmlString);

            // Create namespace aliases so we can drill down into the results with XPath queries
            XmlNamespaceManager ns = new XmlNamespaceManager(topNode.NameTable);
            ns.AddNamespace("ge", "http://earth.google.com/kml/2.0");
            ns.AddNamespace("ad", "urn:oasis:names:tc:ciq:xsdschema:xAL:2.0");

            XmlNode statusCodeNode =
                topNode.SelectSingleNode("//ge:kml/ge:Response/ge:Status/ge:code", ns);

            XmlNode coordNode =
                topNode.SelectSingleNode("//ge:kml/ge:Response/ge:Placemark/ge:Point/ge:coordinates", ns);

            XmlNode zipNode =
                topNode.SelectSingleNode("//ge:kml/ge:Response/ge:Placemark/ad:AddressDetails//ad:PostalCodeNumber", ns);

            if (statusCodeNode == null)
                return "Geocode error: Cannot find status node in response";
            else if (statusCodeNode.InnerText != Status_ok || coordNode == null)
                return "Geocode error: " + statusCodeNode.InnerText;
            else if (zipNode == null || string.Compare(zipNode.InnerText,
                zipcode.Substring(0, Math.Min(zipcode.Length, zipNode.InnerText.Length)), true) != 0)
                return Status_UnknownZipcode;
            else
            {
                ret = statusCodeNode.InnerText;

                // KML coordinates come back as "longitude,latitude,altitude"
                string[] points = coordNode.InnerText.Split(',');
                if (points.Length >= 2)
                {
                    longitude = double.Parse(points[0]);
                    latitude = double.Parse(points[1]);
                }
            }
        }
        catch (System.Exception ex)
        {
            return "Exception: " + ex.Message;
        }

        return ret;
    }
}
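
Calling the helper from your own code might look like the following (the Maps key is a placeholder):

string url;
double longitude, latitude;

string status = GoogleMaps.GetGeoCode("30004", "US", "<your Google Maps key>",
    out url, out longitude, out latitude);

if (status == GoogleMaps.Status_ok)
    Console.WriteLine("({0}, {1})", latitude, longitude);
else
    Console.WriteLine("Geocoding failed: " + status);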

MOSS BDC Design Studio

Eric Bowden has over 19 years of software development experience around enterprise and departmental business productivity applications.

Let me define the basic scenario first: I need to allow my SharePoint users to search Products within the AdventureWorks database using SharePoint search. Further, I want to allow users to search and view Products in the Business Data List web part.

As database administrator, I want to control access to the data. Therefore, I’ll expose the data from the Line of Business (LOB) application via stored procedures. This has some other advantages such as enabling me to tune SQL statements used to support the BDC outside of SharePoint, and it allows BDC configuration to be performed by resources that may not have expertise with SQL or the particulars of the LOB entity.

I created three stored procedures to achieve my goals (script is listed at the end of this post):

  • usp_Select_Products – to perform a search across all products filtering by Name and Product Number.
  • usp_Select_Product – to select out a specific Product by Product ID
  • usp_Select_ProductIds – to select out all product ids and the last modified date from the Product table.

These match up with the BDC method types Finder, Specific Finder, and ID Enumerator respectively. Now that the SQL tasks are out of the way, let's crack open the MOSS BDC Design Studio.

First, choose Connect Data Source to define how to reach the database.

Define the user and password. I am using the bdcuser account to which I have only granted exec rights to the three stored procedures listed above. One quirk, the error message “invalid connection string” may mean the password was entered incorrectly.

This leads me to the design view of the editor where I can drag any one of the stored procedures onto the Design surface.

The Entity Wizard appears and guides us first to name the entity. Let’s name it Product.

The Entity Wizard requires that we define the stored procedures for use as Finder, Specific Finder, and IdEnumerator for this entity.

Next, identify the column used to uniquely identify our entity.

Last step in the wizard; define the column which will be used to select a specific entity when presented in the Business Data List web part.

The newly defined Product entity now appears in the Design space.

Choose Edit Properties and define the Product ID as the primary key of the entity and the Name as the Title Column. Yes, the Entity Wizard did ask for this already.

Choose the Methods tab, select the Get_Product method, and define the input parameter as uniquely identifying the Product entity.

Set the default values for the input parameters of the Finder method to % (the SQL wildcard character).

Next, we must define how MOSS will connect to the LOB application. From the top menu, choose Edit-Properties.

Choose Build-Build BDC Xml to generate the Xml configuration file.

One optional step, we can inform the BDC as to which column may be used to determine the last time an entity was modified. This field enables the incremental crawl feature to skip entity records which have not been modified since the last crawl.

Choose File-Save BDC Xml to save the XML to file. Navigate to the Shared Services Administration page and choose Import application definition.

Once the import completes successfully, you can add a Business Data List web part to your web page and search for products by Name or Product Number.

The final step is to add the new LOB as a content source for use in SharePoint search. From Shared Services Administration, choose Configure Search Settings, Content Sources and Crawl Schedules, and add a new Content Source being sure to choose Business Data as the Content Source Type. Last, be sure to check Start full crawl of this content source before clicking ok.

And just for completeness, here is the script for the stored procedures. You have to create a bdcuser account and add that user to the AdventureWorks database before running this script.

go
create procedure dbo.usp_Select_Products (@Name nvarchar (50),
    @ProductNumber nvarchar(25))
as
begin
select
ProductID,
Name,
ProductNumber,
MakeFlag,
FinishedGoodsFlag,
Color,
SafetyStockLevel,
SafetyStockLevel as SafetyStockLevel1,
StandardCost,
ListPrice,
Size,
SizeUnitMeasureCode
from
Production.Product
where
Name like @Name and ProductNumber like @ProductNumber
end
go
create procedure dbo.usp_Select_Product (@ProductID int)
as
begin
select
ProductID,
Name,
ProductNumber,
MakeFlag,
FinishedGoodsFlag,
Color,
SafetyStockLevel,
SafetyStockLevel as SafetyStockLevel1,
StandardCost,
ListPrice,
Size,
SizeUnitMeasureCode
from
Production.Product
where
ProductID = @ProductID
end
go
create procedure dbo.usp_Select_ProductIDs as
begin
select
ProductID,
ModifiedDate
from
Production.Product
end
go
grant exec on dbo.usp_Select_ProductIDs to BDCUser
grant exec on dbo.usp_Select_Product to BDCUser
grant exec on dbo.usp_Select_Products to BDCUser

SharePoint Connector for Confluence

Kirk Liemohn is a Principal Software Engineer at ThreeWill. He has over 20 years of software development experience with most of that time spent in software consulting.

A few months ago, we and Atlassian announced the SharePoint Connector for Confluence, which impressed both customers and analysts. ThreeWill, a Microsoft Gold Certified Partner specializing in developing SharePoint-based solutions, helped design and implement the SharePoint Connector; in this post, we describe how we did it.

Integrating Content and Search Results with SharePoint

Have you ever needed to integrate an external system with SharePoint, showing content from each system within the other? What if you needed to integrate search between SharePoint and the external system? How do you keep the user experience seamless if the systems use different authentication mechanisms? Have you wondered if this can be done if the external system is written in Java?

If you answered “yes,” then read on. Along the way you will learn some of the internals of SharePoint 2007 web parts, Microsoft Office SharePoint Server (MOSS) 2007 enterprise search, and Microsoft Single Sign-on (SSO).

Overview

This blog entry discusses how three developers integrated MOSS 2007 with Confluence, an enterprise wiki, in about two months' time. It discusses the work behind implementing features for the SharePoint Connector for Confluence, as summarized in the table below.

Feature                Primary Technology
Content Embedding      Web Parts
Integrated Search      MOSS Enterprise Search
Single Sign-on (SSO)   Microsoft SSO Service

Content Embedding

The goal for content embedding is to show Confluence content within SharePoint and to show SharePoint content within Confluence. The approaches for this were:

  • Use Confluence plug-ins to show SharePoint content such as list data and document links within Confluence.
  • Use SharePoint web parts to show Confluence wiki pages and Confluence wiki page hierarchy within SharePoint.

Confluence Plug-In

The Confluence plug-in allows the user to enter SharePoint specific Confluence wiki markup within a Confluence wiki page. The plug-in utilizes the built-in SharePoint web services (primarily Lists.asmx) to retrieve list content. A simple example of the markup might be the following.

{sp-list:test doc library|doc}

The resulting section of the wiki page would look like:

One interesting aspect of the plug-in was a technique used to provide an external link for editing a document in a SharePoint document library. The link is shown in a Confluence wiki page, but allows the user to click the link, edit the document, and save the changes back to SharePoint as seamlessly as it is done within a SharePoint document library.

SharePoint Web Parts

Two web parts were created to show Confluence content within SharePoint:

  • Confluence Page Web Part
  • Confluence Space Pages Web Part

Both web parts use Confluence web services to obtain Confluence content. After a section on each web part, we discuss how the web parts are administered.

Confluence Page Web Part

The Confluence Page web part shows a Confluence wiki page within a SharePoint web part.

The example above shows a Confluence wiki page that displays news via RSS. The high-level work of displaying the page content within the web part is shown below:

RemotePage page = ConfluenceHelper.GetConfluencePage(this.PageId);
this.TitleUrl = page.url;
pageContent = ConfluenceHelper.RenderContent(this.SpaceKey, this.PageId);
pageContent = GetCleanPageContent(pageContent);
_literalPage.Text = pageContent;

This is simplifying things a bit, but the ConfluenceHelper class basically has several methods that wrap Confluence web service proxy calls. In this case, the GetConfluencePage wraps the Confluence getPage web method and the RenderContent wraps the Confluence renderContentAsUser web method. In addition, regular expressions are used to properly prefix all URLs within the Confluence page content with the Confluence host and to remove some inline CSS styles that were conflicting with the rest of the SharePoint page. Finally, the pageContent is simply applied to the Text property of a Literal control. All of this happens within the OnPreRender override of the web part.
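
As a rough illustration of the kind of clean-up GetCleanPageContent performs, here is a sketch; the regular expression patterns are assumptions, not the shipped code:

using System.Text.RegularExpressions;

static string GetCleanPageContent(string pageContent, string confluenceHost)
{
    // Turn root-relative href="/..." and src="/..." into absolute Confluence URLs.
    pageContent = Regex.Replace(pageContent,
        "(href|src)=\"/",
        "$1=\"" + confluenceHost + "/",
        RegexOptions.IgnoreCase);

    // Remove inline <style> blocks that conflict with the SharePoint page CSS.
    pageContent = Regex.Replace(pageContent,
        "<style[^>]*>.*?</style>",
        string.Empty,
        RegexOptions.IgnoreCase | RegexOptions.Singleline);

    return pageContent;
}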

The image above also shows an “Edit Confluence Page” verb on the web part menu. This is simply done by overriding the get of WebPartVerbCollection within the web part as shown below.

public override WebPartVerbCollection Verbs
{
    get
    {
        // See if the user has the ability to edit the page - if so, provide a link
        List<WebPartVerb> verbs = new List<WebPartVerb>();
        if (_pageId != -1)
        {
            try
            {
                if (ConfluenceHelper.HasEditPermission(_pageId))
                {
                    WebPartVerb editPageVerb = new WebPartVerb(this.ID + "_editPageVerb",
                        String.Format("window.open('{0}/pages/editpage.action?pageId={1}','EditPage')",
                        ConfluenceConfiguration.Instance.ConfluenceSite, _pageId));
                    editPageVerb.Description = "Edit the Confluence wiki page.";
                    editPageVerb.Text = "Edit Confluence Page";
                    editPageVerb.ImageUrl = Constants.WebPartTitleIconImageUrl;
                    verbs.Add(editPageVerb);
                }
            }
            catch (Exception ex)
            {
                TraceHelper.TraceError("ConfluencePageWebPart", "get_Verbs",
                    "Unable to check edit permissions.", ex);
            }
        }
        WebPartVerbCollection allverbs = new WebPartVerbCollection(base.Verbs, verbs);
        return allverbs;
    }
}

Once again we use our web service proxy wrapper, this time to determine if the user can edit the page. Note that caching should be done on operations (such as this) within the get of WebPartVerbCollection as it may be called multiple times when a single web part is displayed.

A more interesting aspect of the web parts is the editor part. Editor parts are used to provide a custom editor for web part properties. Without an editor part, custom web part properties are typically text boxes shown in the “Miscellaneous” section when you edit your web part properties (enumeration web part properties do show as a drop down). In this case, we want to allow the user to select a page for the web part. Here is what our editor part looks like:

Confluence has the concept of spaces which are similar to SharePoint sites. Every Confluence wiki page lives within a space and a Confluence installation can contain many spaces. The editor part to the left shows a drop-down list of spaces with a tree view of pages within the selected space. Since pages can contain other pages, this is a hierarchical list and shown as a tree view.

To make this work, we override the CreateEditorParts method on the web part as shown below.

public override EditorPartCollection CreateEditorParts()
{
    // The EditorPartCollection object is immutable so we have to use
    // the following approach to return the base editor part collection
    // along with our editor part.

    // Create a list of editor parts containing our editor part
    List<EditorPart> editorParts = new List<EditorPart>(1);
    EditorPart part = new ConfluencePageEditorPart();
    part.ID = this.ID + "_confluencePageEditorPart";
    editorParts.Add(part);

    // Return a new EditorPartCollection containing the base editor parts
    // and our editor part
    return new EditorPartCollection(base.CreateEditorParts(), editorParts);
}

As you can see, it references a ConfluencePageEditorPart. This class inherits from EditorPart which is an extension of a Panel (a WebControl). In this particular editor part, we override CreateChildControls and OnPreRender to display the content you see in the image further above. We happen to use our own custom DropDownList control to encapsulate the list of Confluence spaces and a custom TreeView control to encapsulate the pages within a space. This was done because they are both used in the Confluence Space Pages Web Part discussed further below.

The primary difference between a UserControl and an EditorPart is that the EditorPart must be able to communicate with the web part. This is done through two methods: SyncChanges and ApplyChanges.

A web part is in charge of storing and retrieving the web part property values. SyncChanges is used to take the web part property values within the web part and provide them to the EditorPart so it can properly display the current settings.

public override void SyncChanges()
{
    ConfluencePageWebPart pageWebPart = this.WebPartToEdit as ConfluencePageWebPart;
    if (pageWebPart != null)
    {
        // Store the synced values to be handled later in OnPreRender
        _syncedPageRenderMode = pageWebPart.PageRenderMode.ToString();
        _syncedSpaceKey = pageWebPart.SpaceKey;
        _syncedPageId = pageWebPart.PageId;
    }
}

Typically, the EnsureChildControls method can be called to make sure that CreateChildControls is already called and then the properties can sometimes be applied directly to controls within the editor part. However, in our case the processing was a little more complex, so we stored the property values in member variables to be used later within OnPreRender.

The ApplyChanges method is the corollary to SyncChanges. It allows the editor part to provide web part property values to the web part.

public override bool ApplyChanges()
{
    this.EnsureChildControls();

    ConfluencePageWebPart pageWebPart = this.WebPartToEdit as ConfluencePageWebPart;

    if (pageWebPart != null)
    {
        pageWebPart.PageRenderMode = (ConfluencePageWebPart.RenderMode)
            (Enum.Parse(typeof(ConfluencePageWebPart.RenderMode), _pageRenderModeDropDownList.SelectedValue));
        pageWebPart.SpaceKey = _spaceDropDownList.SelectedValue;
        if (_pagesTreeView.SelectedValue != null && _pagesTreeView.SelectedValue.Length > 0)
        {
            pageWebPart.PageId = Int64.Parse(_pagesTreeView.SelectedValue);
            pageWebPart.Title = _pagesTreeView.SelectedNode.Text;
        }
        else
        {
            pageWebPart.PageId = -1L;
            pageWebPart.Title = DefaultTitle;
        }
    }

    return true;
}

Confluence Space Pages Web Part

The Confluence Space Pages web part shows the list of pages within a single Confluence space as shown below. The tree view of pages should look familiar. It uses the same Control created and used by the editor part for the Confluence Page web part.

The corresponding editor part for this web part simply allows the user to select a space from the space drop-down list. It also uses the same custom control used by the editor part for the Confluence Page web part to show the spaces.

The option to show the space list allows the user to show the drop down list of spaces within the body of the web part. This web part uses the same concepts discussed for the Confluence Page web part above including communication between the editor part and the web part.

Web Part Administration

In order to use the web parts, we need to know how to access Confluence. This could have been done through web part properties, but we decided to configure how to access Confluence within site settings.

We did this by creating a custom administration page. To provide access to the page, we needed to create a custom action within our feature’s “elements.xml” file.

<CustomAction Id="ConfluenceSettings" GroupId="SiteAdministration" Location="Microsoft.SharePoint.SiteSettings" Sequence="33300" Title="Confluence Settings" Rights="ManageWeb,BrowseUserInfo">

  <UrlAction Url="_layouts/Atlassian/ConfluenceSettings.aspx" />

</CustomAction>

Describing the details behind the deployment of the features and SharePoint solution file is beyond the scope of this blog entry, but the above custom action within our deployed feature results in a site settings page with our “Confluence Settings” link.

The administration page itself is a SharePoint application page that inherits from LayoutsPageBase. You can use existing application pages under TEMPLATE\LAYOUTS within the 12 hive as a starting point. Simply find an existing page that is similar to what you want and copy it.

For your code behind, you will want to declare protected page variables for each named control you want to access within your code, for example:

protected InputFormTextBox ConfluenceSite;
protected InputFormCheckBox UseSsoCheckBox;

Then you can populate the controls within OnLoad, OnPreRender, etc. and update SharePoint button click event handlers. In our case we stored the Confluence settings for the site within a property bag within the site. Each SharePoint site has a property bag you can use to store data. This allows us to store information without having to use a hidden list or a separate database.

Here is the code for saving the property values.

private void SetSiteProperty(string propertyName, string value)
{
if (SPContext.Current.Web.Properties.ContainsKey(propertyName))
{
SPContext.Current.Web.Properties[propertyName] = value;
}
else
{
SPContext.Current.Web.Properties.Add(propertyName, value);
}
SPContext.Current.Web.Properties.Update();

}

Retrieving the property values uses similar code, but we recursively look to the parent site’s property bag if we cannot find the property at our site. This allows for simple inheritance of properties on sub-sites.

private string GetSiteProperty(string propertyName)
{
return GetSiteProperty(SPContext.Current.Web, propertyName);
}
private string GetSiteProperty(SPWeb site, string propertyName)
{
string propertyValue = String.Empty;
if (site.Properties.ContainsKey(propertyName))
{
propertyValue = site.Properties[propertyName];
}
else
{
if (!site.IsRootWeb)
{
// Look in the parent site if this property is not set in this site
propertyValue = GetSiteProperty(site.ParentWeb, propertyName);
}
}

return propertyValue;

}

Integrated Search

The goal for integrated search is to have both SharePoint and Confluence show the same search results containing content from both systems.

Integrated Search Overview

Integrated search was implemented by setting up MOSS to search Confluence and redirecting the Confluence search to the MOSS search site. The latter was done rather easily through a custom Confluence plug-in setting, but the former is discussed below.

Initially we considered writing a custom Protocol Handler to search Confluence, but quickly realized that using the Web Site content source within MOSS was going to be much quicker to implement. Unfortunately, we found that the Web Site search did not work if the site required forms-based authentication, which is a typical setup for Confluence. Fortunately, our network engineer, Jerry Rasmussen, found a knowledge base article discussing the problem, along with a solution (via a hot fix). Note that the hot fix is part of SP1, but there are still no data entry screens for forms-based authentication.

The manual setup for search was fairly involved, requiring the user to configure several entities, one of which required editing an XML file and loading it through an AddRule.exe tool described in the knowledge base article. This was because the hot fix only updated the API, not any of the data entry screens. Due to this involved process, we created custom search configuration screens.

In the end, using the hot fix provided in the KB article worked but was not a complete solution. This is because web site searches from MOSS do not store ACL information when the content is being crawled by the search engine. The solution was to implement a custom security trimmer. Note that Business Data Catalog (BDC) searches require a custom security trimmer as well, but SharePoint, file share, and Exchange public folder searches store ACLs and therefore do not need a custom security trimmer.

The custom search configuration screens and custom security trimmer are discussed below.

Custom Search Configuration

What we needed was an easy way to configure the following search items:

  • Content Source
  • Crawl Rule (with forms-based authentication configuration)
  • Scope
  • Registered Security Trimmer

To allow for the searching of multiple Confluence sources, we decided to create a screen for managing the list of Confluence Search Sources and for creating an individual one. As done with the site administration screen for the web parts, we need to have a custom action that references our configuration screen.

<CustomAction Id="ConfluenceSearchSettings" GroupId="Search" Location="Office.Server.ServiceProvider.Administration" Sequence="33300" Title="Confluence Search Settings">

Unlike the web parts, this custom action is defined within a feature scoped to the farm level instead of the site collection. The result of this custom action is the link shown within the Shared Services Administration home page under the “Search” group. The Shared Services Administration home page is found by going to SharePoint 3.0 Central Administration and clicking on the SSP link (e.g., “SharedServices1”).

Similar to the site administration screen, we used existing SharePoint administration screens to create our two administration screens.

To keep the created entities together, a tagging mechanism was used. The ContentSource class has a Tag property that we used to store a semicolon-delimited string of IDs for the security trimmer, crawl rule, and scope.

The more difficult aspect of this was figuring out how to register a security trimmer programmatically. Documentation on how to do this with stsadm is easy to find, but figuring out how to do it without shelling out to stsadm was a little more challenging.

Custom Security Trimmer

A custom security trimmer is a .NET interface that is run at query time to determine the URLs within the search results to which the current user has access. The primary method for a security trimmer is the CheckAccess method.

public BitArray CheckAccess(IList<string> documentCrawlUrls, IDictionary<string, object> sessionProperties)

The security trimmer implementation takes a subset of the provided documentCrawlUrls by culling out pages we did not want to show in search results; this was a way to implement a more sophisticated exclusion crawl rule than the search configuration allows. Then we call a Confluence web service that takes in a set of URLs and returns an array of Booleans. Finally, we merge the skipped URLs with the Confluence permissions and return a BitArray.
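
A sketch of that flow; ShouldSkipUrl and the HasViewPermission wrapper are placeholders for the exclusion logic and the Confluence web service call described above:

using System.Collections;
using System.Collections.Generic;

public BitArray CheckAccess(IList<string> documentCrawlUrls,
    IDictionary<string, object> sessionProperties)
{
    var results = new BitArray(documentCrawlUrls.Count); // defaults to all false

    // 1. Cull URLs we never want in results (a richer exclusion rule than
    //    the crawl rule configuration allows).
    var urlsToCheck = new List<string>();
    var positions = new List<int>();
    for (int i = 0; i < documentCrawlUrls.Count; i++)
    {
        if (!ShouldSkipUrl(documentCrawlUrls[i])) // placeholder exclusion check
        {
            urlsToCheck.Add(documentCrawlUrls[i]);
            positions.Add(i);
        }
    }

    // 2. One Confluence web service call: a Boolean per URL for the current user.
    bool[] allowed = ConfluenceHelper.HasViewPermission(urlsToCheck.ToArray()); // placeholder wrapper

    // 3. Merge: skipped URLs stay false; the rest take Confluence's answer.
    for (int j = 0; j < positions.Count; j++)
        results[positions[j]] = allowed[j];

    return results;
}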

Since this security trimmer runs at query time, performance must be a consideration. However, the security trimmer is only provided a relatively small number of URLs if the ratio of URLs accessible by the user to total URLs is relatively high. Basically the query engine wants to show enough results to fill a page. If the search results page shows 10 results per page, the query engine may provide 15 URLs at a time. If the user has access to less than 10, another 15 may be requested. When the user requests the next search result page, the security trimmer is invoked again.

If you write your own security trimmer, we recommend that you utilize .NET tracing for debugging purposes. You can also break into the code with a debugger, but you will likely need to include a Debugger.Break call for the debugger to attach properly.

Single Sign-On (SSO)

One of the primary goals of the Confluence and SharePoint integration effort was to make sure the products could work together seamlessly, with a minimal number of authentication requests presented to the user. Another goal was to implement this without requiring additional products or intrusive configuration. We chose to use the SSO service that comes with MOSS as a base for SSO functionality and to rely on custom code in the Confluence web parts and the related Confluence Administrative Settings page.

SSO Overview

The following diagram and number references below it provide a high-level view of how SSO works between SharePoint and Confluence.

(1) The client browser will typically be configured with Windows Authentication/NTLM to access SharePoint.

(2) The Confluence web parts use the default MOSS SSO provider to look up stored credentials for “Confluence”.

(3) The Confluence web parts will use the returned SSO credentials to access a Confluence web service to retrieve a list of pages or content for a specific page.

(4) The Confluence page content will return an SSO ticket and related XMLHTTP JavaScript methods to retrieve and forward Confluence SSO credentials from the Client to Confluence. This is used for seamless access to images stored within Confluence or when the user clicks links within the web part that access Confluence.

(5) The Confluence page will begin rendering/processing the XMLHTTP JavaScript methods on the Client.

(6) The Client will “redeem” the SSO ticket to retrieve SSO credentials and use the credentials to authenticate the client.

(7) The Client completes rendering content, including images, links, etc. within the context of the SSO credentials.

Microsoft SSO Service

The Microsoft SSO Service provides a way for a user to provide an individual set of credentials to use for back-end service/web service impersonation. The individual set of credentials is stored and associated with the user's Windows-based credentials.

Why Did We Choose the Microsoft SSO Service?

  • The SSO Service is built into MOSS and does not require additional products.
  • The service immediately provides the ability to pass individual user credentials to the Confluence web service. This automatically allows Confluence to trim returning content within the context of the correct user.
  • The service supports the ability to plug in a different/custom SSO provider implementation.

Pros/cons of the Microsoft SSO Service:

Pros:
  1. Service and basic SSO provider implementation included with MOSS.
  2. Simple, yet powerful.
  3. Pluggable way to replace the out-of-the-box SSO provider; works very well for providing credentials to a web service.
  4. Fairly simple to configure (depending on server configuration).

Cons:
  1. Must use Windows Authentication.
  2. Requires credentials to be recaptured and stored in a separate repository (security risk). (Note: to eliminate an extra prompt for credentials, SSO credentials could be generated by an automated process.)
  3. Requires additional code to allow the browser to receive and impersonate credentials.

SSO Configuration

The following references helped to provide the specific configuration steps required to get the SSO service up-and-running for a Confluence-SharePoint integration environment. The first two links provide the basic Microsoft SSO service reference material. The third link helped to provide insight on the Microsoft SSO service itself and the relationship between service accounts, roles, etc.

SSO Code Samples

The following sections reference the areas of custom code and configuration used to implement a simple SSO solution. Note that some code in the sections below has been omitted (denoted with “…”). Most of the omitted code is unrelated to SSO functionality and has been left out to avoid confusion and clutter.

Client JavaScript Rendering

The Confluence Page web part has SSO code to render client script to the browser. This is done within the web part’s CreateChildControls override. The code checks to see if the Microsoft SSO service has been configured for the “Confluence” application. If it has been configured, it uses the built-in SSO provider to reserve a credential ticket. This ticket is a generated string token that can be used later to retrieve a user’s credentials.

try
{
    if (ConfluenceConfiguration.Instance.UseSso)
    {
        if (!this.Page.ClientScript.IsStartupScriptRegistered("Confluence"))
        {
            // Reserve a credential ticket from the SSO Service
            string ssoTicket = null;
            Credentials.ReserveCredentialTicket(
                Credentials.SingleSignonTicketType.Default,
                ref ssoTicket);
            this.Page.ClientScript.RegisterClientScriptBlock(
                typeof(System.Web.UI.Page), "Confluence",
                String.Format(SsoClientScriptFormatted,
                HttpUtility.UrlEncode(ssoTicket)), true);
        }
    }
}
catch (SingleSignonException singleSignOnEx)
{
    // Swallow this exception; we will not be generating a ClientScriptBlock for
    // client-side authentication...
}

The code above renders the following JavaScript to the browser. Most of it is in the SsoClientScriptFormatted constant, with the exception of the SSO ticket value, which is substituted into the request URL.

var xmlHttp;
function createXMLHttpRequest()
{
    if (window.ActiveXObject)
    {
        xmlHttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    else if (window.XMLHttpRequest)
    {
        xmlHttp = new XMLHttpRequest();
    }
}
function getCredentials()
{
    // {0} is replaced server-side with the URL-encoded SSO ticket
    url = "_layouts/Atlassian/ConfluenceSettings.aspx?getCredentials=true&id={0}";
    xmlHttp.open('GET', url, true);
    xmlHttp.onreadystatechange = sendAuthRequest;
    xmlHttp.send(null);
    return false;
}
function sendAuthRequest()
{
    if (xmlHttp.readyState == 4)
    {
        xmlHttp.open('GET', xmlHttp.responseText, false);
        xmlHttp.send(null);
    }
}
createXMLHttpRequest();
getCredentials();

createXMLHttpRequest is called as the browser renders one of the Confluence web parts. This call sets up the xmlHttp object that is appropriate for the browser (Internet Explorer 6/7, Firefox, etc.). The next call, getCredentials, makes a separate HTTP request to ConfluenceSettings.aspx to redeem the ticket/token that was originally sent to the browser. Once the call to ConfluenceSettings.aspx returns, another request is sent to Confluence with the specific user’s credentials. This establishes a session (and authenticates the user) between Confluence and the user’s browser session.

Confluence Page Request

In order for the Confluence Page web part to render the Confluence page content, the web part must request the data from Confluence using the SSO credentials. This is done within the OnPreRender override of the web part.

try
{
    RemotePage page = ConfluenceHelper.GetConfluencePage(this.PageId);
    ...
}
catch (SingleSignonException singleSignOnEx)
{
    int ssoErrorCode = singleSignOnEx.LastErrorCode;
    if (ssoErrorCode == SSOReturnCodes.SSO_E_CREDS_NOT_FOUND)
    {
        try
        {
            ISsoProvider provider = SsoProviderFactory.GetSsoProvider();
            string strSSOLogonFormUrl = provider.GetCredentialManagementURL(
                ConfluenceConfiguration.Instance.SsoApplicationName).AbsoluteUri;
            pageContent = "Click <strong><a href=\"" + strSSOLogonFormUrl +
                "\">here</a></strong> to save your credentials for the Confluence Application.";
        }
        catch
        {
            pageContent = "Credentials not found.";
        }
    }
}

The code checks to determine if SSO credentials have been stored for the current user. If they have not been, the call to GetConfluencePage will throw an exception. This exception will be handled by the SingleSignonException handler, which provides a link so the end user can provide SSO credentials.

SSO Settings and Authentication Ticket Verification Code

The ConfluenceSettings.aspx page has been set up to respond with a Confluence login URL if a particular query string is provided with an appropriate token. This is used by the JavaScript emitted further above.

Below we show snippets of the OnLoad method for the page. This method pulls the credentials for the current user with a call to the GetCredentials method. A separate call is made to GetCredentialsUsingTicket. If the username and password returned from the two calls match, the credentials are returned to the browser through the JavaScript request. Note that to keep the code snippet smaller, we skip some interop code required to extract the username and password from the SSO provider.

The call to GetCredentialsUsingTicket is executed with elevated privileges. This had to be done because the call requires SSO administrative-level privileges. This also requires the SSO administrative account to be a site collection administrator.

// Check to see if the request is for ticket redemption for user credentials
if (Page.Request.QueryString.ToString().ToLower().Contains("getcredentials"))
{
    // Make sure the user has been authenticated and pull the SSO
    // credentials for the ticket and verify they match the current user
    ...
    string ssoTicket = HttpUtility.UrlDecode(Page.Request.QueryString["id"]);
    ssoTicket = ssoTicket.Replace(" ", "+"); // Loss of "+" during encoding/decoding...

    SPSecurity.RunWithElevatedPrivileges(
        delegate()
        {
            using (SPSite site = new SPSite(Web.Site.ID))
            {
                using (SPWeb web = site.OpenWeb())
                {
                    // Retrieve the credentials using the ticket
                    Sso.Credentials.GetCredentialsUsingTicket(
                        0,
                        ConfluenceConfiguration.Instance.SsoApplicationName,
                        ssoTicket,
                        ref rgTicketCreds);
                }
            }
        });

    if (userName != string.Empty && password != string.Empty)
    {
        if (userName == rgTicketCreds[0] && password == rgTicketCreds[1])
        {
            Page.Response.Write(string.Format(
                ConfluenceConfiguration.Instance.ConfluenceSite +
                "/login.action?os_username={0}&os_password={1}&login=Log+In&os_destination=",
                rgTicketCreds[0], rgTicketCreds[1]));
        }
    }
    Page.Response.End();
}
Summary

We covered the primary points of content embedding (especially web parts), integrated search, and single sign-on. We didn’t have the space here to cover some other aspects of this project, such as how we packaged the SharePoint Connector for Confluence into two SharePoint features and provided an MSI for installation. Nor did we get into our agile process for working with the Atlassian team on the other side of the US, not to mention their development team in Australia, which resulted in a couple of late nights!

We would like to thank Lawrence Liu and Deb Bannon of Microsoft for introducing us to Atlassian. We would also like to thank Jonathan Nolen of Atlassian for managing the requirements and giving guidance as well as Brendan Patterson who wrote the Confluence plug-in for SharePoint.

About the Authors

Kirk Liemohn – Kirk is a Principal Software Engineer with ThreeWill. His recent project experience includes enterprise search projects as well as a business analysis portal.

Chris Edwards – Chris is a Senior Software Engineer with ThreeWill. Chris’ project roles have ranged from development/technical lead to development resource. He is certified as MCSD using Microsoft .NET and as MCTS: SharePoint Services 3.0, Application Development. Chris manages resource links related to WSS at http://wssresourceguide.com.


Visual Studio Tools for Office

Pete is a Director of Technology at ThreeWill. Pete’s primary role is driving the overall technology strategy and roadmap for ThreeWill. Pete also serves as ThreeWill’s Hiring Manager and is constantly looking for new talent to join the ThreeWill family.

As part of the recent ThreeWill “Next Level” seminar at Microsoft in Alpharetta, Eric Bowden and I showed a Word 2007 Visual Studio Tools for Office Second Edition (VSTO SE) Add-in that uses WSS 3.0 Web Services to generate a basic status report by pulling data from a custom list and a task list. Once the data is retrieved from the SharePoint lists, a custom UserControl hosted in the Office Task Pane updates Word 2007 Content Controls and creates a task table in the Word document in response to user events. Finally, a custom Ribbon tab enables performing the same actions from the Ribbon as from the custom Task Pane.

Several people asked for the code, so here are some highlights. All of the code is available here.

First, you’ll need Word 2007, Visual Studio 2005 and VSTO SE. To install VSTO SE, download it from the Office Developer Center. Rather than rehash a how-to on creating a VSTO SE Add-in, check out the articles, code samples, and how-tos on the VSTO SE site. A basic Add-in, with a custom Task Pane and Ribbon similar to the “Next Level” demo, can be created by following the steps in the samples and how-tos found there.

Once I had a basic Add-in completed, I created a Word document template (Weekly Project Status Report.docm) for the status report, added the Content Controls to the template and added the template to the Add-in project. To add Content Controls to a Word 2007 document you must use the Developer tab which is disabled by default. If this tab is not visible, click the Office Button, then “Word Options”, and check “Show Developer tab in the Ribbon” to enable the Developer tab.

After the Status Report template was added to the project, I created 2 SharePoint lists. The first list, a custom list named Projects, should have the following column definitions as displayed in the Settings menu for the list.
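Judging from the field names used in the code below, the Projects list needs columns along the lines of Project Name, Has Extranet (a Yes/No column), Project URL, and Team Lead, in addition to the standard Title column; treat these as inferred rather than exact definitions.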

The second list, a Task list named ProjectTasks, should have these column definitions as displayed in the Settings menu for the list.

Then I added a Web Service reference to the Lists.asmx web service, /_vti_bin/lists.asmx, of the site that contained the project lists. Retrieving the Project list data is a straightforward call to the lists.asmx web service. Once the XmlNode is returned, simply walk the child elements and add each item to a List<Project> collection.

Lists lists = new Lists();
lists.PreAuthenticate = true;
lists.Credentials = CredentialCache.DefaultCredentials;
lists.Url = m_wssListServiceUrl;
XmlNode list = lists.GetListItems("Projects", null, null, null, null, null, null);
foreach (XmlNode node in list.ChildNodes[1].ChildNodes)
{
    if ((node.Attributes != null) && (node.Attributes["ows_Title"] != null))
    {
        string linkTitle = node.Attributes["ows_LinkTitle"].Value;
        string projectName = node.Attributes["ows_Project_x0020_Name"].Value;
        string extranetURL = node.Attributes["ows_Has_x0020_Extranet"].Value == "1"
            ? node.Attributes["ows_Project_x0020_URL"].Value
            : "";
        string teamLead = node.Attributes["ows_Team_x0020_Lead"].Value;
        projects.Add(new Project(linkTitle, projectName, extranetURL, teamLead));
    }
}
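
The original post doesn’t include the Project class itself, so here is a minimal sketch consistent with the constructor and combo-box binding above; the property names are assumptions.

public class Project
{
    private readonly string m_linkTitle;
    private readonly string m_projectName;
    private readonly string m_extranetURL;
    private readonly string m_teamLead;

    public Project(string linkTitle, string projectName,
                   string extranetURL, string teamLead)
    {
        m_linkTitle = linkTitle;
        m_projectName = projectName;
        m_extranetURL = extranetURL;
        m_teamLead = teamLead;
    }

    public string LinkTitle { get { return m_linkTitle; } }
    public string ProjectName { get { return m_projectName; } }
    public string ExtranetURL { get { return m_extranetURL; } }
    public string TeamLead { get { return m_teamLead; } }

    //Controls what the data-bound combo box displays for each project
    public override string ToString() { return m_projectName; }
}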

The List<Project> collection is filled and the combo box on the StatusReportControl is bound to this data. Changes to the combo box update the controls in the custom Task Pane and the Content Controls contained in the Word document by calling SelectContentControlsByTag. Each named Content Control then updates the Range with the specific project item data similar to the following code.

//update content controls in document
if (Globals.ThisAddIn.Application.ActiveDocument.ContentControls.Count > 0)
{
    object controlRef = 1; //control references start with 1 not 0
    ContentControl projectNameControl =
        Globals.ThisAddIn.Application.ActiveDocument.
            SelectContentControlsByTag("ProjectName").get_Item(ref controlRef);
    projectNameControl.Range.Text =
        ((Project)comboBox1.SelectedItem).ProjectName.ToString();
    //other calls omitted
}

Notice that controlRef is not zero-based; Content Control references begin at 1. Also, the call to get_Item() is required since there can be multiple controls with the same tag, and passing the reference ensures you are updating the correct control.

The status report also enables adding a table of tasks from the ProjectTasks list. This call to the lists.asmx web service involves building a Query node to provide to the lists web service. To build the Query node, the user can select a date range that tasks were completed within. Using the Query node, the lists web service retrieves the tasks and passes the returned XmlNode variable to a method to convert the node into a table in Word.

XmlNode ndQuery = xmlDoc.CreateNode(XmlNodeType.Element, "Query", "");
//'DateCompleted' is an illustrative field name; use the completed-date
//field defined in the ProjectTasks list
string queryString =
    "<Where><And><Geq><FieldRef Name='DateCompleted' /><Value Type='DateTime'>";
queryString +=
    tasksCompletedAfter.ToString("s", CultureInfo.InvariantCulture) + "</Value></Geq>";
queryString += "<Leq><FieldRef Name='DateCompleted' /><Value Type='DateTime'>";
queryString += tasksCompletedBefore.ToString("s", CultureInfo.InvariantCulture);
queryString += "</Value></Leq></And></Where>";
ndQuery.InnerXml = queryString;

XmlNode ndListItems = listService.GetListItems(taskListGuid, null, ndQuery,
    ndViewFields, null, ndQueryOptions, null);
taskTable = CreateActivitiesDataTable(ndListItems);

Finally, the Ribbon button definitions in NextLevelRibbon.xml have “onAction” attributes that call code similar to that listed above. The custom Ribbon buttons simply change the contents of the project name Content Control and add the tasks table based on the date interval selected.
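
As a rough sketch, a Ribbon onAction callback wired up in NextLevelRibbon.xml might look like the following; the method name and the m_selectedProject field are assumptions for illustration (Office aliases Microsoft.Office.Core, and Word aliases Microsoft.Office.Interop.Word).

public void OnInsertProjectName(Office.IRibbonControl control)
{
    object controlRef = 1; //Content Control references are 1-based, as noted above
    Word.ContentControl projectNameControl =
        Globals.ThisAddIn.Application.ActiveDocument.
            SelectContentControlsByTag("ProjectName").get_Item(ref controlRef);

    //m_selectedProject is a hypothetical field tracking the project
    //currently selected in the task pane's combo box
    projectNameControl.Range.Text = m_selectedProject.ProjectName;
}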

There’s lots of room for improvement and enhancements to this Add-in sample, but hopefully this shows some of the power of combining SharePoint list data, or any other external data source, within a familiar environment like Word to enable users and data to work together better.


Effective SharePoint Reporting

Kirk Liemohn is a Principal Software Engineer at ThreeWill. He has over 20 years of software development experience with most of that time spent in software consulting.

SharePoint out of the box does not have a solution for reporting on its own data other than configuring list views or building data views. To report effectively on SharePoint data with SQL Server Reporting Services (SSRS), copy the data into SQL Server using SQL Server Integration Services (SSIS). Building a reporting database gives developers a powerful toolkit for building high-performance reports while off-loading the burden of report processing from SharePoint.

So how do you get the SharePoint data loaded into SQL Server so reports can be built and executed in SSRS? Simple. Write an SSIS package that executes the following steps:

  • Execute a small custom utility to extract the SharePoint data to XML files using the built-in ‘Lists’ web service
  • Delete the current reporting data in the database
  • Load each list’s XML data file into the corresponding table
  • Send an e-mail notification that the reporting data has been successfully loaded

Finished Control Flow Example

Small Utility to Generate the List XML Data Files (see the “Run SharePoint Extractor” task in the Control Flow above):

First, create custom views for each SharePoint list containing the data needed for reporting. Each view should be defined to return a set number of records so that multiple calls can be made to retrieve all the data from large lists. The views can also be used to extract only the data needed for reports instead of all the data in the list. Next, write a method named something like ExtractListData that takes the following parameters: string listName, string fileName, string elementName, string viewName. The ExtractListData method basically calls the SharePoint Lists web service’s GetListItems method to retrieve the data. Once you retrieve the data into the returned node of list items, there are several important things to do (a sketch follows the list):

  • Write out the root XML element
  • Scrub lookup values to other lists: keep only the ID by removing the “;#<data value>” portion of the column value
  • Scrub Person/Group values down to just the data part of the value by removing the ID and the “;#”
  • Rename the z:row element to drop the z: prefix, because the SSIS XML Source Data Flow Component will not load the data otherwise
  • Convert any number values to integers if they are defined as integers in the database
  • Write out the transformed rows
  • Write out the closing root XML element
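
Here is a minimal sketch of what ExtractListData might look like, using the same kind of Lists web service proxy as the earlier posts. The service URL, the IsLookupColumn/IsPersonColumn helpers, and the scrubbing regexes are illustrative assumptions rather than the original utility’s code, and the paging and integer-conversion steps are only noted in comments.

using System.Net;
using System.Text;
using System.Text.RegularExpressions;
using System.Xml;

static void ExtractListData(string listName, string fileName,
                            string elementName, string viewName)
{
    Lists lists = new Lists();
    lists.Credentials = CredentialCache.DefaultCredentials;
    lists.Url = "http://server/site/_vti_bin/lists.asmx"; //adjust to your site

    //A real implementation loops here: the custom view caps the number of
    //records returned, so large lists require multiple GetListItems calls
    XmlNode ndListItems = lists.GetListItems(listName, viewName,
        null, null, null, null, null);

    using (XmlTextWriter writer = new XmlTextWriter(fileName, Encoding.UTF8))
    {
        //Write out the root xml element
        writer.WriteStartElement(elementName + "List");

        //The returned structure is listitems -> rs:data -> z:row
        foreach (XmlNode node in ndListItems.ChildNodes[1].ChildNodes)
        {
            if (node.Attributes == null)
                continue;

            //Writing a new element name drops the z: prefix from z:row
            writer.WriteStartElement(elementName);
            foreach (XmlAttribute attr in node.Attributes)
            {
                string value = attr.Value;
                if (IsLookupColumn(attr.LocalName))
                    value = Regex.Replace(value, ";#.*$", "");   //keep the ID: "3;#Widget" -> "3"
                else if (IsPersonColumn(attr.LocalName))
                    value = Regex.Replace(value, @"^\d+;#", ""); //keep the data: "7;#Jane Doe" -> "Jane Doe"
                //(conversion of numeric columns to integers omitted for brevity)
                writer.WriteAttributeString(attr.LocalName, value);
            }
            writer.WriteEndElement();
        }

        //Write out the ending root xml element
        writer.WriteEndElement();
    }
}

//Hypothetical helpers: in practice, drive these from the reporting configuration
static bool IsLookupColumn(string columnName) { return columnName == "ows_Project"; }
static bool IsPersonColumn(string columnName) { return columnName == "ows_AssignedTo"; }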

Delete the Data from the Reporting Tables (see the “Delete All Records” task in the Control Flow above):

The design is to completely delete all the data in the SQL Server reporting tables and then reload it all from the XML data files generated above. Since the entire process happens rather quickly, I did not feel the need to do incremental updates given my performance requirements, but that may be necessary if there is a tremendous amount of data in the SharePoint lists. I had 16 lists, most with fewer than 1,000 records, and one with roughly 50,000 records.

Load Each List’s XML Data Files:

To create the SSIS XML Source Data Flow Components, you need an XML Schema file for each of the lists. I just opened the XML files generated above in Visual Studio 2005 and selected XML -> Create Schema for a quick and dirty way to get the XML Schema. The generated schema will need to be cleaned up a little, such as setting lookup columns to xs:integer.

Now you can create a new SSIS Data Flow task and put an XML Source component in it. Right-click and select Edit… on the XML Source to configure the settings including the schema file and the columns used to load the reporting database.

Next, drop a Row Count Data Flow Transformation and create a link between the XML Source and the Row Count tasks. Following that, drop an OLE DB Destination Data Flow Destination and configure it to connect to the reporting database and select the appropriate table. You also need to link all the columns from the XML Source columns to the reporting database table columns to define where the data gets loaded.

Data Flow Task Example:

Send Email to Notify Successful Data Load:

Drop a Send Email Task on the control flow and link it to the last Data Flow task. First, you will need to create an SMTP Connection to configure which email server to use to send the email.

Next, right-click on the task and select Edit… and select the SMTP connection created above and fill out the rest of the email information.
