How to Convert Strings to DateTime Data Types in Microsoft Flow

Rob Horton is a Senior Consultant and the Sustainment Practice Lead at ThreeWill. His experience includes over 20 years leading software architecture, design and development focusing on support tools, automation and e-commerce for large corporations and his own small businesses.


As you can imagine, a consulting firm does a lot of reporting. One of my responsibilities as the Solution Sustainment lead at ThreeWill is to provide monthly Sustainment usage reports to our clients. This typically involves massaging time-tracking reports into friendlier formats. The process can be time-consuming, so I set out to automate it and implement Microsoft Power BI reports.

I decided to use Microsoft Flow for the automation and was challenged to import the reporting data into Azure Table storage. Our reporting data does not contain unique fields that could easily be used as Row Keys and I wanted to make the process self-healing (avoid duplicate data while also being able to reprocess a report), so I needed to query the Table for existing entities by date. This required storing the date as a DateTime data type in the Table. If I didn’t need to query the Table by date, I could have simply added the date as a string and shaped it on the consumption side in Power BI.

The Problem

Flow does a lot of things well but doesn’t have an out-of-the-box function for converting strings to DateTime data types. I first needed to format my “m/d/yyyy” string as a standard DateTime string.

The Solution

I created three Compose actions for each date; for the start date these were dateReportStartMonth, dateReportStartDay and dateReportStart.

To calculate the dateReportStartMonth, I used the following expression:

substring(concat('0',split(body('Parse_JSON')?['report']?['Report Start Date'],'/')[0]),sub(length(split(body('Parse_JSON')?['report']?['Report Start Date'],'/')[0]),1),2)

Here, body('Parse_JSON')?['report']?['Report Start Date'] is the report's start date. This expression prepends a '0' to the month and takes the rightmost two characters so that every month is represented by two digits.

To calculate the dateReportStartDay, I used a similar expression:

substring(concat('0',split(body('Parse_JSON')?['report']?['Report Start Date'],'/')[1]),sub(length(split(body('Parse_JSON')?['report']?['Report Start Date'],'/')[1]),1),2)

Then I put the formatted DateTime string together in dateReportStart:

concat(split(body('Parse_JSON')?['report']?['Report Start Date'],'/')[2],'-',outputs('dateReportStartMonth'),'-',outputs('dateReportStartDay'),'T00:00:00Z')
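For reference, here is the same transformation expressed outside Flow — a small TypeScript sketch (the function name and details are mine, not part of the Flow solution) that converts an "m/d/yyyy" string into the same ISO-style DateTime string:

```typescript
// Convert an "m/d/yyyy" date string into "yyyy-MM-ddT00:00:00Z",
// mirroring the pad-and-take-rightmost-two logic of the Compose actions above.
function toTableDateTime(mdyyyy: string): string {
  const [month, day, year] = mdyyyy.split('/');
  // Prepend '0' and keep the rightmost two characters, as in the Flow expressions.
  const mm = ('0' + month).slice(-2);
  const dd = ('0' + day).slice(-2);
  return `${year}-${mm}-${dd}T00:00:00Z`;
}
```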

I also created dateReportEnd using similar expressions to format the report’s end date. With these dates formatted, I was able to query the Table with a Filter Query comparing each entity’s Date to the dateReportStart and dateReportEnd Outputs.
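The Filter Query uses Azure Table storage's OData comparison syntax; a sketch of what it looks like with the two Outputs substituted in (the literal dates here are illustrative placeholders):

```
Date ge datetime'2018-03-01T00:00:00Z' and Date le datetime'2018-03-31T00:00:00Z'
```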

The final step is to insert the formatted date into the Table as a DateTime typed value. This is done by adding the data type into the JSON payload just prior to the value as such:


{
    "Company": "<Company Output>",
    "Contract": "<Contract Output>",
    "Date@odata.type": "Edm.DateTime",
    "Date": "<Formatted Date Output>",
    "Ticket": "<Ticket Output>",
    "PartitionKey": "<PartitionKey Output>",
    "RowKey": "<guid() Output>"
}


(where Outputs are values from other Flow Actions)


I suspect Microsoft will continue to add expressions to Flow and include string-to-date conversion at some point, making the process above simpler. Still, with the current tools it is possible to format strings as acceptable DateTime inputs and use them both to query Azure Table storage and to populate typed entities in a Table.


Using PowerShell to Fix Custom Search Pages in SharePoint Online

Matthew Chestnut is a Senior Consultant at ThreeWill. He has over 20 years of software development experience around enterprise and departmental business productivity applications. He has a proven track record of quality software development, on-budget project management and management of successful software development teams.


A customer recently migrated from on-premises SharePoint 2013 to SharePoint Online. Unfortunately, the migration tool they used did not migrate their custom search pages correctly. With four custom pages for 40+ subsites, that’s 160+ pages that needed to be fixed. Fortunately, many of these subsite search pages were identical across the subsites, so scripting a solution to this problem was the approach I took.

Create the search page prototypes

The first step was to create an example of each search page to make certain the search refinement web part and content rollup/content search web part were configured correctly. The search pages are stored in the Site Pages document library and use the Wiki Page template, with a two-column layout, search refiner on the left and content search on the right.

The page prototypes I created included:

  • two corporate-level search pages that were identical across all subsites
  • two division-level search pages that were slightly different based on the four divisions

This resulted in ten search page prototypes (two corporate-level plus eight division-level).

Export the web parts

Now that I had working examples of search pages it was time to prepare for scripting by exporting the web parts to use for all the other search pages. The steps were:

  • edit the search page
  • click the drop-down context menu for each web part
  • choose the “Export…” option

This results in two web part files per page, "Refinement.webpart" and "Content Search.webpart".

Each pair of ".webpart" files contains the search configuration for one of the ten prototype search pages. I placed each pair of web part files into an appropriately named folder so I wouldn’t get them mixed up.

Identify the scripting technology to use

Here is a recap of what I needed to automate:

  • For each of the 40+ subsites…
    • For each of the four search pages…
      • Create a Wiki Page in the Site Pages document library
      • Set the Text Layout for the Wiki Page to “two columns”
      • Add the appropriate Web Parts to the Wiki Page, based on whether it is one of the two Corporate search pages or one of the eight Division search pages
        • Left column: add the Search Refinement web part
        • Right column: add the Content Search web part

There are several options for scripting a SharePoint Online solution. The blog article PowerShell Modules for Managing SharePoint Online does a great job of outlining the options available:

  1. Microsoft Online Services Sign-in Assistant
  2. SharePoint Online Management Shell
  3. Office Dev PnP PowerShell Module
  4. SharePoint Online Client-Side Object Model (CSOM)
  5. Azure AD Management Shell
  6. Security and Compliance Center

I’ve used the SharePoint Online Client-Side Object Model (CSOM) option before in solutions I’ve created for on-premise customers. This option exposes much of the SharePoint API and would’ve worked great.

However, I wanted a solution that was a bit more administrator-friendly and not so developer-friendly. The Office Dev PnP PowerShell Module fit that criterion perfectly. These commands use CSOM behind-the-scenes to perform provisioning and artifact management actions, and many of these commands can work against both SharePoint Online and SharePoint Server.

Using the Office Dev PnP PowerShell Module

You’ll need to install the following modules to prepare a machine to use PnP PowerShell:

PS> Install-Module -Name PowerShellGet -Force
PS> Install-Module SharePointPnPPowerShellOnline

More details about PnP PowerShell can be found in Microsoft's PnP PowerShell documentation.

Scripting the solution

Where am I now?

  • I have my search pages prototyped
  • The web part XML is saved to the local file system
  • I have the PnP PowerShell module installed for SharePoint Online

Now, it’s time to script the solution.

  • Connect to SharePoint Online
  • Get a handle to the SharePoint web (subsite) for division / plant
  • Create the wiki page in the Site Pages library
  • Add the web parts to the page

Below is the list of commands needed to create the search pages:

# connect to SharePoint Online
Connect-PnPOnline -Url "" -Credentials (Get-Credential)

# get the subweb for the division / plant
$subsite = "/sites/corporate/division/plant"
$web = Get-PnPWeb -Identity $subsite -ErrorVariable err -ErrorAction Continue
if ($err.count -gt 0 -or $web -eq $null) { throw "Web not found: $($subsite)" }

# Wiki Page to create
$wikiPageUrl = "/sites/corporate/division/plant/SitePages/SearchPageOne.aspx"

# check to see if the page already exists (CAML query on the page file name)
$query = "<View><Query><Where><Eq><FieldRef Name='FileLeafRef'/><Value Type='File'>SearchPageOne.aspx</Value></Eq></Where></Query></View>"
$sitePage = Get-PnPListItem -Web $web -List "Site Pages" -Query $query
if ($sitePage -ne $null) {
    Write-Host "Page already exists:" $wikiPageUrl
}
else {
    # create the Wiki Page with "Two columns" text layout
    Add-PnPWikiPage -Web $web -ServerRelativePageUrl $wikiPageUrl -Layout TwoColumns
    Write-Host "Page created:" $wikiPageUrl

    # from the local filesystem, get the web parts to add to the wiki page
    $pathRefinement = Join-Path $PSScriptRoot "webparts\Refinement.webpart"
    if ((Test-Path $pathRefinement) -eq $false) { throw "File not found: $($pathRefinement)" }
    $pathContentSearch = Join-Path $PSScriptRoot "webparts\Content Search.webpart"
    if ((Test-Path $pathContentSearch) -eq $false) { throw "File not found: $($pathContentSearch)" }

    # the Site Pages library has check out required enabled, so check out the file
    Set-PnPFileCheckedOut -Web $web -Url $wikiPageUrl

    # remove the web parts if they already exist; if they don't exist, no error is thrown
    Remove-PnPWebPart -Web $web -ServerRelativePageUrl $wikiPageUrl -Title "Refinement"
    Remove-PnPWebPart -Web $web -ServerRelativePageUrl $wikiPageUrl -Title "Content Search"

    # add the Refinement web part to the "left column", row 1 column 1
    Add-PnPWebPartToWikiPage -Web $web -ServerRelativePageUrl $wikiPageUrl -Path $pathRefinement -Row 1 -Column 1

    # add the Content Search web part to the "right column", row 1 column 2
    Add-PnPWebPartToWikiPage -Web $web -ServerRelativePageUrl $wikiPageUrl -Path $pathContentSearch -Row 1 -Column 2

    # now, check the page in with comments
    Set-PnPFileCheckedIn -Web $web -Url $wikiPageUrl -CheckinType MajorCheckin -Comment "Modified by $($PSCommandPath)"
}

To keep the example script as concise as possible I intentionally did not include much error checking. In the “production version” of the script I stored the subsite information (division, plant, etc.) in a comma-delimited (.csv) file. I executed the “create search page” commands in a loop for each of the entries in the file.


What I like about the PnP PowerShell module is the relative simplicity of the commands and scripts. Once the connection is established to SharePoint Online, the “PnP” commands work just like server-side PowerShell. There is no need to pass around a context variable like we need to do when using SharePoint Online CSOM. I think this makes this technology much more suitable for SharePoint administrators.

This script accomplished in minutes what would’ve taken me hours to do by hand. And, if changes in the refiners or content search were needed, no worries…just edit the web part XML as needed and re-run the script!


How to Display Number of Days Since a SharePoint List Item was Last Opened

Caroline Sosebee is a Software Engineer at ThreeWill. She comes to us with 20+ years of software development experience and a broad scope of general IT support skills.

For a recent SharePoint 2013 on-premises project I needed to show the number of days elapsed between the date opened and the current date anytime the item was displayed, whether it was on a list view or when viewing or editing the list item. I also had the twist that once the item was marked as ‘closed’, the number of days should be set permanently and not changed again, so that the user will have that value available when exporting to Excel without having to recalculate.

In order to accomplish this, I had to do two things. The first was to show the running days open whenever that field is shown in the UI. I determined that the easiest way to do this was to use Client Side Rendering (CSR) and a custom JavaScript file that I hooked in by updating the JSLink directly on my field. I found this to be pretty cool as I had no idea it could be added directly to a list column. This allowed me to add the link reference in just one place and then let SharePoint magically load the script and handle showing the field correctly via CSR, regardless of the page it is shown on. For more information on JSLink, which was introduced in SharePoint 2013, see this link (one of many).

This was also my first exposure to Client Side Rendering (CSR), so I claim no great depth of understanding. One of the best sites I found for explaining some of how it works is here and it helped me work out the pieces I needed in order to accomplish this task. By using CSR, I was able to override the default SharePoint behavior for the field and have it render my new value to the DOM.

The second part was to save the final elapsed number of days value back to the SharePoint list when the item was marked as closed. I did this via a workflow that I already had running for updating statuses and such. I just added this calculation to that workflow so that the list item was updated. I’m not going into further details on this piece as this post is primarily about how to render the running total to the UI.

Here are the pieces needed to get the elapsed days to work. I first added a Number field (ElapsedDays) to my list. I did not have to make this a calculated column or set it to read-only since the code that will be rendering the calculated value will overlay the display of the field everywhere, making it a read-only view by default. The only exception to this is in Quick Edit mode, where you will still be able to update this value if so desired. If you do NOT want the user to be able to ever enter a value or don’t have a need to save the final value back to the list, then you might want to consider making the field read-only.

I next added a JSLink reference (using PowerShell and the Microsoft.SharePoint.Client.dll) to the list column I had created. I did this by getting a reference to the list and the field, then setting the JSLink property on the field.


Since my script uses jQuery, I had to include a link to that as well as to my custom script.
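The JSLink property supports multiple files separated by a pipe character, and the ~site token resolves to the current web, so both jQuery and the custom script can be loaded from one property value. A hypothetical value (the folder and file names here are illustrative):

```
~site/SiteAssets/scripts/jquery.min.js|~site/SiteAssets/scripts/ElapsedDays.js
```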

I next created my script using some code I found on the web to help me get started (see links referenced above) and loaded it into SiteAssets in a scripts folder. It has two functions – the first runs on load and adds the template overrides (using CSR) to the field and the other does the calculations and returns a value.

In the function for adding the CSR overrides, I’m linking my custom function to View, NewForm, EditForm and DisplayForm because I need the calculated value to show everywhere the field appears. Doing this causes my function to run each time a page containing my field loads, displaying the value it returns. One thing to note is that it renders the value in a read-only format. I think it can be manipulated post-render to make it editable, but I didn’t dig into those capabilities as this was doing exactly what I needed.

In my CalcDays function, I’m calculating the elapsed number of days (not including the first day) and returning that value. I’m also handling the situation where there is an actual value already saved in this field. When this is the case, I simply return that value instead of the calculated value. There’s also a safety check just in case the item has been closed but the ElapsedDays field has not been updated yet. In that case, it will still calculate the value but will use closed date instead of current date for the calculation.

As you can see, this is a pretty basic script but it does exactly what I needed. It could easily be modified to handle more complex situations.

(function () {

    // Create an object that will have the context information for the field we want to render differently
    var fieldContext = {};
    fieldContext.Templates = {};
    fieldContext.Templates.Fields = {
        "ElapsedDays": {
            "View": CalcDays,
            "NewForm": CalcDays,
            "EditForm": CalcDays,
            "DisplayForm": CalcDays
        }
    };

    // Now register the changes to the field
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(fieldContext);
})();

// This function will calculate and return the number of days
function CalcDays(ctx) {

    // If the item is already closed (duration has been saved to the list), just return this saved value
    if (ctx.CurrentItem.ElapsedDays > 0) {
        return ctx.CurrentItem.ElapsedDays;
    }

    // If we have no open date we can't calculate, so return 0
    if (ctx.CurrentItem.DateOpened == "") {
        return 0;
    }

    // If we get here, the final elapsed days has not been saved yet, so we need
    // to calculate it for display
    var openDate = new Date(ctx.CurrentItem.DateOpened);

    // Default the closed date to the current date, with the time portion stripped
    var endDate = new Date();
    endDate.setHours(0, 0, 0, 0);

    // If we find a closed date, use it instead of the current date
    if (ctx.CurrentItem.DateClosed) {
        endDate = new Date(ctx.CurrentItem.DateClosed);
    }

    // Now calculate the number of days, rounding to no decimals.
    // If you need to include both beginning and ending dates, add 1 to the value below
    var dateDiff = Math.round((endDate - openDate) / 1000 / 60 / 60 / 24);
    return dateDiff;
}
So in summary, by using JSLink, Client Side Rendering and some custom JavaScript, you can show the actual number of elapsed days since something was opened (or any other date) in SharePoint. Pretty cool, huh?


Using Angular 4.0 with TypeScript 2.5 Best Practice

Matthew Chestnut is a Senior Consultant at ThreeWill. He has over 20 years of software development experience around enterprise and departmental business productivity applications. He has a proven track record of quality software development, on-budget project management and management of successful software development teams.

When using Angular 4 with TypeScript 2.5 and the SharePoint Patterns and Practices JavaScript Core Library 2.0.8 (sp-pnp-js), it is good practice to cast the value returned by async / await methods to the appropriate data type. You can then use the properties of the returned object in your code. If you don’t use a cast, in some situations the returned value will be an “any” data type, which will cause you problems when trying to access object properties later in the code.

Here is how you import the module, specifying the exports you wish to include:

import pnp, { ItemUpdateResult, ItemAddResult, AttachmentFileAddResult } from "sp-pnp-js";

Here is an example of adding a list item and using “ItemAddResult”:

let result = <ItemAddResult>await pnp.sp.web.lists.getByTitle("Zips").items
    .add({ Title: "New item" }) // illustrative payload; the original values were omitted
    .catch(e => {
        errorMsg = super.logPnpError(e);
    });

Here is an example of updating a list item and using “ItemUpdateResult”:

let result = <ItemUpdateResult>await pnp.sp.web.lists.getByTitle("Zips").items
    .getById(itemId) // illustrative id; the original value was omitted
    .update({ Title: "Updated item" })
    .catch(e => {
        errorMsg = super.logPnpError(e);
    });

Here is an example of adding a list item attachment and using “AttachmentFileAddResult”:

let result = <AttachmentFileAddResult>await pnp.sp.web.lists.getByTitle("Zips").items
    .getById(itemId) // illustrative id; the original value was omitted
    .attachmentFiles.add(attachment.name, attachment)
    .catch(e => {
        errorMsg = super.logPnpError(e);
    });

Do you have any best practices that you would add?  Leave a comment below.


Angular 4 Routing Roundup

Bo is a Principal Consultant for ThreeWill. He has 18 years of full lifecycle software development experience.

A few months ago I wrote a post about using the PnP JS Core to create a mock backend as part of an Angular 2.0 SharePoint app written in TypeScript.  At the time, I was just doing some proof of concept work and feeling my way through things.  Now two months later I thought I would give an update on what that POC has become and what else I’ve learned since then.

Regarding our technology stack, not much has changed except we are using Angular 4.0 now and that transition was seamless.  To be honest, I didn’t even realize we were on 4.0 and not 2.0 until I went looking.  The little skunkworks stuff using PnP Core JS and our own Custom HttpClientImpl has turned out even better than I expected.  We can now run the application on localhost against mocked SharePoint lists, Users, and Permissions/SharePoint Groups and then deploy to SharePoint with no code or configuration changes.  Running locally, we use local storage to preserve all data changes until we want to start over.  My very non-scientific guess is that this has made us 25%-50% faster at developing our user interface and REST calls.

What I would like to share with you now is some information on how we’ve used routing in our application.  If you are new to routing, I would check out the Angular routing tutorial.  I honestly don’t know where we would be without it.  In our application we are using multiple levels of router outlets, lazy loading routing modules and using route guards, all of which I learned through the Angular tutorials.  Fundamentally, routing is just about mapping a URL path to components in your application.  Angular makes it so easy to plug and play all these components so you can assemble them and orchestrate a complex UI with ease.

App Structure and Router Outlets

Before getting into the routing code, I wanted to give you an overview of the components and router outlets used as part of a user navigating to a route in our application.  Below is an example of a “snapshot” of what is in play when a user is looking at a list of business requirements for a business requirements document.  This is the “deepest” a user might navigate based on the way the application is currently configured.

You are probably looking at the picture above and wondering why I tried to create what looks like an Infinity Mirror. The goal is to show how all these Angular components and router outlets work together to produce a responsive, flexible and usable single page application: one where you navigate at a global level among “top level” components or at a “document level” among document-level components.

Notice that there are two router outlets in the above image. This gives us the ability to swap out progressively more specific pieces of the application when a user moves from one route to another. Router outlet #1 deals with navigation between larger sections of the app. Router outlet #2 deals with navigation within a specific section of the app. Take the following URL as a hypothetical example; routes do not have to follow this pattern, but it shows a parent-child URL relationship that is then supported by our router outlets.


Routing Information

Consider the URL (http://localhost:4200/#/documents) as a more concrete example for the prefix of a route.  Looking at it you’re probably pretty sure I’m looking at documents, probably a list of them.  At this point, I’m only within router outlet #1.  Now, if I click on one of those list items, maybe I end up at (http://localhost:4200/#/documents/BR/1), which is some detail or list screen for a specific document.  At this point, the details were rendered by router outlet #2.  Now I can navigate all around the specific documents to URLs like these:
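In template terms, the two outlets are just nested router-outlet elements. A minimal sketch (the component selectors here are illustrative, not taken from the actual application):

```
<!-- AppComponent template: router outlet #1 swaps top-level sections -->
<app-global-nav></app-global-nav>
<router-outlet></router-outlet>

<!-- DocumentComponent template: router outlet #2 swaps document-level views -->
<app-document-nav></app-document-nav>
<router-outlet></router-outlet>
```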

The cool thing is that the information obtained and code that was executed as part of router outlet #1 isn’t exercised again.  Not until I make one of those big jumps to another document or another completely different section will that code need to be invoked.  In addition to the implicit magic the routes and router outlets give us, we do utilize services to manage state and allow sibling components to communicate and share information.

Other Cool Routing “Stuff”

I was going to dive a little deeper into our implementation for these, but then I figured there are tons of examples and I wanted to keep this brief, so I instead thought I would give two brief code snippets and provide some background of what is used and why.

export const routes: Routes = [
    { path: 'view/:documentType/:docId', component: DocumentPrintComponent },
    {
        path: '',
        component: AppComponent,
        children: [
            { path: 'releases', pathMatch: 'prefix', loadChildren: 'app/releases/release.module#ReleaseModule' },
            { path: 'projects', pathMatch: 'prefix', loadChildren: 'app/projects/project.module#ProjectModule' },
            { path: 'documents', pathMatch: 'prefix', loadChildren: 'app/documents/document.module#DocumentModule' },
            { path: 'roles', component: UserRolesComponent },
            { path: 'noaccess', component: ErrorNoAccessComponent },
            { path: '', component: HomeComponent },
            { path: '**', component: ErrorNotFoundComponent }
        ]
    }
];

Lazily Loaded Routes

I stumbled across this in the Angular routing tutorial but found it useful in keeping some “separation of concerns” around our routes.  As you can see above, we look for the prefixes releases, projects, and documents in the main routes for the application and then lazy load their children defined in other files.  Defining the child routes in a file that’s in the same folder structure as the components that are going to be invoked as part of those child routes just seems to help when working on specific areas of the application.  I know there are also startup benefits since lazily loaded routes aren’t loaded at startup.  In our case, we probably have dozens of routes, so that’s probably not a big deal but if you had hundreds or thousands of routes, I could see this helping performance.

export const routes: Routes = [
    {
        path: 'new', component: DocNewComponent, canDeactivate: [CanDeactivateGuard], canActivate: [ProjectCreatorAuthGuard]
    },
    {
        path: 'BR/:docId',
        component: DocumentComponent,
        canActivate: [DocumentStatusAuthGuard],
        children: [
            { path: '', redirectTo: 'overview' },
            { path: 'overview', component: OverviewComponent },
            { path: 'requirements', component: RequirementsComponent, canDeactivate: [CanDeactivateGuard] },
            { path: 'groups', component: GroupsComponent, canDeactivate: [CanDeactivateGuard] },
            { path: 'attachments', component: AttachmentsComponent, canDeactivate: [CanDeactivateGuard] }
        ]
    },
    {
        path: 'AA/:docId',
        component: DocumentComponent,
        canActivate: [DocumentStatusAuthGuard],
        children: [
            { path: '', redirectTo: 'overview' },
            { path: 'overview', component: OverviewComponent },
            { path: 'systems', component: SystemsComponent, canDeactivate: [CanDeactivateGuard] },
            { path: 'work', component: WorkComponent, canDeactivate: [CanDeactivateGuard] },
            { path: 'attachments', component: AttachmentsComponent, canDeactivate: [CanDeactivateGuard] }
        ]
    },
    // ... other routes excluded for brevity ...
    { path: '', component: DocumentListComponent }
];

Route Guards

We have also started using route guards significantly in our application, and I can’t imagine any genuinely robust application not using them.  At least not without a lot of ugly and messy code in your components with router redirects.

  • CanDeactivateGuard – As the name implies, we use this guard to determine whether the user can deactivate or leave a route. We have lots of forms in the application that users will edit and then may try to leave without saving.  This routing feature, combined with a dialog service, gives us the “Are you sure?” dialog that used to be such a horrible pain in plain old JavaScript days.
  • CanActivate – On the other side of the equation there is the option to check whether a route can be activated. The most obvious checks are authentication checks, but since we are inside of SharePoint much of that has been handled, so we can deal with more specific application role-based permission checks and any business-centric logic necessary to know if the user can access a URL.  You can never trust that a user is going to access your application based on the prescribed flow you’ve defined; always assume they know every URL that is possible and will try to access it. If you notice in the above route definition, the guard is defined at a parent route and therefore guards all child routes.
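As a rough, framework-free sketch of the CanDeactivate pattern (in the real application this logic lives in an @Injectable class implementing Angular's CanDeactivate<T> interface; the interface and function names here are illustrative):

```typescript
// Components that can veto navigation implement this interface
// (the same shape the Angular routing tutorial uses).
interface CanComponentDeactivate {
  canDeactivate: () => boolean;
}

// Core guard logic: allow navigation away unless the component vetoes it,
// e.g. because a form has unsaved edits.
function canDeactivate(component: CanComponentDeactivate | null): boolean {
  return component && component.canDeactivate ? component.canDeactivate() : true;
}
```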

Hopefully, you can take something away from all the routing information I’ve shared with you.  As much as I’ve learned along the way I do feel like I’m continually learning more and improving and tweaking the patterns. If you have any thoughts on what you would do differently or how you’ve used Angular routing in your application, feel free to share them by commenting on this post below.


Closing an Angular 2 Form from within a SharePoint Modal Dialog

Caroline Sosebee is a Software Engineer at ThreeWill. She comes to us with 20+ years of software development experience and a broad scope of general IT support skills.

Recently I was on a project for a client where we were converting their custom SharePoint 2010 app to SharePoint 2013. This application was based on a fairly complicated InfoPath form with lots of supporting event receivers that we were taking to an Angular 2 application, leaving InfoPath behind forever (we hope!).

In an effort to minimize the disruption to the users due to the completely new UI, we decided to keep it as similar as possible to the old InfoPath form, which meant serving it up in a SharePoint modal dialog.

This worked just fine by making a call to SP.UI.ModalDialog.showModalDialog(). Where I got hung up was on how to close the form. Should be simple, right? But since a modal dialog is loaded into an iFrame and has its own container, we lose the reference to the SP object. This meant that I couldn’t just call the SP.UI.commonModalDialogClose function from the form and be done.

No problem, I thought. A quick Google search should turn up the proper command to use in order to close the dialog. No such luck though – apparently my Google juju was all used up for the week. I did pick up a few things to try, but none of them got the job done.

On my own, I got as far as knowing I needed to get a reference to the parent container using window.parent but still couldn’t figure out how to get to the SP functions I needed. After consulting with a co-worker (thanks, Kirk!) we were able to take what I had and finally get a very simple solution working. From within the iFrame we got a reference to the ‘SP’ object by treating it as a property of the parent window, which itself is an object. Duh. Sometimes the simplest things …

The call to close the dialog ended up looking like this:

window.parent["SP"].UI.ModalDialog.commonModalDialogClose(closeStatus, null);

Here is the full function, which handles closing the form from within a dialog or closing it from a non-modal form, such as opening the page directly in the browser.

closeForm(closeStatus) {
    // closeStatus: 1 ==> SP.UI.DialogResult.OK
    // closeStatus: 0 ==> SP.UI.DialogResult.cancel
    if (window.parent["SP"] !== undefined) {
        window.parent["SP"].UI.ModalDialog.commonModalDialogClose(closeStatus, null);
    }
    else {
        // assumes we are not in a dialog
        window.location.href = this.requestService.GetHref();
    }
}
Doesn’t get much simpler than this. I hope this can help save someone else some of the growing pains I went through on this one.


Migrating a Document Library and Retaining Authorship Information

Caroline Sosebee is a Software Engineer at ThreeWill. She comes to us with 20+ years of software development experience and a broad scope of general IT support skills.

Lately, I’ve had the opportunity to work on a couple of on-premises SharePoint 2010 to SharePoint 2013 migrations which proved to be both fun and challenging.  Our team chose to leverage PowerShell and CSOM for handling all the heavy lifting when it came to provisioning the new site and migrating data.  This, in turn, led to one of the more interesting things I got to do, which was write a PowerShell script that would migrate the contents of a document library from the old 2010 site to a document library in the new 2013 site, while at the same time retaining authorship information.

As I researched how to do this, I found that there wasn’t any one source that explained all the steps needed in a nice, concise manner. I found dribs and drabs here and there, usually around one step or another, or found that the code provided used server-side code, which wasn’t helpful in my case.

So, I decided it would be worthwhile to pull all the parts into one document, both for my own reference as well as for others. And yes, I admit it. I shamelessly pilfered from some of my co-workers’ fine scripts to piece this together. Thanks, team, for being so awesome!

Here is an outline of the high-level steps needed to move files between sites. You can find the full script at the bottom.

Basic steps

  • Get a context reference for each of the sites (source and target). This will allow you to work on both sites at the same time.
  • Load both the source and target libraries into their appropriate contexts. Load the RootFolder for the target library as well. This will be needed both when saving the document and when updating the metadata.
  • Ensure a valid ‘missing user’ account in the target web. This user id will be used in place of any invalid users found in the metadata being copied.
  • Query the source library using a CAML query to get a list of all items to be processed. You can apply filters here to limit results as needed. (Attached code has parameters for specifying start and end item ids).
  • Loop through the items returned by the CAML query
    • Get a reference to the source item using the id
    • Get a reference to the actual file from the source item
    • Load the source file by calling ‘OpenBinaryDirect’ (uses the file ServerRelativeUrl value)
    • Write it back out to the target library by calling ‘SaveBinaryDirect’ on the just loaded file stream
    • Copy over the metadata:
      • Get a reference to the new item just added in the target library
      • Populate the target item metadata fields using values from the source item
      • Update the item
    • Loop

That’s it, in a nutshell. There are all sorts of other things you can do to pretty it up, but I thought I would keep this simple as a quick reference both for myself and others. Be sure to check the top of the script below for other notes about what it does and does not do.

<#
Copies documents from one library into another across sites.
This was tested using SharePoint 2010 as the source and both SharePoint 2010 and SharePoint 2013 as the target.

* Parameters can either be passed in on the command line or pre-populated in the script
* Example for calling from command line:
./copy-documents.ps1 "http://testsite1/sites/testlib1/" "domain\user" "password" "source lib display name" "http://testsite2/sites/testlib2/" "domain\user" "password" "target lib display name" "domain\user" 1 100

* Can cross site collections and even SP versions (e.g. SP2010 to SP2013)
* Allows you to specify both the source and target document library to use
* Can retain created, created by, modified, modified by and other metadata of the original file
* Can specify a range of files to copy by providing a starting id and ending id
* When copying metadata such as created by, will populate any invalid users with the provided 'missing user' value
* Uses a cache for user data so it doesn't have to run EnsureUser over and over for the same person

* Does not currently traverse folders within a document library.
* This only copies.  It does not remove the file from the source library when done.
#>

param(
	[string]$sourceSiteUrl = "",
	[string]$sourceUser = "",
	[string]$sourcePwd = "",
	[string]$sourceLibrary = "",
	[string]$targetSiteUrl = "",
	[string]$targetUser = "",
	[string]$targetPwd = "",
	[string]$targetLibrary = "",
	[string]$missingUser = "",
	[int]$startingId = -1,
	[int]$endingId = -1
)

## Load the libraries needed for CSOM
## Replace with the appropriate path to the libs in your environment
Add-Type -Path ("c:\dev\libs\Microsoft.SharePoint.Client.dll")
Add-Type -Path ("c:\dev\libs\Microsoft.SharePoint.Client.Runtime.dll")

function Main {
	Write-Host "[$(Get-Date -format G)] copy-documents.ps1: library $($sourceLibrary) from $($sourceSiteUrl) to $($targetSiteUrl)"
    # Get the context to the source and target sites
	$sourceCtx = GetContext $sourceSiteUrl $sourceUser $sourcePwd
	$targetCtx = GetContext $targetSiteUrl $targetUser $targetPwd

    # Ensure the "missing user" in the target environment
    $missingUserObject = $targetCtx.Web.EnsureUser($missingUser)
    $targetCtx.Load($missingUserObject)
	## Moved the try/catch for ExecuteQuery to a function so that we can exit gracefully if needed
	ExecuteQueryFailOnError $targetCtx "EnsureMissingUser"

	## Start the copy process
	CopyDocuments $sourceCtx $targetCtx $sourceLibrary $targetLibrary $startingId $endingId $missingUserObject
}

function CopyDocuments {
	param($sourceCtx, $targetCtx, $sourceLibrary, $targetLibrary, $startingId, $endingId, $missingUserObject)

    $copyStartDate = Get-Date

    # Get the source library
    $sourceLibrary = $sourceCtx.Web.Lists.GetByTitle($sourceLibrary)
	ExecuteQueryFailOnError $sourceCtx "GetSourceLibrary"

    # Get the target library
    $targetLibrary = $targetCtx.Web.Lists.GetByTitle($targetLibrary)
	## RootFolder is used later both when copying the file and updating the metadata.
    $targetCtx.Load($targetLibrary.RootFolder)
	ExecuteQueryFailOnError $targetCtx "GetTargetLibrary"

    # Query source list to retrieve the items to be copied
    Write-Host "Querying source library starting at ID $($startingId) [Ending ID: $($endingId)]"
    $sourceItems = @(QueryList $sourceCtx $sourceLibrary $startingId $endingId) # Making sure it returns an array
    Write-Host "Found $($sourceItems.Count) items"

    # Loop through the source items and copy
    $totalCopied = 0
    $userCache = @{}
    foreach ($sourceItemFromQuery in $sourceItems) {

        $totalCount = $($sourceItems.Count)

        if ($sourceItemFromQuery.FileSystemObjectType -eq "Folder") {
            Write-Host "skipping folder '$($sourceItemFromQuery['FileLeafRef'])'"
            continue
        }

		Write-Host "--------------------------------------------------------------------------------------"
        Write-Host "[$(Get-Date -format G)] Copying ID $($sourceItemFromQuery.ID) ($($totalCopied + 1) of $($totalCount)) - file '$($sourceItemFromQuery['FileLeafRef'])'"

        # Get the source item which returns all the metadata fields
        $sourceItem = $sourceLibrary.GetItemById($sourceItemFromQuery.ID)
        $sourceCtx.Load($sourceItem)
        # Load the file itself into context
		$sourceFile = $sourceItem.File
        $sourceCtx.Load($sourceFile)
		ExecuteQueryFailOnError $sourceCtx "GetSourceItemById"

		## Call the function used to run the copy
        $targetId = CopyDocument $sourceCtx $sourceItem $sourceFile $sourceItemFromQuery $targetCtx $targetLibrary $userCache $missingUserObject
        $totalCopied++
    }
    # Done - let's dump some stats
    $copyEndDate = Get-Date
    $duration = $copyEndDate - $copyStartDate
    $minutes = "{0:F2}" -f $duration.TotalMinutes
    $secondsPerItem = "{0:F2}" -f ($duration.TotalSeconds/$totalCopied)
    $itemsPerMinute = "{0:F2}" -f ($totalCopied/$duration.TotalMinutes)
	Write-Host "--------------------------------------------------------------------------------------"
    Write-Host "[$(Get-Date -format G)] DONE - Copied $($totalCopied) items. ($($minutes) minutes, $($secondsPerItem) seconds/item, $($itemsPerMinute) items/minute)"
}

### Function used to copy a file from one place to another, with metadata
function CopyDocument {
    param($sourceCtx, $sourceItem, $sourceFile, $sourceItemFromQuery, $targetCtx, $targetLibrary, $userCache, $missingUserObject)

    ## Validate the Created By and Modified By users on the source file
    $authorValueString = GetUserLookupString $userCache $sourceCtx $sourceItem["Author"] $targetCtx $missingUserObject
    $editorValueString = GetUserLookupString $userCache $sourceCtx $sourceItem["Editor"] $targetCtx $missingUserObject

    ## Grab some important bits of info
	$sourceFileRef = $sourceFile.ServerRelativeUrl
    $targetFilePath = "$($targetLibrary.RootFolder.ServerRelativeUrl)/$($sourceFile.Name)"

    ## Load the file from source
    $fileInfo = [Microsoft.SharePoint.Client.File]::OpenBinaryDirect($sourceCtx, $sourceFileRef)
    ## Write file to the destination
    [Microsoft.SharePoint.Client.File]::SaveBinaryDirect($targetCtx, $targetFilePath, $fileInfo.Stream, $true)

    ## Now get the newly added item so we can update the metadata
    $item = GetFileItem $targetCtx $targetLibrary $sourceFile.Name $targetLibrary.RootFolder.ServerRelativeUrl

    ## Replace the metadata with values from the source item
    $item["Author"] = $authorValueString
    $item["Created"] = $sourceItem["Created"]
    $item["Editor"] = $editorValueString
    $item["Modified"] = $sourceItem["Modified"]

	## Update the item
    $item.Update()
    ExecuteQueryFailOnError $targetCtx "UpdateItemMetadata"

    Write-Host "[$(Get-Date -format G)] Successfully copied file '$($sourceFile.Name)'"

    return $item.Id
}

## Get a reference to the list item for the file.
function GetFileItem {
	param($ctx, $list, $fileName, $folderServerRelativeUrl)

	$camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
	if ($folderServerRelativeUrl -ne $null -and $folderServerRelativeUrl.Length -gt 0) {
		$camlQuery.FolderServerRelativeUrl = $folderServerRelativeUrl
	}
	$camlQuery.ViewXml = @"
<View>
	<Query>
		<Where>
			<Eq>
        		<FieldRef Name='FileLeafRef' />
        		<Value Type='File'>$($fileName)</Value>
			</Eq>
		</Where>
	</Query>
</View>
"@
	$items = $list.GetItems($camlQuery)
	$ctx.Load($items)
	ExecuteQueryFailOnError $ctx "GetFileItem"
	if ($items -ne $null -and $items.Count -gt 0){
		$item = $items[0]
	}
	else {
		$item = $null
	}
	return $item
}

## Validate and ensure the user
function GetUserLookupString{
	param($userCache, $sourceCtx, $sourceUserField, $targetCtx, $missingUserObject)

    $userLookupString = $null
    if ($sourceUserField -ne $null) {
        if ($userCache.ContainsKey($sourceUserField.LookupId)) {
            $userLookupString = $userCache[$sourceUserField.LookupId]
        }
        else {
            try {
                # First get the user login name from the source
                $sourceUser = $sourceCtx.Web.EnsureUser($sourceUserField.LookupValue)
                $sourceCtx.Load($sourceUser)
                $sourceCtx.ExecuteQuery()
            }
            catch {
                Write-Host "Unable to ensure source user '$($sourceUserField.LookupValue)'."
            }

            try {
                # Now try to find that user in the target
                $targetUser = $targetCtx.Web.EnsureUser($sourceUser.LoginName)
                $targetCtx.Load($targetUser)
                $targetCtx.ExecuteQuery()
                # The "proper" way would seem to be to set the user field to the user value object
                # but that does not work, so we use the formatted user lookup string instead
                #$userValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
                #$userValue.LookupId = $user.Id
                $userLookupString = "{0};#{1}" -f $targetUser.Id, $targetUser.LoginName
            }
            catch {
                Write-Host "Unable to ensure target user '$($sourceUser.LoginName)'."
            }
            if ($userLookupString -eq $null) {
                Write-Host "Using missing user '$($missingUserObject.LoginName)'."
                $userLookupString = "{0};#{1}" -f $missingUserObject.Id, $missingUserObject.LoginName
            }
            $userCache.Add($sourceUserField.LookupId, $userLookupString)
        }
    }
	return $userLookupString
}

## Pull ids for the source items to copy
function QueryList {
    param($ctx, $list, $startingId, $endingId)

    $camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
    $camlText = @"
<View>
    <Query>
        <Where>{0}</Where>
        <OrderBy>
            <FieldRef Name='ID' Ascending='True' />
        </OrderBy>
    </Query>
    <ViewFields>
        <FieldRef Name='ID' />
    </ViewFields>
    <QueryOptions />
</View>
"@

    if ($endingId -eq -1) {
        $camlQuery.ViewXml = [System.String]::Format($camlText, "<Geq><FieldRef Name='ID' /><Value Type='Counter'>$($startingId)</Value></Geq>", "")
    }
    else {
        $camlQuery.ViewXml = [System.String]::Format($camlText, "<And><Geq><FieldRef Name='ID' /><Value Type='Counter'>$($startingId)</Value></Geq><Leq><FieldRef Name='ID' /><Value Type='Counter'>$($endingId)</Value></Leq></And>", "")
    }

    $items = $list.GetItems($camlQuery)
    $ctx.Load($items)
	ExecuteQueryFailOnError $ctx "QueryList"

    return $items
}

function GetContext {
	param($siteUrl, $user, $pwd)
	# Get the client context to SharePoint
	$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
	$securePwd = ConvertTo-SecureString $pwd -AsPlainText -Force
	$cred = New-Object PSCredential($user, $securePwd)
	$ctx.Credentials = $cred
	return $ctx
}

function ExecuteQueryFailOnError {
	param($ctx, $action)
	try {
		$ctx.ExecuteQuery()
	}
	catch {
		Write-Error "$($action) failed with $($_.Exception.Message).  Exiting."
		exit 1
	}
}
### Start the process
Main

CSOM GetItemById Results in “The property or field has not been initialized”

Kirk Liemohn is a Principal Software Engineer at ThreeWill. He has over 20 years of software development experience with most of that time spent in software consulting.

I was having a problem where I was trying to migrate list content from SharePoint 2010 to SharePoint 2013 but a couple of fields were not making it over.  I was using a migration tool that I like to use, but had to resort to connecting to both SharePoint 2010 and SharePoint 2013 using CSOM due to environment restrictions.  I know there are limitations with CSOM and likely more with SharePoint 2010 than there are with SharePoint 2013, but I was still surprised the tool couldn’t get values for these two fields.

I thought I could get around this if I wrote the code myself so I gave it a shot.  I wrote some PowerShell to first query for the list item IDs from the source list.  The code uses a CAML query and only has the ID field in the ViewFields for the query.  I then iterate across the results of the query.  For each result I get the entire source item by ID as follows:

$sourceItem = $sourceList.GetItemById($sourceItemFromCAMLQuery.ID)

GetItemById is the key method above as I thought it would do a good job of getting all fields for that list item.  However, for the two fields that gave me problems with the migration tool, it failed with my code.  The line of code that gave me an error was:

 $sourceValue = $sourceItem[$fieldName]

The exception thrown was of type PropertyOrFieldNotInitializedException.  The exception message was:

The property or field has not been initialized. It has not been requested or the request has not been executed. It may need to be explicitly requested.

I immediately thought this might be a row ordinal issue based on problems it has caused me in the past.  In retrospect I discovered that according to the schema XML for the fields, they are still in row ordinal 0 (which is good) so that didn’t explain my issue, but my fix was the same regardless.

To fix this problem I simply added these two problem fields to the ViewFields for my initial CAML query and subsequently get the values from the CAML query instead of the list item obtained using GetItemById.  It added a little complexity since I had to pay attention to which field values I obtained from the CAML query and which I obtained from GetItemById, but it solved my problem!  I hope it helps others as well.
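In PowerShell terms, the fix amounts to adding the problem fields to the ViewFields element of the initial CAML query and reading those two values from the query results. Here is a minimal sketch of that shape; the field names ProblemField1 and ProblemField2 are hypothetical stand-ins (the actual internal names aren’t given above), and $sourceCtx/$sourceList are assumed to be an already-connected ClientContext and List:

```powershell
# Query for IDs plus the two fields GetItemById could not initialize.
# ProblemField1/ProblemField2 are hypothetical internal field names.
$camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
$camlQuery.ViewXml = @"
<View>
    <ViewFields>
        <FieldRef Name='ID' />
        <FieldRef Name='ProblemField1' />
        <FieldRef Name='ProblemField2' />
    </ViewFields>
</View>
"@
$queryItems = $sourceList.GetItems($camlQuery)
$sourceCtx.Load($queryItems)
$sourceCtx.ExecuteQuery()

foreach ($queryItem in $queryItems) {
    # The two problem fields come from the CAML query result...
    $problemValue1 = $queryItem["ProblemField1"]
    # ...while all other fields can still come from GetItemById
    $sourceItem = $sourceList.GetItemById($queryItem.ID)
    $sourceCtx.Load($sourceItem)
    $sourceCtx.ExecuteQuery()
}
```

The small cost of this approach, as noted above, is keeping track of which values came from the query and which came from GetItemById.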


PnP, CSOM, and Claims-Based Authentication

Chris is a Senior Software Engineer at ThreeWill. His area of focus is consulting in the development of Microsoft .NET and SharePoint technologies. His primary role has been development lead in recent projects. Project roles have ranged from Development/Technical Lead to Development Resource.

I was recently tasked with working on a proof-of-concept (POC) that required the use of SharePoint hosted in Azure.  This particular SharePoint environment was configured exclusively with claims-based authentication.  My task was to leverage the client-side object model (CSOM) to update the AlternateCSSUrl property on the target Web (in this case a Microsoft.SharePoint.Client.Web object).

Side Note: Setting this AlternateCSSUrl property to point to a custom “.css” file that resides in the local Site Assets library is a nice-and-simple way to augment some of the SharePoint styling.  In this case, my goal was to improve the user experience for mobile users (basically… introduce some “responsive” behavior).

Setting the AlternateCSSUrl property is super simple, assuming you can successfully authenticate to your target SharePoint environment.  However, simply setting up a NetworkCredential object or relying on DefaultCredentials does not cut it with claims-based authentication.  Claims-based authentication requires a separate hop to an ADFS server, and the use of a FedAuth cookie issued from the target SharePoint environment.

I did a search for “CSOM and claims-based authentication” and found a couple of interesting links…both of which focus on SharePoint 2010 (I was targeting SharePoint 2013), and offer an understanding of the problem and respective solution.

Both of these are worth a read to get a better understanding of what needs to be done.

Since I was very limited on time, I started digging a bit more and decided to have a look at the wealth of samples from the Patterns and Practices (PnP) folks.  Lo and behold, I found this….

Basically, the PnP AuthManager class has support for exactly what I needed.  Thanks PnP team!

The PnP AuthManager makes this very simple.   Here is an example of what this might look like.  The code has been simplified to show the essence.

static void Main(string[] args)
{
	string webUrl = "";
	string userName = "user1";
	string password = "password1";
	string domain = "";
	string sts = "";
	string idpId = "urn:sharepoint:sp-contoso";
	string cssRelativePath = "/SiteAssets/responsive-option1.css";

	SetAlternateCSSUrl(webUrl, userName, password, domain, sts, idpId, cssRelativePath);
}

private static void SetAlternateCSSUrl(string webUrl, string userName, string password, string domain, string sts, string idpId, string cssRelativePath)
{
	OfficeDevPnP.Core.AuthenticationManager am = new OfficeDevPnP.Core.AuthenticationManager();

	using (var ctx = am.GetADFSUserNameMixedAuthenticatedContext(webUrl, userName, password, domain, sts, idpId))
	{
		ctx.RequestTimeout = Timeout.Infinite;

		// Just to output the site details
		Web web = ctx.Web;
		ctx.Load(web, w => w.Title, w => w.ServerRelativeUrl, w => w.AlternateCssUrl);
		ctx.ExecuteQuery();

		web.AlternateCssUrl = web.ServerRelativeUrl + cssRelativePath;
		web.Update();
		ctx.ExecuteQuery();
	}
}

Populating a Multi-Value People Picker with PowerShell

Will Holland is a Senior Software Engineer at ThreeWill. Will has proven to be adept at understanding a client’s needs and matching them with the appropriate solution. Recently he’s developed a passion for working with .NET, MVC, and cloud-based solutions such as Microsoft Azure and Office 365.

On my current project, a large scale migration from a SharePoint dedicated environment to SharePoint Online, my team is using a “Site Inventory” list to keep track of the over 52,000 site collections in our client’s source. Since the source is “live” and things are constantly evolving, we get periodic CSVs containing the most recent data regarding the source.

Naturally, 52,000+ rows is a LOT of data to go look through, so we created a PowerShell script that would parse the updated information, compare it to the Site Inventory list, and update any changes we find. Among the columns in our list are a few “Owner” columns (primary and secondary) and an “All Administrators” column. All three of the columns are just text fields that contain the login names of users in the dedicated environment, and we wanted to aggregate the three fields into one multi-value people picker.

Sounds easy, right? I ended up having quite a struggle, spending more time than I felt necessary dredging the depths of the internet for answers.

I knew that I had to use the Web.EnsureUser method to turn my username into a User object so I could grab the ID. I also knew that I needed to turn that ID into a lookup value since a people picker is, more or less, a special kind of lookup column. Finally, I knew that my “AllOwners” column needed an array of lookup values. That last part was where the issue came in.

$userName = "whd\wholland"
$spuser = EnsureUser $context $userName
if($spuser -ne $null){
     $spuserValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
     $spuserValue.LookupId = $spuser.Id
     $listItem["AllOwners"] = @($spuserValue)
}

After going through the steps of turning a username into a FieldUserValue object (the ‘special’ lookup value I mentioned earlier), I would simply wrap that FieldUserValue in an array and attempt to set it as the value of the “AllOwners” field. My expectation was that, in doing so, I was creating an array of FieldUserValues. Expectations and reality don’t always line up.

As it turns out, unless you specify a type, PowerShell will create an array of the most generic type it can. In my case, I was getting an array of generic Objects.
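You can see this behavior without any SharePoint types involved; in this quick sketch, plain strings stand in for the FieldUserValue objects:

```powershell
# An untyped array literal is inferred as the most generic element type
$generic = @("1;#domain\user1")
$generic.GetType().FullName    # System.Object[]

# Casting produces an array of the specific element type
$typed = [string[]]$generic
$typed.GetType().FullName      # System.String[]
```

The same cast, using [Microsoft.SharePoint.Client.FieldUserValue[]], is what sorts out the people picker field.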

Before I tried to set the value of my field, I needed to cast my array to be, specifically, an array of FieldUserValue objects. Below you’ll find the code snippet that sorted the issue for me.

$userName = "whd\wholland"
$spuser = EnsureUser $context $userName
$lookupValueCollection = @()
if($spuser -ne $null){
     $spuserValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
     $spuserValue.LookupId = $spuser.Id
     $lookupValueCollection += $spuserValue
}
$userValueCollection = [Microsoft.SharePoint.Client.FieldUserValue[]]$lookupValueCollection
$listItem["AllOwners"] = $userValueCollection

Switching SharePoint Views Defaults from Standard/HTML to Edit/Datasheet/GRID and Back

Kirk Liemohn is a Principal Software Engineer at ThreeWill. He has over 20 years of software development experience with most of that time spent in software consulting.

I recently created a new list by uploading an Excel spreadsheet to SharePoint 2013. I’m sure there are several blogs telling you how to do this, but here is one that you may find useful.

The problem I encountered, however, is that after uploading the spreadsheet, the view that was created was automatically in edit mode (a.k.a. Datasheet view). I guess the thought was that people using Excel would like to be in that mode by default. I didn’t want that so I figured out how to switch it back. It wasn’t entirely obvious, so I thought I would share.

I knew I could create a new view from scratch that was not in edit mode by default, but this view had been updated to have many of its columns in a specific order, and I really didn’t want to do it manually. Plus, I knew there had to be a way. I started out looking at the view in the browser and didn’t see anything. Then I edited the page and looked at the web part properties for the view, but that didn’t help either.

The solution was to open the site in SharePoint Designer 2013 and navigate to the view itself. Upon opening the view I saw the following HTML (I have seen this on line 37 and 38 for me depending on the view):

<View Name="{24C927B3-0768-4129-9A41-73961AA48508}" DefaultView="TRUE" Type="GRID" DisplayName="All Items" Url="..." [a bunch more attributes]>[a bunch of child elements]</View>

I simply had to change GRID to HTML and save the page:

<View Name="{24C927B3-0768-4129-9A41-73961AA48508}" DefaultView="TRUE" Type="HTML" DisplayName="All Items" Url="..." [a bunch more attributes]>[a bunch of child elements]</View>

I got a notice about the page being customized (unghosted) which was fine for me. You can change these back and forth and it works fine. SharePoint Designer shows a summary of all views for a list and their type (HTML or GRID). That summary is cached so it doesn’t update after you save the changes to the ASPX page directly.


Starting and Stopping Multiple Azure VM’s from a Mac Terminal

Pete is a Director of Technology at ThreeWill. Pete’s primary role is driving the overall technology strategy and roadmap for ThreeWill. Pete also serves as ThreeWill’s Hiring Manager and is constantly looking for new talent to join the ThreeWill family.

We recently started a project in which keeping the costs of Azure VM’s as low as possible was a real concern. Since I am a Mac user (stop snickering), the Azure CLI tools seemed like an obvious choice. I just wanted a simple way to check the current status of the virtual machines in a specific Azure subscription, and then start or shut down the virtual machines as needed. I agree with Scott Hanselman, and thought I would save myself and others some keystrokes. #sharingiscaring

Using the Azure CLI from a Mac Terminal

The first thing I found when searching was Using the Azure CLI for Mac. This was pretty helpful and gave me some basic commands, but didn’t give me the simple path of shutting down all the virtual machines that were running at once. This did give me the basic commands that let me check the status of the virtual machines, and stop and start the machines as well. By the way, this page has a bunch of info I plan on diving into at a later date.

To start, you’ll need to install the Azure CLI tools. This is a pretty straightforward install using npm.

npm install azure-cli -g

Once you have the tools, you’ll need to get your subscription settings. To do so, run the following:

azure account download

This will open a browser and you’ll need to select the subscription for which you want the settings file. Once the subscription information was downloaded, running

azure account import [settings file]

will set the current account.

The Basic Commands

Once the Azure CLI tools are installed, the basic commands to list, stop and start the virtual machines are pretty simple:

azure vm list

azure vm start vm-name

azure vm shutdown vm-name

Great. Almost there.

Painful Unix Memories

Unfortunately, even though I am now a Mac user, I still have to bang my head against the wall to remember Unix commands and pipe operations. Bad memories of awk and sed from many years ago. Towards the end of the Install the Azure CLI tools post, there was a section about “understanding results” which did something very similar to what I needed. This example was using xargs and something I had never heard of called jsawk. The xargs didn’t give me any flashbacks, so I was ok with that, but the jsawk made me twitch a little. Turns out, the curl installation of jsawk from the GitHub repo failed for me, but I was too close to stop!

A link in the jsawk readme mentioned using Homebrew, so the following installed jsawk and the dependencies I was missing. Back in business!

brew search jsawk
brew install jsawk

The Final Solution

Finally, all the pieces are in place. Now I can list the virtual machines, filter them, and more. This is a little deeper than I want to go here, but the jsawk documentation is really great. With the ability to filter and then use xargs to call another azure command, the final solution was just a simple change based on the example towards the end of the Install the Azure CLI tools post. Here they are:

Shut down the Virtual Machines

azure vm list --json | jsawk -n 'out(this.VMName)' | xargs -L 1 azure vm shutdown

Start all of the Virtual Machines

azure vm list --json | jsawk -n 'out(this.VMName)' | xargs -L 1 azure vm start

There are still some things I want to do, specifically create a shell script to enable some other options and arguments, but this might be a blog post for another day. Hopefully this will help someone else and save some keystrokes.

And BTW – I do realize this would be very easy in PowerShell. I’m just sort of new to the Mac. #justsaying

Get-AzureVM | Where-Object {$_.Status -like "Ready*"} | ForEach-Object { Stop-AzureVM -ServiceName $_.ServiceName -Name $_.Name -Force }

Let me know what you think or if this was helpful.



How to Add Aurelia to Visualforce Pages in Salesforce

Eric Bowden has over 19 years of software development experience around enterprise and departmental business productivity applications.

Aurelia is a next gen JavaScript client framework for mobile, desktop and web that leverages simple conventions to empower your creativity.

From my view, I see it as serving a similar purpose as AngularJS, with several key benefits:

  • No more IIFE‘s!
  • ECMAScript 2016
  • jspm package manager
  • Configuration by convention means less setup code

In all fairness, three of the four benefits I’ve listed above are provided by JavaScript libraries and code outside of Aurelia. But I’m lumping all of these benefits together under the Aurelia umbrella because they work well together, and I’m working with them because of Aurelia.

I recently got my feet wet with Aurelia thanks to Building Applications with Aurelia from Scott Allen on Pluralsight. As usual, Scott does an awesome job of making this topic easy to learn and understand. Though Scott’s examples are created in Visual Studio, he explains that they could be accomplished in any web browser.

Naturally, I wanted to bring Aurelia to Salesforce! However, there are a few twists and turns, and I’ve listed the steps below. You can skip to the bottom and download the source code, if you’re looking to dive into this.

Setup the Developer Environment

Create and configure the project

Create a simple project which includes a Visualforce page and a single static resource. I named my Visualforce page AureliaAppPage and my static resource is named AppJS. Also, create a folder in the project titled “assets.” When finished, your project should appear as shown below.

Before going any further, confirm that you can edit and render your new Visualforce page and that the static resource can be edited and saved to Salesforce, without any errors.

Configure packages

Open a command prompt and navigate to the assets folder that you created as part of the project above and execute the following commands:

  • npm install jspm -g
  • npm install --global gulp
  • npm install --save-dev gulp
    • You can ignore any warnings which appear about the Description, README and repository being empty.
  • jspm init
    • Accept the defaults, except for those listed below
    • Which ES6 transpiler would you like to use, Traceur or Babel? Babel
  • jspm install aurelia-framework
  • jspm install aurelia-bootstrapper
  • jspm install aurelia-http-client
  • jspm install bootstrap

Project Configuration and Source files

Retrieve the following files from the attached zip and place these into the assets folder:

  • config.js
  • gulpfiles.js
  • The “js” folder

Your project should now look like the screen snap below


You can copy the code for the Visualforce page from the attached zip, and I’ve included it below so that I can highlight a few elements.

Notice the following:

  • The baseUrl is set dynamically to the URL of the static resource which will hold our JavaScript files.
  • We must pass the Salesforce session Id to our application since it will be necessary when making API calls back into Salesforce.
<apex:page docType="html-5.0" showHeader="false" sidebar="false" applyBodyTag="false" standardStylesheets="false">
<head>
<link rel="stylesheet" type="text/css" href="{!URLFOR($Resource.AppJS, 'jspm_packages/github/twbs/[email protected]/css/bootstrap.css')}"></link>
<link rel="stylesheet" type="text/css" href="{!URLFOR($Resource.AppJS, 'styles/styles.css')}"></link>
<style>
.container {
    margin-left: 10px;
    margin-right: auto;
}
</style>
</head>
<body aurelia-app="js/app/main" >
<script src="{!URLFOR($Resource.AppJS, 'jspm_packages/system.js')}" ></script>
<script src="{!URLFOR($Resource.AppJS,'config.js')}" ></script>
<script>
System.config({
    "baseURL": "{!URLFOR($Resource.AppJS)}",
    "apiSessionId": "{!$Api.Session_Id}"
});
System.import('aurelia-bootstrapper');
</script>
</body>
</apex:page>

Save to Salesforce

Run the following command from command prompt: gulp build

The gulp build command will zip all of the assets into APPJS.resource. The IDE will automatically upload assets to Salesforce, but it needs a kick in order to upload the static resource file which is created as a result of the gulp build. To get it going, simply right click on APPJS.resource. After doing so, you should notice that the file is uploaded to Salesforce.
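The gulpfile that drives this step ships in the attached zip; as a rough sketch only, a build task along these lines could produce the resource file. The use of the gulp-zip plugin, the asset globs, and the output location are all assumptions, not the zip's actual code, and the require calls are guarded so the sketch is inert when the plugins are not installed:

```javascript
// gulpfile.js -- hypothetical sketch of a build task that packages the
// assets into AppJS.resource; the real gulpfile ships in the attached zip.
var gulp, zip;
try {
  gulp = require('gulp');     // installed earlier via npm install --save-dev gulp
  zip = require('gulp-zip');  // gulp-zip is an assumption; the zip's gulpfile may differ
} catch (e) {
  gulp = null; // plugins unavailable; skip task registration
}

// The set of assets to package is illustrative, not taken from the zip.
function assetGlobs() {
  return ['config.js', 'jspm_packages/**', 'js/**', 'styles/**'];
}

if (gulp) {
  gulp.task('build', function () {
    return gulp.src(assetGlobs(), { base: '.' })
      .pipe(zip('AppJS.resource'))  // zip everything into one resource file
      .pipe(gulp.dest('..'));       // drop it where the IDE can pick it up
  });
}
```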

Run the app

You should now be able to log in to Salesforce and navigate to https://yourserver/apex/aureliaapppage to view a listing of Opportunities, using Aurelia.

Download the attached zip file to see the full source code and compare it with your results.

Are you building with Aurelia and Salesforce?

I’d love to hear about it. Leave me a comment below.

Eric Bowden, How to Add Aurelia to Visualforce Pages

What is Client-Side Rendering and Why Would I Use It?

Tim is a Senior Consultant at ThreeWill. He has 15 years of consulting experience designing and developing browser-based solutions using Microsoft technologies. Experience over the last 8 years has focused on the design and implementation of SharePoint Intranets, Extranets and Public Sites.

Sometimes I hear about new features in SharePoint, but it doesn’t immediately occur to me how they would be useful. I’ve heard of Client-Side Rendering (CSR) for some time, but never really dug in to understand it until I was working on a recent project. Now that I understand it better, I’m sure I will use it much more often. Hopefully this brief explanation can help you better understand the power of Client-Side Rendering so that you can leverage it when it might be the best option.

Why Would I Use Client-Side Rendering?

SharePoint libraries and lists can be a great place to store information, but sometimes the out-of-box presentation of that information is not what is desired. For public facing sites in particular, there is usually a desire to display the content in a more engaging way than the standard interface. Client-Side Rendering is one possible solution to help you customize the display to your liking.

What Is Client-Side Rendering?

Client-Side Rendering (CSR) is javascript code that allows you to override the default SharePoint display by supplying your own html. Through the client-side context that is provided, you can dynamically integrate the data from the SharePoint list or library with your own custom html to manufacture the desired experience. CSR can be applied at several levels. For example, you can choose to override the display of one or two fields and allow the other fields to display as normal. Or maybe you want to change the background color of a numeric field called “Test Score” based upon the value (i.e. greater than 90 is green, between 70 and 90 is yellow, and less than 70 is red). Or maybe you need to customize the display in a more radical way. Maybe you want to display two items per row instead of the default behavior of one item per row. In this case, the default List View Header wouldn’t make sense, so you would likely override the Header. All of this is possible with CSR.
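As a concrete sketch of the field-level case, the Test Score example could look like the following. The internal field name Test_x0020_Score and the exact markup are assumptions for illustration, and the registration at the bottom only runs when SharePoint's CSR runtime (SPClientTemplates) is present:

```javascript
// Pure helper: map a test score to a background color, per the rules above.
function scoreToColor(score) {
  if (score > 90) { return 'green'; }
  if (score < 70) { return 'red'; }
  return 'yellow';
}

// CSR rendering function: receives the render context for the current item.
function renderTestScore(ctx) {
  var score = Number(ctx.CurrentItem.Test_x0020_Score);
  return '<span style="background-color:' + scoreToColor(score) + ';">' +
         score + '</span>';
}

// Register the field override with SharePoint's template manager (browser only).
if (typeof SPClientTemplates !== 'undefined') {
  SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
    Templates: {
      Fields: {
        'Test_x0020_Score': { View: renderTestScore }
      }
    }
  });
}
```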

Let’s look at the sample below of default document library content displayed in 2 List View Web Parts. This should look familiar to those of you who have used SharePoint.


After applying Client-Side Rendering, I have transformed these two List Views to look a bit different.


As you can see in the Documents List View Web Part, I have overridden the Header to only display “Documents,” and I have overridden the item display to only display the name of the file (minus the checkbox and icon). What you can’t really see, and what is more significant than the UI changes, is that I have also overridden the default click event so that the document opens in the SharePoint modal dialog.

So, when the link is clicked, I call a javascript function that formats the options object and then calls the out-of-box SharePoint modal dialog.
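A sketch of such a handler is below. SP.UI.ModalDialog.showModalDialog is the out-of-box dialog API; the helper function names and option values are illustrative assumptions, and the call is guarded so the code is inert outside SharePoint:

```javascript
// Pure helper: format the options object for the SharePoint modal dialog.
function buildDialogOptions(url, title) {
  return {
    url: url,
    title: title,
    allowMaximize: true,  // illustrative option values
    showClose: true,
    width: 800,
    height: 600
  };
}

// Click handler: open the document in the out-of-box SharePoint modal dialog.
function openInDialog(url, title) {
  var options = buildDialogOptions(url, title);
  if (typeof SP !== 'undefined' && SP.UI && SP.UI.ModalDialog) {
    SP.UI.ModalDialog.showModalDialog(options);
  }
  return options;
}
```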

Here is the result:

For the Local Videos List View Web Part, I have overridden the header to display “Local Videos,” but I have also changed the layout to display 2 items per row instead of 1. I have also displayed a thumbnail image that is a clickable link which launches the video in the SharePoint modal dialog window.


As the name Client-Side Rendering implies, this transformation of the display all happens on the client-side. By including a reference to a javascript file in the JSLink property of a web part on the page, I can relatively easily transform the display. In this scenario, I used the ~site token to reference the current site and stored my custom javascript file in the SiteAssets folder where it can be easily updated using SharePoint Designer.


Here’s a screen shot of some of the code. Note the references to different javascript functions to implement the html for the Footer, Header and Item.
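The pattern behind that screen shot can be sketched as follows. The function names and markup here are assumptions rather than the code in the screen shot, and registration is again guarded so it only happens in the browser:

```javascript
// Custom Header: open a container and a list, using the web part's title.
function renderListHeader(ctx) {
  return '<div class="customList"><h2>' + ctx.ListTitle + '</h2><ul>';
}

// Custom Item: render just the file name (FileLeafRef is the file name
// column in a document library).
function renderListItem(ctx) {
  return '<li>' + ctx.CurrentItem.FileLeafRef + '</li>';
}

// Custom Footer: close the markup opened by the Header.
function renderListFooter(ctx) {
  return '</ul></div>';
}

// Wire the three functions up as list-level overrides.
if (typeof SPClientTemplates !== 'undefined') {
  SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
    Templates: {
      Header: renderListHeader,
      Item: renderListItem,
      Footer: renderListFooter
    }
  });
}
```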


I hope this has been useful to help you envision the value provided by Client-Side Rendering.

For a more detailed description of the options that are available, I recommend the following 3 blogs by Suhail Jamaldeen. He has provided a lot of useful detail that will be helpful if you determine Client-Side Rendering is a good solution for your problem.

Tim Coalson, What is Client-Side Rendering and Why Would I Use It?