
Typical Action Plan for Jive to Microsoft Migrations

We’ve been doing Jive to Microsoft (including Office 365/SharePoint) migrations for the last 4-5 years and we’ve got a great process and tooling to help with these migrations.

One of the things I like to do early in the process is to put together an action plan (from Solution Selling). It's based on one of the seven habits: Begin with the End in Mind.

Here's a typical action plan based on our experience – it might seem like some of these timelines are longer than expected, but they reflect the reality of what we've seen (we are typically working with larger clients, so usually the paperwork is what slows us down).

Tentative Action Plan for Transition from Jive to Microsoft

Now – Download and run the Migrator Trial Version, get counts on People and Places, and answer the questions about your migration in the Pre-Migration Questionnaire

+2 weeks – Decide on and schedule 2-day Migration Workshop

+1 month – Begin getting NDA/MSA/SOW in place for the Workshop and get date for Workshop tentatively set

+6 weeks – Pre-Migration Workshop meeting and finalize paperwork

+2 months – Migration Workshop and POC (migrate a couple of places)

+2 months – Client completes mapping document (where Jive content is going to in Office 365)

+3 months – Get SOW in place for migration with Project Plan

+3 months (2-4 weeks) – Pilot Migration and Data Extraction

+4 months (typically 3 weeks per 1k places) – Production Migration and Final Acceptance

+5 months – Contingency Time

+5 months – Off Jive

We've done smaller migrations in less time (I think the smallest was around 6 weeks total) and have sped up the process with clients that can make decisions and get paperwork in place more quickly. Some things, like communication to and feedback from department heads and end users, require time from more than just the team and are dependent on others.

Important factors affecting the timeline are whether the client is migrating only content types we've worked with on previous projects and the number of places we are moving.

Read more in our Frequently Asked Questions (which has details on things like the 2-day Migration Workshop and the content types we can migrate).

Danny Ryan

Long Running Operations in Azure

For a project, we recently had to implement a mechanism for performing bulk data operations. Essentially, the customer wanted an end user to upload an Excel file with a bunch of tabular data in it that we would then process and insert into SQL Server. We knew the act of processing and inserting could be very time consuming (we estimated ~30 seconds/row x 10,000 max rows ≈ 300,000 seconds, or roughly 3.5 days), so we wanted something very robust. Simply working in the application pool wasn't an option; IIS would time us out. When coding the on-premises implementation, we just used a Windows service to watch for a file event in a folder and process the file uploaded by the user. Easy peasy lemon squeezy.
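
That watcher pattern is about as simple as it sounds. Here's a minimal sketch (not our actual service code; the folder path and the ProcessUpload handler are illustrative stand-ins):

using System.IO;

// Hypothetical handler: parse the Excel file and insert rows into SQL Server.
void ProcessUpload(string path) { /* long-running import */ }

// Watch a shared folder for Excel files dropped by the user.
var watcher = new FileSystemWatcher(@"C:\BulkImport\Incoming", "*.xlsx");
watcher.Created += (sender, e) => ProcessUpload(e.FullPath);
watcher.EnableRaisingEvents = true;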

However, Azure works differently. To have the equivalent of a Windows service in Azure, we either needed a Worker Role or an Azure Service Fabric stateless worker. Both of these approaches were undesirable since they always consume resources, even when they are idle. This bulk import process would be used once, maybe twice, in the initial phases of an engagement and then never again over the course of months (or maybe even years). That means money is being spent even when nothing is happening.

Our first idea for solving this was to use an Azure Function App. A Function allowed us to listen to an Azure BLOB storage account for when a file was written and then react accordingly. However, we quickly realized that Function Apps have a 5-minute timeout (at least the Consumption plan ones do; I think the 'always on' Functions do not have this limitation), and our job was likely to run for days.

The next couple of ideas we tried were very complicated, and in the end, none of them worked. That's when we stumbled across Azure Batch Services. Batch Services are a way to run very long running operations, much like Worker Roles. However, unlike Worker Roles, you only pay for Batch Services when a batch job is running. The Azure team was even nice enough to provide some example projects that show how to wire up a job, start it, monitor it, and eventually kill it when it's done (and all kinds of other very helpful things). Even better, we learned we could set up a Function App to trigger off a file being written to BLOB storage that would start the batch job, but the Function App didn't have to wait until the job was completed, so we weren't limited by the 5-minute timeout any longer. Better still, when setting up a job, you can specify how many cores and nodes in the batch cluster you want. That allows us to "preflight" the job by looking at how many records we will be importing and scale our job accordingly. Very slick!
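
To make that concrete, submitting a job through the Batch .NET client looks roughly like this. This is a condensed sketch based on the sample projects, not our production code; the account values, pool ID, and the ImportWorker command line are placeholders:

using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

// Placeholder account values -- copy these from your Batch account in the portal.
string batchAccountUrl = "https://<account>.<region>.batch.azure.com";
string batchAccountName = "<account>";
string batchAccountKey = "<key>";

using (BatchClient batchClient = BatchClient.Open(
    new BatchSharedKeyCredentials(batchAccountUrl, batchAccountName, batchAccountKey)))
{
    // Create the job and point it at a pool; the pool's node/core count is where
    // you "preflight" by sizing to the number of records being imported.
    CloudJob job = batchClient.JobOperations.CreateJob();
    job.Id = "bulk-import-job";
    job.PoolInformation = new PoolInformation { PoolId = "bulk-import-pool" };
    job.Commit();

    // Queue the actual work; this command line runs on a compute node in the pool.
    CloudTask task = new CloudTask("import-task", "cmd /c ImportWorker.exe input.xlsx");
    batchClient.JobOperations.AddTask(job.Id, task);
}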

It feels weird writing a blog post without code, so here are some steps that should get you pointed in the right direction:

  1. Create a Batch Service account from the Azure Portal.
  2. Grab one of the sample projects from the GitHub project I linked above. I used HelloWorld as a starting place.
  3. Open Main.cs and comment out the line for Console.ReadLine at the end of Main() (~line 38), the call to WaitForJobAndPrintOutputAsync (~line 71), and all the code in the finally block (~lines 75-80) in HelloWorldAsync() (mainly we just want HelloWorld to tee up the job, not wait for it to finish). Save and compile everything.
  4. Make sure your Azure specific settings (keys, URL, etc.) are saved in Common’s AccountSettings.settings.
  5. Create a new Function App. Add a BlobTrigger-CSharp Function and add the following code into Run (you will also need to add ‘using System.Diagnostics’ at the top):
    Process process = new Process();
    // Path to the HelloWorld exe you uploaded via Kudu (adjust to match your folder structure)
    process.StartInfo.FileName = @"D:\home\site\wwwroot\BlobTriggerCSharp1\HelloWorld\HelloWorld.exe";
    process.StartInfo.Arguments = "";
    // Redirecting the output streams requires UseShellExecute = false
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.Start();
    // ReadToEnd blocks until the exe closes its streams; since HelloWorld only
    // tees up the batch job (it no longer waits on it), this returns quickly
    string output = process.StandardOutput.ReadToEnd();
    string err = process.StandardError.ReadToEnd();
    log.Info(output);
    if (!string.IsNullOrEmpty(err)) log.Error(err);
    process.WaitForExit();
    
  6. Open the Function App settings and open Kudu. Use Kudu to upload the contents of the HelloWorld Debug folder into the Function App.
  7. Configure the BLOB triggers for the Function App using any path and storage account you want to (the resulting binding is sketched after this list).
  8. If needed, correct the path to the EXE above.
  9. Open Azure Storage Explorer and upload a file to the storage account you configured in #7.
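
For reference, the trigger you configure in step 7 ends up as a binding in the Function's function.json, which might look roughly like this (the container path and connection name are whatever you chose):

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "uploads/{name}",
      "connection": "AzureWebJobsStorage"
    }
  ]
}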

You should now see your HelloWorld job if you navigate to the Batch Service's Jobs in the Azure Portal. Note that because we took out the code that deletes the job once it's done from HelloWorld, you will need to add that logic somewhere else in your workflow (maybe another BLOB trigger that kicks off another Azure Function App).
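
If you go that route, the cleanup itself is small. Here's a hedged sketch using the same Batch client as the samples (the placeholder account values are as before, and "HelloWorldJob" stands in for whatever job ID your submitter created):

using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

using (BatchClient batchClient = BatchClient.Open(
    new BatchSharedKeyCredentials(batchAccountUrl, batchAccountName, batchAccountKey)))
{
    // Delete the job (and its tasks) once you've verified it has completed.
    batchClient.JobOperations.DeleteJob("HelloWorldJob");
}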

And that’s it. An on-demand, scalable batch processing mechanism that works (and costs us money) only when we want it to.

Lane Goolsby

How to Prepare to Migrate to SharePoint Online

Danny: Hello and welcome to the ThreeWill Podcast. This is your host, Danny Ryan, and today I have Bruce Harple here with me, the VP of Delivery. Did I hit that?

 

Bruce: You hit it right, Danny.

 

Danny: It kind of rhymes, too. Yo, I'm Bruce, I'm the VP of Delivery.

 

Bruce: Yeah.

 

Danny: Well, all right. You do an amazing job with the tasks that we give to you. Tommy and I were talking earlier, and we think we might have more projects than we have people, and that's kind of crazy.

 

Bruce: That's a great thing. We love it.

 

Danny: We'll take it, but you've got a lot on your plate in managing all these different projects. It's amazing what you do, Bruce.

 

Bruce: Hey, thanks, Danny. I appreciate it.

 

Danny: You bet you. Today, we wanted to talk about migrating to Office 365. I think it sounds so simple, right? You probably run into this all the time: "Well, we've set up our initial sites in Office 365 and it's all done. There's not very much to this." And really, there is so much you need to be thinking about. When we talked earlier, you said the overall theme for this podcast is be prepared. Be prepared. I guess that's the Scout motto, right?

 

Bruce: That's exactly right, Danny, and that's exactly where it came from. I was a Boy Scout, so …

 

Danny: Nice.

 

Bruce: I still apply that really basic, fundamental motto of the Scouts, being prepared, and I think it really, truly applies to these migrations to Office 365.

 

Danny: Well, get us started here. What should we first be thinking about?

 

Bruce: Yes, so what I thought I'd start out with is what we often see happening with customers as they're looking at migrations: they really underestimate the effort and the complexity of these migrations.

 

Danny: Yeah.

 

Bruce: I try to think through why that is. What are some of the things that lead them to believe it's going to be an easy process to move content from point A to point B up in the cloud? Some of the things I was thinking about, and we're guilty of this as well: one is that we assume all these migration tools that are out there in the marketplace will automatically just migrate all the content.

 

Danny: Yep.

 

Bruce: The reality of it is, that's not true. Will they get 80, 90% there? Absolutely, but there's a lot of content that won't migrate successfully using all these tools. That is one thing I think people don't anticipate, don't plan for, and aren't prepared for.

 

Danny: What do you do at that point? Create a mitigation plan for that type of content, or see how important it is for it to move over?

 

Bruce: Yeah, one of the things that we'll talk about later on is the concept of running a proof of concept early.

 

Danny: Got you.

 

Bruce: Yeah, to really vet out where there are gaps in those migration tools and in the automation, which really leads to the next thought around why people underestimate these migrations. The other thing we've seen is that people really don't fully understand their current state environments. I mean, they understand they've got content stored in SharePoint, but depending on how much governance they have in place and how closely they really monitor and manage those environments, they kind of get that SharePoint sprawl. People take advantage of SharePoint. They stand up a lot of sites. There's a lot of content out there, even to the point where there gets to be a lot of customizations. There are third-party components that get added in, but they really don't truly understand, especially in a large company, all the business scenarios out there and everything everybody is doing to use SharePoint as a collaboration and document management platform. A lot of people really underestimate how big and how widely used it is, and the depth and breadth of that usage.

 

Danny: Talking this through, typically the way people move out to Office 365 is they put in Exchange first and then they start looking at other things. Right?

 

Bruce: That's right.

 

Danny: I could see people saying, "Okay, well, Exchange was easy enough." Exchange is like moving your email over, and there aren't very many ways of customizing Exchange.

 

Bruce: That's right.

 

Danny: I can see them making the misstep of thinking, "Oh, we've already moved our Exchange stuff over. SharePoint, it's going to be as simple as that as well."

 

Bruce: Yeah, and another thing related to that, and where SharePoint becomes more complex, is if there are customizations.

 

Danny: Yep.

 

Bruce: Again, not every IT organization understands the extent of all those customizations, because these customizations could've been implemented over years of using SharePoint as their content collaboration platform. That's another thing: I think people don't fully understand how much customization is out there and what the purpose of that customization is. How important is it to the business? Because you've got to make some pretty important decisions about how you're going to remediate all those customizations.

 

Danny: Yep, yep. Makes sense. What's next?

 

Bruce: I think the other thing is that in many cases IT doesn't really set realistic expectations with the business. Again, they think a migration is just, "I'm going to take this content from point A and put it in point B." Well, at the end of the day, point B looks a lot different than point A. Even in the Microsoft ecosystem, SharePoint Online has some differences from SharePoint on-prem, and assuming it will be a simple transition for users is also where I think customers tend to underestimate the effort. You have to really make sure those customers understand that maybe all content won't get migrated and maybe the user experience is going to be a little bit different.

 

Really making sure that they set expectations with the business and get them actively engaged is really important. Also, a lot of times customers don't have a strategy for really branding Office 365, so the user experience out of the box is fine, but there's no sizzle to it. There's no branding as far as your corporate identity, so I think in a lot of cases people don't invest the time to think about that user experience, what we need to do to really optimize that experience, and how to get the business engaged early on in that process.

 

Danny: Yep.

 

Bruce: Then the last thing, and it's related to that, is that I think many times organizations don't put enough time and investment into having an effective communication plan that talks about the schedule for the migration and its impact. What do we have to do to retrain the business community? Helping them understand what content has been moved and what has not been moved, how to find the new content, et cetera. Those are, as I think through them, some of the challenges and some of the things that we see companies a lot of times not think about, which causes them to miss the mark on making the migrations successful.

 

Danny: I can see that, especially the communications and setting expectations inside the organization being critical. Especially if you're moving from some other platform like Jive into Office 365, there's some level setting that needs to occur inside the business.

 

Bruce: Absolutely, yeah, because it is a whole different user experience when you're working in SharePoint Online versus a Jive social collaboration platform.

 

Danny: What's next?

 

Bruce: Let's do this. What I would like to talk about now is how companies can be better prepared.

 

Danny: Okay.

 

Bruce: All right, so let's just talk through that. I'm going to start by just talking about planning, which is pretty simple, pretty basic. Everybody gets that you need to have a plan. Yeah, so the first thing … Danny, I know you've experienced this. I know you tell customers this all the time: for these migrations, you really need to start your planning at least six months in advance. We can't tell you the number of times customers are moving from a Jive platform to a SharePoint platform, or there's an impending merger or acquisition where they're going to move to an integrated collaboration platform, and they start thinking about that six weeks before the event occurs. Quite simply, there's not enough time in six weeks for a big enterprise to consolidate and migrate all this content and have a great user experience at the end of it. I know, Danny, during the sales cycle you talk about that, and it's so critical. I'm saying start at least six months out. At least.

 

Danny: I say eight to 10.

 

Bruce: There you go.

 

Danny: It's up on our site, eight to 10. That's ideal, because typically I try to put together an action plan for making the move, and my template really starts some of those steps eight months out to be successful. What ends up happening sometimes is, if they bring us in too late, we have to wait another year, because they're typically annual renewals and you just have to say, "Guys, we're not going to be able to do this in this timeframe." We don't have that luxury with mergers, unfortunately, but for folks who are saying, "Look at this upcoming year. If we focus in on Office 365 and maybe merge Jive into it, then we could save this money," if they come in too late, there's not a whole lot we can do.

 

Bruce: Yep, absolutely. The next thing I had from a planning perspective: in many cases, there is a business driver for these migrations. It might be a license for a platform that's expiring, an acquisition or merger coming up, or somebody's just on an older platform and needs to move to a newer version of that platform. A lot of these, as I mentioned, tend to be IT-initiated projects, but I can't express how important and how critical it is to have the business totally aligned with IT. There needs to be an IT executive sponsor and a business executive sponsor, and those two sponsors really need to be aligned and attached at the hip, because you won't be successful if it's an IT-driven project. The business has to step up, has to take ownership, has to be engaged. Many times we see that not taking place, and that's critical.

 

Danny: Yep, absolutely.

 

Bruce: The next thing on my list in planning is establishing what I'll call migration principles. These principles are things and decisions that really guide the team in the decision-making process, the investments they're going to make in automation, the investments they're going to make in remediation, and how they're going to set proper expectations with the end users. When I say migration principles, the areas I think about are content type and content mapping and prioritization. From a content perspective, what content is in and what content's out? What's going to get migrated? What's not going to get migrated? What are we going to just archive and put on the back burner?

 

Handling customizations. What's the migration principle for how to really deal with customizations? Are they just out of scope? Do we come up with a remediation plan and alternatives with cost benefits? Then there's the decision-making process. What guidance is there for the team? What about issue remediation? What are the guiding principles around the amount of time we invest in remediating issues? Many times that needs to be tied to how many people it's impacting and how much content it's impacting. Set those parameters up front so the team can make quick decisions around remediation.

 

Danny: Yep.

 

Bruce: Then really just looking at automation versus manual. In some cases, it's easier and less costly just to do a manual migration. An example of that might be video content. A lot of customers will put video content out in SharePoint. They might have it out on a platform like Jive. The number of videos might be in the hundreds, not the thousands. Well, if it's hundreds of videos, it's really not that difficult an effort to download those videos from one platform to a file share and then manually upload them to the other platform. In some cases it makes more sense to do a manual migration, so up front you come up with the principles, the criteria, for helping the team decide where to invest in automation versus manual.

 

Danny: Nice.

 

Bruce: That was my migration principles piece. The next two areas are, I think, things everybody understands, and most people do to a certain level, but the first one is that you really do have to understand your current state. Like I said, depending on an organization's level of governance, there could be a lot of SharePoint sprawl or Jive sprawl. A lot of Jive places and SharePoint sites where there is no way IT knows everything those sites are being used for. How much content is there? How many binder documents are there? How many collaborative documents? How many wiki pages? Exactly what's there? How many workflows are there in that solution? How many web parts have been added, et cetera? It's critical to get an inventory of what you have. You have to understand that current state. What you're really looking for is the exceptions. You're really looking for the customizations, because that's where most of your remediation's going to be.

 

Danny: Yep.

 

Bruce: Then the other part is planning out your future state. When you go into Office 365, it's not a simple lift and shift like it might be if I was going from one on-premises environment to another. I really need to think through, in SharePoint Online, what is my information architecture? What's that going to look like? What about my taxonomy? It's a great opportunity to look at how, based on understanding my current state, I can fine-tune that for a better search experience and a better navigational experience in my future state. We think it's a great time to make those kinds of decisions. Not to over-engineer it, but to look for how we can simplify and how we can get more consistent across this new platform.

 

Then the other thing is branding. I mentioned it earlier. SharePoint Online is not branded when you get there. Most people want to have a branded experience. They want some kind of a better experience. They want it to look and feel like the corporate brand or their divisional brand. There are different ways to do branding in Office 365; there are probably four to five different levels of branding. As you go up that chain, you get more and more into customizations, which are risky in Office 365. Changing master pages in Office 365 is a risky thing to do, because Microsoft could put out updates to SharePoint Online that could break those master pages. It's important to think about that and plan it out. Then there's really just picking a migration tool. We actually suggest Metalogix; that's our tool of choice. We've found that tool suite to be very robust, and again, it meets most of the migration needs, though not all, as I said. You will need a tool to get from an on-prem environment to Office 365. Those are my thoughts around planning and what we've learned. Those are the high points of it.

 

Danny: Awesome. What do we have to wrap us up here?

 

Bruce: Next, I want to talk about execution of the migration. One of the things that we've found is you hear a lot about different development and implementation processes. You hear the terms agile, waterfall, Kanban. Quite honestly, what we've learned is that migrations are all of the above. There are different parts of a migration where you're going to be agile, because you're learning stuff each day. You're identifying issues that you didn't think existed. You're mitigating those issues and you're iterating through your migrations. There are parts that are pure waterfall: I've got to get step one done before I can move on to step two. I guess my point there is, don't box yourself into one methodology or one process. Be willing to adapt and to flex, because you're going to have to. It's just the nature of migration.

 

Danny: Scrum-ban-fall?

 

Bruce: Scrum-ban-fall, there you go. I like it. These are just things for people to think about that will help your execution become more predictable. One is to do early proofs of concept of the migration. What do I mean by proof of concept? A proof of concept is taking one to four sites or places and, using the tools, running a migration. Then assess that migration. What worked? What didn't work? What are the gaps? What you're really looking for is the gaps. What's not getting migrated, and why is it not getting migrated? The next step is plugging the gaps in those migration tools. Once you run that proof of concept, you'll see the gaps real clearly. Then you've got to figure out how to plug those gaps. Is it further automation I do through my own custom scripting? Is it manual migration?

 

Am I just going to manually migrate content, or am I just going to say, "You know what? There are not enough occurrences of that type of content. Let's just archive it. It's not going to get migrated," or we tell the users, "We're not going to migrate it. If you want it, you've got X number of days to go get that content and move it to your place." With the early POC, again, with migrations, each step of the way you're trying to reduce risk and become more predictable, because at the beginning you have risks. You don't have a high level of predictability. You've got to get to the POC: now I've learned some things, I've eliminated some risk, and I'll get more predictable. Then the next step is what most people do, and that's pilot migrations. This is really where you've got to get the business involved. Again, we want the business users to look at that future state once their content has all been moved to SharePoint Online.

 

We want you to look and see what that user experience is like. Can you find that content? How easy is it for you to search and find the content you have out there? Which is good. Again, you're going to find issues. Some things are not going to work. Some things will be broken. Some content will be missing, or it won't look the way they want it to look, but the beauty is you're finding it out early. Again, you're trying to get down to that next level of reducing risk and getting to a more predictable end state. It's a good thing. A pilot migration isn't just one pass. Pilot migrations, maybe it's 12 sites or 12 places, and I'm not going to do that just one time. I'm going to do the pilot. I'm going to get the feedback. I'm going to make adjustments to my tools, to my scripts, to the branding and user experience. I'm going to migrate again and ask the users to look at it again. I'm going to get the feedback and I'm going to migrate again.

 

Danny: It's almost like you're versioning the migrations.

 

Bruce: Yeah, I mean, you're going to. Eventually, we actually like customers to sign off. We like to get to the point of, "I accept this future state: the look, the feel, the way the content's organized, et cetera." Those iterative migrations are important. The other thing that you want to do during that is really capture the time to migrate, because before we start the production migrations, we want to have some way to predict how long it's going to take to pull all that content from my current state and push it into a cloud-based platform. That's really critical to that predictability.

 

Danny: You're able to at least extrapolate off the data that you have from the pilots to do that.

 

Bruce: Absolutely.

 

Danny: Yep, makes sense.

 

Bruce: Absolutely. Then just a couple of points as we get into the production migrations. We're big believers in breaking a big problem down into little pieces. Production migrations can be very, very big. We're big believers in breaking the migration up into batches or iterations. You can work with customers to prioritize what content is most important, maybe migrate that first, but break it up into smaller iterations. That also enables you, with a smaller amount of content, to validate the success of each iterative migration and to remediate any issues. Again, it's all about risk reduction and predictability, and trying to move that dial each time. You want to inspect and adapt after each iteration.

 

Then at the end, you really want to be prepared to run what we call delta migrations. It may take a while to actually extract content and push that content to the new platform. There are different ways to mitigate that. Some people will actually lock down the source system and disable people from using that system. Others say, "I can't do that. It's mission critical. I can't stop people from continuing to use it." In that case, you've got to come back and pick up all the changes, and you may have to push those changes. Depending on your migration approach and strategy, be prepared to have that delta migration as a way to mitigate that incremental need to pull new content.

 

Danny: Nice.

 

Bruce: Yeah, that was my part on execution. Then lastly, Danny, is just overall communication and governance. That's so key and so critical. Again, it's all about setting expectations. I mean, Danny, we're big believers that a good customer for us is an educated customer.

 

Danny: Yep, absolutely.

 

Bruce: They understand what's going to happen, how they're going to be impacted, and what their day-to-day interaction is going to look like, feel like, and be like. That's all about communication and getting them prepared for this new world and the new user experience that they're going to have, and understanding how they can find their content. That's just so key: the education piece, and making sure they are ready, they understand, and expectations have been set. Then, especially as you're going to SharePoint Online, governance is critical. If you don't have a governance plan …

 

A lot of people say they do, but they really don't. If they do, they don't really use it. I think as you go into these cloud-based environments, they actually enforce a bit more governance, because it's not as easy to just stand things up and create sites and create places. Governance is really important: really having a way that you're going to govern those cloud-based environments so that they stay structured, they stay controlled, but you still give users the flexibility to do the things they need to do to run their business. I'll close with that as my last thought related to this.

 

Danny: Well, this has been great. I think for somebody preparing to make the move to Office 365/SharePoint Online, there are a lot of great points that you brought out in this. I know we've been doing a lot of these projects over the last year, and I foresee us helping out a lot of people with this in the upcoming year as well. Hopefully this gives you some insights and some things to think about when preparing to make the move. I know these are tough projects, and it sounds like the team is using a lot of things they've learned from their app dev background, with versioning and chunking things up, and really using that background as an advantage for these migrations. I love how flexible you guys are: you learn, share it, and then roll it out into the next project we go after. You guys are doing a great job.

 

Bruce: Yeah. Absolutely, Danny. Every environment's different. Every customer's different. There are always new experiences for us and new things we learn. We love it, because we get to capture that and reuse and apply it on the next migration. That's the kind of stuff we enjoy. We enjoy the challenges. We enjoy making sure that our customers are successful, and at the end of the day we're very committed to making sure those end users have a great experience when they get to SharePoint Online. We have a lot of passion around that, as you know.

 

Danny: Yep. One of the first things I say, when I talk with people about migrations, is you can do everything perfectly in the migration and get everything migrated over exactly like it should be, but if you didn't have a communication plan, if you didn't set expectations with your end users around this, it would be considered a failure. I'm glad you wrapped up with that, because I think that's something important, and we end up coaching our customers on how to do that effectively. That's really important.

 

Bruce: Absolutely.

 

Danny: Awesome. Well, thank you. If you've made it through to this point, we're almost at 30 minutes. I think I've used up all my marketing budget on the transcription for this, but we'll see how that goes. Thank you so much for taking the time. If you're making the move to SharePoint Online and want someone with a lot of experience to do this, we've got a lot of good things that we've learned through the years. It's a one-time thing that people are going through, and it really doesn't make sense to use your internal folks to learn all the lessons that we've already learned.

 

Go ahead and outsource this. We can coach you through the whole thing and point you to the right tools and the right ways of doing this, so that the migration is a success. We'd love to hear from you. Come to the ThreeWill site, go to the Contact Us page, and you'll interact with Bruce and me on an estimate and all the good stuff. We'll get you moved over so that you can take advantage of all the wonderful things that Microsoft is doing in Office 365. Thanks again, Bruce, for taking the time to do this.

 

Bruce: Absolutely. I've enjoyed it, Danny.

 

Danny: Great. Everybody have a wonderful day. Take care. Bye-bye.

 

Bruce: Bye-bye.

 

Bruce Harple

Migrating a Document Library and Retaining Authorship Information

Lately, I've had the opportunity to work on a couple of on-premises SharePoint 2010 to SharePoint 2013 migrations, which proved to be both fun and challenging. Our team chose to leverage PowerShell and CSOM for handling all the heavy lifting when it came to provisioning the new site and migrating data. This, in turn, led to one of the more interesting things I got to do, which was write a PowerShell script that would migrate the contents of a document library from the old 2010 site to a document library in the new 2013 site, while at the same time retaining authorship information.

As I researched how to do this, I found that there wasn't any one source that explained all the steps needed in a nice, concise manner. I found dribs and drabs here and there, usually around one step or another, or found that the code provided was using server-side code, which wasn't helpful in my case.

So, I decided it would be worthwhile to pull all the parts into one document, both for my own reference and for others. And yes, I admit it. I shamelessly pilfered from some of my co-workers' fine scripts to piece this together. Thanks, team, for being so awesome!

Here is an outline of the high-level steps needed to move files between sites. You can find the full script at the bottom.

Basic steps

  • Get a context reference for each of the sites (source and target). This will allow you to work on both sites at the same time.
  • Load both the source and target libraries into their appropriate contexts. Load the RootFolder for the target library as well. This will be needed both when saving the document and when updating the metadata.
  • Have a valid user ID available to use in place of any invalid users found in the metadata being copied, and ensure (via EnsureUser) this 'missing user' ID in the target site.
  • Query the source library using a CAML query to get a list of all items to be processed. You can apply filters here to limit results as needed. (Attached code has parameters for specifying start and end item ids).
  • Loop through the items returned by the CAML query
    • Get a reference to the source item using the id
    • Get a reference to the actual file from the source item
    • Load the source file by calling ‘OpenBinaryDirect’ (uses the file ServerRelativeUrl value)
    • Write it back out to the target library by calling ‘SaveBinaryDirect’ on the just loaded file stream
    • Copy over the metadata:
      • Get a reference to the new item just added in the target library
      • Populate the target item metadata fields using values from the source item
      • Update the item
    • Loop

That’s it, in a nutshell. There are all sorts of other things you can do to pretty it up, but I thought I would keep this simple as a quick reference both for myself and others. Be sure to check the top of the script below for other notes about what it does and does not do.

<#
.SYNOPSIS
Copies documents from one library into another across sites.
This was tested using SharePoint 2010 as the source and both SharePoint 2010 and SharePoint 2013 as the target.

Notes:
* Parameters can either be passed in on the command line or pre-populated in the script
* Example for calling from command line:
./copy-documents.ps1 "http://testsite1/sites/testlib1/" "domain\user" "password" "source lib display name" "http://testsite2/sites/testlib2/" "domain\user" "password" "target lib display name" "domain\user" 1 100

Features:
* Can cross site collections and even SP versions (e.g. SP2010 to SP2013)
* Allows you to specify both the source and target document library to use
* Can retain created, created by, modified, modified by and other metadata of the original file
* Can specify a range of files to copy by providing a starting id and ending id
* When copying metadata such as created by, will populate any invalid users with the provided 'missing user' value
* Uses a cache for user data so it doesn't have to run EnsureUser over and over for the same person

Limitations:
* Does not currently traverse folders within a document library.
* This only copies.  It does not remove the file from the source library when done.

#>

[CmdletBinding()]
param(
	[Parameter(Mandatory=$false)]
	[string]$sourceSiteUrl = "",
    [Parameter(Mandatory=$false)]
	[string]$sourceUser = "",
	[Parameter(Mandatory=$false)]
	[string]$sourcePwd = "",
	[Parameter(Mandatory=$false)]
    [string]$sourceLibrary = "",
	[Parameter(Mandatory=$false)]
	[string]$targetSiteUrl = "",
    [Parameter(Mandatory=$false)]
	[string]$targetUser = "",
	[Parameter(Mandatory=$false)]
	[string]$targetPwd = "",
    [Parameter(Mandatory=$false)]
    [string]$targetLibrary = "",
    [Parameter(Mandatory=$false)]
    [string]$missingUser = "",
	[Parameter(Mandatory=$false)]
    [int]$startingId = -1,
    [Parameter(Mandatory=$false)]
    [int]$endingId = -1
)

## Load the libraries needed for CSOM
## Replace with the appropriate path to the libs in your environment
Add-Type -Path ("c:\dev\libs\Microsoft.SharePoint.Client.dll")
Add-Type -Path ("c:\dev\libs\Microsoft.SharePoint.Client.Runtime.dll")

function Main {
	[CmdletBinding()]
	param()
	
	Write-Host "[$(Get-Date -format G)] copy-documents.ps1: library $($sourceLibrary) from $($sourceSiteUrl) to $($targetSiteUrl)"
	
    # Get the context to the source and target sites
	$sourceCtx = GetContext $sourceSiteUrl $sourceUser $sourcePwd
	$targetCtx = GetContext $targetSiteUrl $targetUser $targetPwd

    # Ensure the "missing user" in the target environment
    $missingUserObject = $targetCtx.Web.EnsureUser($missingUser)
    $targetCtx.Load($missingUserObject)
	
	## Moved the try/catch for ExecuteQuery to a function so that we can exit gracefully if needed
	ExecuteQueryFailOnError $targetCtx "EnsureMissingUser"

	## Start the copy process
	CopyDocuments $sourceCtx $targetCtx $sourceLibrary $targetLibrary $startingId $endingId $missingUserObject
}

function CopyDocuments {
	[CmdletBinding()]
	param($sourceCtx, $targetCtx, $sourceLibrary, $targetLibrary, $startingId, $endingId, $missingUserObject)

    $copyStartDate = Get-Date

    # Get the source library
    $sourceLibrary = $sourceCtx.Web.Lists.GetByTitle($sourceLibrary)
    $sourceCtx.Load($sourceLibrary)
	ExecuteQueryFailOnError $sourceCtx "GetSourceLibrary"

    # Get the target library
    $targetLibrary = $targetCtx.Web.Lists.GetByTitle($targetLibrary)
    $targetCtx.Load($targetLibrary)
	## RootFolder is used later both when copying the file and updating the metadata.
    $targetCtx.Load($targetLibrary.RootFolder)
	ExecuteQueryFailOnError $targetCtx "GetTargetLibrary"

    # Query source list to retrieve the items to be copied
    Write-Host "Querying source library starting at ID $($startingId) [Ending ID: $($endingId)]"
    $sourceItems = @(QueryList $sourceCtx $sourceLibrary $startingId $endingId) # Making sure it returns an array
    Write-Host "Found $($sourceItems.Count) items"

    # Loop through the source items and copy
    $totalCopied = 0
    $userCache = @{}
    foreach ($sourceItemFromQuery in $sourceItems) {

        $totalCount = $($sourceItems.Count)

        if ($sourceItemFromQuery.FileSystemObjectType -eq "Folder") {
            Write-Host "skipping folder '$($sourceItemFromQuery['FileLeafRef'])'"
            continue
        }
		Write-Host "--------------------------------------------------------------------------------------"
        Write-Host "[$(Get-Date -format G)] Copying ID $($sourceItemFromQuery.ID) ($($totalCopied + 1) of $($totalCount)) - file '$($sourceItemFromQuery['FileLeafRef'])'"

        # Get the source item which returns all the metadata fields
        $sourceItem = $sourceLibrary.GetItemById($sourceItemFromQuery.ID)
        # Load the file itself into context
		$sourceFile = $sourceItem.File
        $sourceCtx.Load($sourceItem)
        $sourceCtx.Load($sourceFile)
		ExecuteQueryFailOnError $sourceCtx "GetSourceItemById"

		## Call the function used to run the copy
        CopyDocument $sourceCtx $sourceItem $sourceFile $sourceItemFromQuery $targetCtx $targetLibrary $userCache $missingUserObject
        
		$totalCopied++
    }

    # Done - let's dump some stats
    $copyEndDate = Get-Date
    $duration = $copyEndDate - $copyStartDate
    $minutes = "{0:F2}" -f $duration.TotalMinutes
    $secondsPerItem = "{0:F2}" -f ($duration.TotalSeconds/$totalCopied)
    $itemsPerMinute = "{0:F2}" -f ($totalCopied/$duration.TotalMinutes)
	Write-Host "--------------------------------------------------------------------------------------"
    Write-Host "[$(Get-Date -format G)] DONE - Copied $($totalCopied) items. ($($minutes) minutes, $($secondsPerItem) seconds/item, $($itemsPerMinute) items/minute)"
}

### Function used to copy a file from one place to another, with metadata
function CopyDocument {
    [CmdletBinding()]
    param($sourceCtx, $sourceItem, $sourceFile, $sourceItemFromQuery, $targetCtx, $targetLibrary, $userCache, $missingUserObject)

    ## Validate the Created By and Modified By users on the source file
    $authorValueString = GetUserLookupString $userCache $sourceCtx $sourceItem["Author"] $targetCtx $missingUserObject
    $editorValueString = GetUserLookupString $userCache $sourceCtx $sourceItem["Editor"] $targetCtx $missingUserObject

    ## Grab some important bits of info
	$sourceFileRef = $sourceFile.ServerRelativeUrl
    $targetFilePath = "$($targetLibrary.RootFolder.ServerRelativeUrl)/$($sourceFile.Name)"

    ## Load the file from source
    $fileInfo = [Microsoft.SharePoint.Client.File]::OpenBinaryDirect($sourceCtx, $sourceFileRef)
    ## Write file to the destination
    [Microsoft.SharePoint.Client.File]::SaveBinaryDirect($targetCtx, $targetFilePath, $fileInfo.Stream, $true)

    ## Now get the newly added item so we can update the metadata
    $item = GetFileItem $targetCtx $targetLibrary $sourceFile.Name $targetLibrary.RootFolder.ServerRelativeUrl

    ## Replace the metadata with values from the source item
    $item["Author"] = $authorValueString
    $item["Created"] = $sourceItem["Created"]
    $item["Editor"] = $editorValueString
    $item["Modified"] = $sourceItem["Modified"]

	## Update the item
    $item.Update()
    ExecuteQueryFailOnError $targetCtx "UpdateItemMetadata"

    Write-Host "[$(Get-Date -format G)] Successfully copied file '$($sourceFile.Name)'"

}

## Get a reference to the list item for the file.
function GetFileItem {
	[CmdletBinding()]
	param($ctx, $list, $fileName, $folderServerRelativeUrl)

	$camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
	if ($folderServerRelativeUrl -ne $null -and $folderServerRelativeUrl.Length -gt 0) {
		$camlQuery.FolderServerRelativeUrl = $folderServerRelativeUrl
	}
	$camlQuery.ViewXml = @"
<View>
	<Query>
   		<Where>
      		<Eq>
        		<FieldRef Name='FileLeafRef' />
        		<Value Type='File'>$($fileName)</Value>
      		</Eq>
		</Where>
	</Query>
</View>
"@

	$items = $list.GetItems($camlQuery)
	$ctx.Load($items)
	$ctx.ExecuteQuery()
	
	if ($items -ne $null -and $items.Count -gt 0){
		$item = $items[0]
	}
	else{
		$item = $null
	}
	
	return $item
}

## Validate and ensure the user
function GetUserLookupString{
	[CmdletBinding()]
	param($userCache, $sourceCtx, $sourceUserField, $targetCtx, $missingUserObject)

    $userLookupString = $null
    if ($sourceUserField -ne $null) {
        if ($userCache.ContainsKey($sourceUserField.LookupId)) {
            $userLookupString = $userCache[$sourceUserField.LookupId]
        }
        else {
            try {
                # First get the user login name from the source
                $sourceUser = $sourceCtx.Web.EnsureUser($sourceUserField.LookupValue)
                $sourceCtx.Load($sourceUser)
                $sourceCtx.ExecuteQuery()
            }
            catch {
                Write-Host "Unable to ensure source user '$($sourceUserField.LookupValue)'."  
            }

            try {
                # Now try to find that user in the target
                $targetUser = $targetCtx.Web.EnsureUser($sourceUser.LoginName)
                $targetCtx.Load($targetUser)
                $targetCtx.ExecuteQuery()
                
                # The "proper" way would seem to be to set the user field to the user value object
                # but that does not work, so we use the formatted user lookup string instead
                #$userValue = New-Object Microsoft.SharePoint.Client.FieldUserValue
                #$userValue.LookupId = $user.Id
                $userLookupString = "{0};#{1}" -f $targetUser.Id, $targetUser.LoginName
            }
            catch {
                Write-Host "Unable to ensure target user '$($sourceUser.LoginName)'."
            }
            if ($userLookupString -eq $null) {
                Write-Host "Using missing user '$($missingUserObject.LoginName)'."
                $userLookupString = "{0};#{1}" -f $missingUserObject.Id, $missingUserObject.LoginName
            }
            $userCache.Add($sourceUserField.LookupId, $userLookupString)
        }
    }

	return $userLookupString
}

## Pull ids for the source items to copy
function QueryList {
    [CmdletBinding()]
    param($ctx, $list, $startingId, $endingId)

    $camlQuery = New-Object Microsoft.SharePoint.Client.CamlQuery
    $camlText = @"
<View>
    <Query>
        <Where>
            {0}
        </Where>
        <OrderBy>
            <FieldRef Name='ID' Ascending='True' />
        </OrderBy>
    </Query>
    <ViewFields>
        <FieldRef Name='ID' />
        {1}
    </ViewFields>
    <QueryOptions />
</View>
"@

    if ($endingId -eq -1) {
        $camlQuery.ViewXml = [System.String]::Format($camlText, "<Geq><FieldRef Name='ID' /><Value Type='Counter'>$($startingId)</Value></Geq>", "")
    }
    else {
        $camlQuery.ViewXml = [System.String]::Format($camlText, "<And><Geq><FieldRef Name='ID' /><Value Type='Counter'>$($startingId)</Value></Geq><Leq><FieldRef Name='ID' /><Value Type='Counter'>$($endingId)</Value></Leq></And>", "")
    }

    $items = $list.GetItems($camlQuery)
    $ctx.Load($items)
	ExecuteQueryFailOnError $ctx "QueryList"

    return $items
}

function GetContext {
	[CmdletBinding()]
	param($siteUrl, $user, $pwd)
	
	# Get the client context to SharePoint
	$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
	$securePwd = ConvertTo-SecureString $pwd -AsPlainText -Force
	$cred = New-Object PSCredential($user, $securePwd)
	$ctx.Credentials = $cred
	
	return $ctx
}

function ExecuteQueryFailOnError {
	[CmdletBinding()]
	param($ctx, $action)
	
	try {
		$ctx.ExecuteQuery()
	}
	catch {
		Write-Error "$($action) failed with $($_.Exception.Message).  Exiting."
		exit 1
	}
}

### Start the process
Main

Caroline Sosebee

Configure Scheduled Shutdown for Virtual Machines in Azure

Scheduled Shutdown in Azure

I've been using Virtual Machines in Azure for a while, and it's a great option. My favorite part is how easy it is to expose Virtual Machines in Azure to the public internet, which can be really useful for a number of testing scenarios; it's been a great resource. I'm able to use it because, as part of my MSDN subscription, I'm provided a monthly credit. The only issue is that the credit is only approximately enough to cover Virtual Machines that are running during working hours, approximately eight hours a day, five days a week.

That works out great as long as I remember to start my Virtual Machines in the morning and then stop them at the end of the day. Unfortunately, I've forgotten to stop my Virtual Machines a number of times, so they ran overnight. One time, they ran over the weekend, and that caused me to overrun the monthly credit that I have as part of my MSDN subscription.

A colleague of mine, Rob Horton, caught wind of that recently and directed me to the blog post that I'm showing, which walks you through the process of configuring what's called a runbook in Azure; that runbook will run a PowerShell script. It's actually called a graphical PowerShell script (there are graphical PowerShell scripts and, I guess, non-graphical PowerShell scripts), and this walks you through configuring a graphical PowerShell script as part of your runbook, which can be used to automatically stop your Virtual Machines.

I configured that a couple of weeks ago. It's been awesome! I configured mine to run at 6:00 PM, which is approximately the end of the day, and then 2:00 AM, which could be the end of my day if it's a long one. If I'm working on something into the night, 2:00 AM will probably be the latest. I just want to make sure it runs twice, just to make sure everything shuts down. I'll walk us through the process of configuring my runbook. I'm showing on the screen now the blog post that I followed, but there was one key difference in how I configured mine versus how this blog post walks you through it. That is, I used a user account to run my runbook, whereas the blog post walks you through using an application account.

I think there are probably good reasons for using an application account. I tried that and couldn't get it to work, but I got it to work with a user account. I could probably circle back through and get it to work with an application account, but I just haven't put that effort into it. I'm going to walk you through today how to configure it using a user account. Let me get started.

The first thing I’m going to do is create the account. To do that, I’m going to go into the portal and choose Active Directory to get into my Azure Active Directory accounts. I’m going to go to the users tab. I’m going to add a user. I’ll create a new user and I’m going to call this VM Maintenance. Choose next and we’ll just call it VM Maintenance. Just create it as a standard user and you will get a temporary password for that user. I’m going to copy that to my clipboard. We’ll go ahead and close that up.

I'm going to open up a new incognito window. Then I'm going to go ahead and log in as this new user. It's going to prompt me to change my password, which I will do. Take my password and sign in. Now my credential is set, and my next step is to configure this user as a co-administrator of my Azure subscription. To do that, I'm going to scroll down to Settings, choose Administrators, and then choose Add. Add in my user and select that user as a co-administrator.

You'll see the user is actually listed in here twice at the moment, because I did it earlier while preparing for this demo. Let's just remove the first one. There we go. Hopefully, I removed the old one and not the new one, but we'll see what happens there. My account is now a co-administrator of this subscription. I know that seems pretty heavy-handed. I have read blog posts, Stack Overflow posts, and so forth from people who have stated that this is necessary, and I've actually tried to give this user a lower privilege level, and it was not successful. This does seem to be necessary.

That should be what we need for our accounts. Next, I'm going to go back to the portal admin page. I'm going to go to Automation Accounts and add a new automation account. Click Plus and we will call this VM Automation. I'm going to add it to an existing resource group. I'm not really aware of why this matters, but I'll choose the same resource group as I do for all my others. I'll leave the other default settings and choose Create.

It may take just a few minutes for the automation account to finish the creation process. For me, I had to click refresh on the listing of automation accounts. Once it appears and the new automation account is open, the first thing we're going to want to do is go to Assets, then click Variables, and add a new variable called Azure Subscription ID. For that, we of course are going to need our Azure subscription ID. Just go back to the portal, go to My Subscriptions, grab the Azure subscription ID, and paste that in.

Next, we're going to need to create a credential which will be used to run the runbook, so I'm going to click Credentials. I'm going to click Add Credential and we're going to name this one Azure Credential. I'm going to add in the VM Maintenance account that I created earlier. Now we've created a variable and a credential, and our next step is going to be to create the runbook. I'm going to choose Add Runbook. No, I'm not going to choose that runbook. I'm going to choose Browse Gallery and then choose Stop Azure Classic. Let's just see what that gives us. Stop Azure Classic VMs.

You'll notice that there are two of them. There's one that says PowerShell Workflow Runbook and another that says Graphical PowerShell Workflow. The Graphical PowerShell Workflow is the one that we need. We'll choose that and then select Import. I can accept the default name and choose OK. We'll import it from the gallery. My next step is going to be to choose Edit. There is a button for the task pane, and it prompts me for parameters. The service name will default, I guess, to the current service. The Azure subscription ID and the name of the credential both have defaults, which we configured a moment ago. You can see Azure Subscription ID is the variable that we just configured, and Azure Credential is the credential that we just created.

When we're ready, we can just click Start to begin the test. I'm going to scroll over a little bit here to see the results, or the output. It takes a while for these to run, so I'm just going to pause the video while it runs. When the workflow is completed, as you can see here, it gives you a little bit of output. In this case, it tells me that my Virtual Machines were already stopped, so no action was taken. Had any of them been running, it would list out those that it had stopped.
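
For anyone who prefers plain script to the graphical designer, the gist of what this runbook does is roughly the following. This is a hedged sketch using the classic Azure Service Management cmdlets and the asset names created above, not the gallery runbook itself:

# Authenticate with the credential asset (the VM Maintenance user account)
$cred = Get-AutomationPSCredential -Name 'Azure Credential'
Add-AzureAccount -Credential $cred

# Target the subscription stored in the variable asset
Select-AzureSubscription -SubscriptionId (Get-AutomationVariable -Name 'Azure Subscription ID')

# Stop (deallocate) any classic VMs that are still running
Get-AzureVM | Where-Object { $_.Status -ne 'StoppedDeallocated' } | ForEach-Object {
    Stop-AzureVM -ServiceName $_.ServiceName -Name $_.Name -Force
}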

Let me close out of the task pane. My next step is going to be to publish this runbook, so it'll prompt me: do I want to publish it? Of course I say yes. Now we have a runbook created. Our accounts are created and we have tested our runbook, so our next step is going to be to schedule it. I'm going to go back to VM Automation and select the runbook. Once the runbook is open, I'll click Schedule from the top and it will say "Link a schedule to your runbook," so I'll choose that. I'm going to create a new schedule.

In this case, I'm just going to choose a one-time schedule. Of course, you can choose once or recurring. I'll just choose once, and I'll have it fire off here in the next 20 minutes. We'll choose 12:20 PM and I'm going to say Create. Now, one trick: after simply creating the schedule, I thought I was finished, but you're not done yet. All you have done at this point is create a schedule; you still need to link this runbook to it.

Now I've selected it and chosen OK. You should see the number of schedules increase here under the schedules listing. I'll open it, and now I can see that my runbook is scheduled, and we'll wait a moment for that schedule to run to confirm that it's working. To do that, I'm going to go back to my classic virtual machines and start up one of my smaller virtual machines here.

It'll start up and hopefully it'll be fully started by the time that schedule fires off. Let's go back to Automation Accounts, open that automation account, open the runbook, and open the schedule, and we'll see it's set to fire off at 12:20, which is just about five minutes from now. They won't allow you to fire off a schedule any sooner than five minutes out, so we'll wait just a moment for it to fire off.

Now I can see that my schedule is listed as expired which tells me that it’s already run. So if I go back to my classic virtual machines, I can confirm now that my virtual machine has been stopped. That’s the quick run-through for creating a runbook which you can use to stop your virtual machines.
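
If you'd rather script those last scheduling steps than click through the portal, the same create-and-link operation can be expressed in a few lines of PowerShell. This is a minimal sketch using the Resource Manager Azure Automation cmdlets; the resource group, account, and runbook names below are assumptions based on this walkthrough, so substitute your own (the classic module uses slightly different cmdlet names).

    # A minimal sketch of the scheduling steps above. The names here
    # (resource group, account, runbook) are assumptions - replace them.
    $rg      = "VMAutomation-RG"
    $account = "VMAutomation"

    # Create a one-time schedule that fires 20 minutes from now
    # (Azure requires the start time to be at least 5 minutes out).
    New-AzureRmAutomationSchedule -ResourceGroupName $rg `
        -AutomationAccountName $account -Name "StopVMsOnce" `
        -StartTime (Get-Date).AddMinutes(20) -OneTime

    # Link the schedule to the runbook imported from the gallery.
    Register-AzureRmAutomationScheduledRunbook -ResourceGroupName $rg `
        -AutomationAccountName $account -RunbookName "Stop-AzureClassicVMs" `
        -ScheduleName "StopVMsOnce"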

read more
Eric BowdenConfigure Scheduled Shutdown for Virtual Machines in Azure
drop-the-mic.png

The Logistics of Running My First Podcast

I wanted to share my workflow for creating and producing my first podcast / videocast. Have any suggestions, or is any of this particularly useful? Leave a comment below the article.

Here are my notes…

Starting Out the Podcast with an Introduction Episode

I gave an introduction to why I started the podcast, the goals for the podcast, and some personal background on the hosts of the podcast.

Create a Schedule for the Podcast

One of the first things that I did was to plan out the first 3-6 months of the podcast. For the AppExchange Podcast, this meant assigning a category (Sales, Marketing, IT, etc.) of apps in the AppExchange to a specific month. Although we didn't strictly stick to this schedule, it helped provide some structure on what to cover first.

Create a Landing Page for the Podcast

I purchased the www.appexchangepodcast.com domain and mapped it over to a specific page on our website that would serve as a landing page for the podcast. I used a WordPress plugin called Content Views to show posts that were tagged with “AppExchange Podcast.” I also created a logo for the podcast that I could reuse across the different places where the podcast is available.

Getting Guests for the Podcast

For the AppExchange Podcast, we contacted the providers of the top 50 apps in each category. With the help of Barbara Green, I created an email blast to the email address listed in the AppExchange. The email explained the podcast, expectations for the episode and how to schedule the interview. To make this easier, I used TimeTrade to schedule the interviews (this syncs with my Outlook calendar).

Approaching the Date of the Interview

Once the date was set, I would update the appointment with GoToMeeting info and a set of typical questions that I ask for interviews.

At the Interview

I set up the calendar appointments to be 30 minutes. For the first ten minutes, we would talk through the interview and any questions the interviewee had. Most importantly, we would walk through the hand-off of screen sharing by making them a presenter and checking to make sure that process worked.

During the Interview

For me, it was just important to listen, relax and have fun. Even if the discussion or demo did not go as planned, we typically stuck with the first take (I only edited a handful of the episodes, beyond cutting the very beginning and end of the audio).

After the Interview

Typically, I’d thank the interviewee for their time and set expectations on when I would be done with publishing the episode. Most of the time this would be by the end of the week – I tried queueing up a handful of podcasts

Post Production – Video

For the AppExchange Podcast, I used GoToMeeting to record the episode. GoToMeeting lets you output your recording to two formats – either MP4 or WMV. I would typically generate both formats.

PRO TIP – GoToMeeting creates a folder called Original in your My Documents folder; after you generate the first format, GoToMeeting will move this folder. Just drag the file back to the My Documents folder and double-click on it, and you'll have the chance to generate the other format.

I would upload the WMV version to Vimeo and give it a Title and Summary (which I would use again when creating the listing in SoundCloud and the post in WordPress).

Post Production – Audio

I used a website called Auphonic to help with the audio post-production. I would upload the MP4 version of the episode to the website. I used it for adding the music at the beginning/end, sound leveling, noise and hum reduction, and automatic deployment to SoundCloud. Once on SoundCloud, I would finish updating the metadata and set the episode to public. I set up iTunes and Stitcher to monitor the SoundCloud RSS feed for new episodes.

Post Production – Transcription

I used a service called Rev.com to get transcriptions of the episodes. The cost is about $1 a minute and the turnaround is typically within 24 hours. After getting the transcription back, I would look for obvious corrections and make them in Word.

Creating a Blog Post

We’re finally ready to make a blog post. I would clone the last episode for the podcast and update with the new embed info for Vimeo and Soundcloud. I would copy/paste the transcription from Word. Finally, I would check update the post with appropriate info to get a green bullet from Yoast SEO. That includes downloading and sizing a nice licensed visual from Shutterstock.

Have any suggestions or is any of this particularly useful? Or, maybe you have a question I didn’t address.  Leave a comment below and I’ll respond.

Good luck with your new podcast!

read more
Danny RyanThe Logistics of Running My First Podcast
helpful-tips-3.jpg

How to Copy an Approval Workflow and Retain Its Custom Task Form

Background on why I needed to copy an approval workflow

Recently, one of our clients had a unique problem with workflows that required me to go into ALL the workflows, verify the email settings and correct them as needed. Since there were around 10 large approval workflows and several other basic notification workflows, this was no trivial task.

We’ve found in our experience that when changes are needed to an approval workflow, it’s best to make a new version of it in order to not disrupt any currently running approval workflows. One of the major pain points in copying an approval workflow is having to rebuild the custom task form associated with it. I REALLY didn’t want to have to do this for 8 to 10 workflows, each with their own custom task form filled with details. So I talked with co-workers, dug around on the internet and tested and played until I came up with a way to retain my custom forms. This post details out those steps (as well as how to copy the workflow itself). Of course, this is just one way to do it. I’m sure there are plenty of other ways that might be better, but this seems to work well for me. I hope it helps someone!

The problem itself that caused all this angst was an interesting one. It surfaced when our client acquired another company and joined the two domains. Suddenly there were two Active Directories being checked when an email was sent. The problem arose because some of the email recipients in the workflows were set to send to a ‘Display Name’ format instead of to an ‘Email address’ format. This meant that if an email was supposed to go to ‘John Doe’ and there was a ‘John Doe’ in both ADs, it was a crap shoot as to which John Doe would receive the email – the one from domain A or the one from domain B. Definitely not the behavior we wanted! So all the workflows had to be checked and updated to be sure all emails were being sent to email addresses. Here are the steps I took to get these corrected.

Part 1 – Copying the approval workflow itself

One note before you start: I'm going to have you opening and closing SharePoint Designer what feels like a million times. Do it even if you don't feel like it. :) SharePoint Designer does some really weird things with caching, etc., and this will help keep everything straight.

  1. Open the site in Microsoft SharePoint Designer.
  2. Click on the Workflows link on the left to display all the workflows, select the workflow you want to copy, right click it and select ‘Copy and Modify …’.
  3. Give it a new name and click OK. The workflow will open to the main workflow settings screen.
  4. Click Save to save any changes that may have been made in the background and then close this window (do NOT publish the workflow yet).
  5. Now go to All Files / Workflows and find the folder for the new workflow. It should contain three files.
    (The three files you should find.)
  6. Click on the ‘xxx.xoml.wfconfig.xml’ file to open it and select ‘Edit file’.
  7. Since this is an approval workflow with an approval task, there will be a couple of references in this file to the task – we need to update these. When a workflow is copied, SharePoint renames any tasks it finds by tacking ‘ Copy’ onto the end of the existing name (only one in this case). So search this file for the text ‘Task Copy’ to find all the references and rename it to something new (but be sure to retain the ‘_Task’ in the name); there's a scripted sketch of this find-and-replace after this list. Make sure to note what you named it for the next step.
  8. Now save the XML file and exit SharePoint Designer.
  9. Reopen SharePoint Designer and open your new workflow from the Workflows link, then click the ‘Edit’ button to open the workflow for editing.
  10. Find the ‘Start task process … ‘ statement and click on the ‘… Task Copy’ link. When it opens, click in the Name field and rename it to the same name you noted above. In this example, I renamed ‘BI_PreProduction_Approval_v3_Task Copy’ to ‘BI_PreProduction_Approval_v4_Task’.

  11. If you have other changes that need to be made to the workflow (that do not involve the task form), you can make them now. When done, save the changes and Publish the workflow. You now have a clean copy of the original workflow. On the workflow settings page for the copied workflow, you should see two generic forms that were created for you.
  12. The last step is to delete this new ‘Task’ form that was just created for you (in preparation for copying the original one). You do this by highlighting it and either pressing the Delete key or clicking the Delete button in the toolbar. You can ignore the form with a type of Association/Initiation (for this exercise anyway).
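
For those who prefer to script the step 7 rename rather than edit the XML by hand, here's a hedged sketch of the same find-and-replace in PowerShell, run against a local copy of the wfconfig file. The path and task names are examples from this post, not fixed values.

    # Hypothetical example of the step 7 rename; adjust the path and
    # the task names to match your own workflow.
    $path = "C:\Temp\MyWorkflow Copy.xoml.wfconfig.xml"
    $old  = "BI_PreProduction_Approval_v3_Task Copy"
    $new  = "BI_PreProduction_Approval_v4_Task"

    # Read the whole file, replace every reference to the old task
    # name, and write it back out.
    (Get-Content $path -Raw) -replace [regex]::Escape($old), $new |
        Set-Content $path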

Part 2 – How to copy an existing custom task form to a new workflow

This is usually the painful part of copying a workflow, as the custom task form is NOT copied with the rest of the workflow when you do a ‘Copy and Modify.’ This means you would normally have to recreate it manually, which can be a major pain. Luckily, there is a way to reuse the old one in the new workflow. Here are the steps:

  1. In SharePoint Designer, open the old workflow to the main settings screen.
  2. Open the custom form you want to copy (will be of type ‘Task’) by clicking on its name. This will open the form in InfoPath.
  3. After it opens in InfoPath, click on File then Publish. Choose the ‘Export Source Files’ option and save them somewhere on your local drive where you can find them. A temp directory is fine.
  4. Close InfoPath, choosing ‘Don’t Publish’ when prompted.
  5. Close SharePoint Designer.
  6. Using Windows Explorer, navigate to the folder where you saved the InfoPath files.
  7. Right click the manifest.xsf file and open it using your favorite text editor.
  8. Within this file, search for all references to the original xsn file. I find it simplest to search for ‘.xsn’. There are a couple of things to change here (see the scripted sketch after this list):
    • The path to the .xsn file – will need to match the path to your new workflow. You can verify this path within SharePoint Designer, under All Files.
    • The name of the .xsn file – should be the same name as the workflow, with a ‘_Task’ following it. (You should follow the pattern used already for naming.)
  9. Save the file and close it.
  10. In Windows Explorer, right click the saved manifest.xsf file and select Design. This will open the file in InfoPath.
  11. Once open, we simply need to publish the form. If the name and path are set correctly, the form will be added to the new workflow. To publish, you can either click the ‘x’ in the top right corner and select ‘Save and Publish’ when prompted, or select File then Publish. If you choose the latter, you can verify the location and name of the form to be published. If these look correct, click the ‘Workflow’ button to publish.
  12. Close InfoPath if it is still open.
  13. Re-open SharePoint Designer and connect to your site.
  14. Find the new workflow and open it to the main settings page. You should see your new custom form listed, along with the already existing Association form. To verify, click on the form to open it in InfoPath and your custom form will be displayed. You can now close InfoPath (choosing to publish or not).
  15. Back in SharePoint Designer, publish your workflow.
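
As with Part 1, the manifest edits in step 8 can be scripted if you find that easier than a text editor. This is a hedged sketch only; the file location, workflow names, and path fragment are illustrative and need to match your own site (which you can verify in SharePoint Designer under All Files).

    # Hypothetical example of the step 8 manifest.xsf edits.
    $manifest = "C:\Temp\ExportedForm\manifest.xsf"
    $oldXsn   = "BI_PreProduction_Approval_v3_Task.xsn"
    $newXsn   = "BI_PreProduction_Approval_v4_Task.xsn"
    $oldPath  = "Workflows/BI_PreProduction_Approval_v3"
    $newPath  = "Workflows/BI_PreProduction_Approval_v4"

    # Swap the .xsn file name and the workflow folder path for the new ones.
    $text = Get-Content $manifest -Raw
    $text = $text -replace [regex]::Escape($oldXsn), $newXsn
    $text = $text -replace [regex]::Escape($oldPath), $newPath
    Set-Content -Path $manifest -Value $text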

And that’s it! You now have a new copy of an old approval workflow and have retained the custom form you worked so hard to build in the original workflow. I hope this saves someone some of the headaches I’ve had in the past while copying workflows.

Enjoy!

read more
Caroline SosebeeHow to Copy an Approval Workflow and Retain Its Custom Task Form
scrum-diagram.jpg

Exceeding Customer Expectations with SCRUM

Danny Ryan:                       Hi. This is Danny Ryan and welcome to the ThreeWill Podcast. I’ve got Bruce Harple with me today. Thank you again, Bruce, for joining me for the podcast.

Bruce Harple:                     Absolutely. Glad to be here, Danny.

Danny Ryan:                       Great. We wanted to take some time here. I was joking as we were prepping for this that we could probably talk on this subject for much longer than twenty minutes. I am going to try to hold us to twenty minutes, but it's a subject that's near and dear to a lot of folks' hearts at ThreeWill – managing customer expectations, and in particular, Scrum. Tell me a little bit about what you want to talk about today.

Bruce Harple:                     Yes. What I want to talk about, Danny, is managing customer expectations. We talk about the Iron Triangle – three things that we want to set expectations around and then manage throughout the life of a project: scope, schedule and budget. Of course those things all impact one another, so if one either increases or decreases, it affects the other parts of the triangle. I'll talk about how our Agile Scrum process helps us manage those three parts of the Iron Triangle, and how we manage that on a regular basis with our customers.

Danny Ryan:                       Awesome. To describe this, you probably have to start off with a couple of definitions of some important terms. In particular, for folks who are listening in, we use an Agile process called Scrum. What's interesting about this conversation is that Scrum is typically used in large development shops, while we are using it to develop products and applications for customers, so there are some modifications that we need to make to set those expectations up properly. So if you could, at a high level, talk me through User Stories, Story Points, all that good stuff.

Bruce Harple:                     Yes. One of the key artifacts in Agile Scrum that drives and really defines the scope of a project is something called the Product Backlog. Within a Product Backlog, there is a set of what we call User Stories. A User Story, in one sentence, describes a User's interaction with the application and the benefit that results from that interaction. A User Story is just a way for us to restate a requirement in those three parts: there's a User, there's an Action, and there's a Benefit. That forms the baseline of the scope for a project.

Then one of the things that we do, another term that I'll use, is called Story Points. Story Points are the way that we size the effort to implement a specific User Story. A Story Point is just a number. We follow the Fibonacci sequence, so that could be anywhere from a 0.5 up to a 20 or higher. The higher the number, the more difficult it is to implement that specific User Story. We assign a Story Point, a sizing of the implementation effort, to every single User Story. That really becomes the foundation for the scope: the scope is the User Stories. We then take those Story Points and convert them into hours, and once we have hours, we can convert those into dollars, which is your budget.

User Stories make up the Scope, and then the Story Points converted into hours and dollars make up your Budget. Then we take those User Stories and group them logically into a sequence that makes sense for implementing them, also grouping them by size. We group all those into Sprints – typically we work in two-week Sprints. A Sprint is just a duration in which you are going to deliver a certain number of User Stories to the customer. Once we take all those User Stories and size them with Story Points, we group those into Sprints, and those Sprints define the schedule. So that's the baseline: the Scope is the User Stories, the Budget is converted from the Story Points, and the Schedule is the number of Sprints and the Sprint duration that we plan at the start of a project.
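
In code terms, the sizing arithmetic Bruce walks through here might look like the toy sketch below. The hours-per-point and hourly rate are made-up illustration values, not ThreeWill's actual figures, and the stories are hypothetical.

    # Toy sketch of Story Points -> hours -> dollars. The conversion
    # factors below are invented for illustration only.
    $hoursPerPoint = 6      # assumed hours of effort per Story Point
    $hourlyRate    = 150    # assumed blended rate in dollars

    $backlog = @(
        [pscustomobject]@{ Story = "User uploads a document";  Points = 5 }
        [pscustomobject]@{ Story = "Approver gets an email";   Points = 3 }
        [pscustomobject]@{ Story = "Admin configures mapping"; Points = 8 }
    )

    # Scope is the sum of Story Points; hours and dollars follow from it.
    $points = ($backlog | Measure-Object -Property Points -Sum).Sum
    $hours  = $points * $hoursPerPoint
    "Scope: $points points; Budget: $hours hours (`$$($hours * $hourlyRate))"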

Danny Ryan:                       So these User Stories and the Story Points and this calculation of time and money – that all goes into the statement of work for the customer? Or how does that work?

Bruce Harple:                     Yes, that's right, Danny. We actually go through this process during the sales cycle. As we talk to a prospective customer and try to understand their business problem, we drill down into some lower level requirements and actually build out the Product Backlog and all the User Stories. That's been very successful for us, because customers love it when we play back these User Stories – their reaction typically is that these guys really get my business problem, they get what I'm trying to accomplish. We review that backlog with them in the sales cycle to make sure that we have correctly restated their requirements in these User Stories. We also go through and prioritize all those User Stories in the sales cycle.

They tell us which items are must-haves versus should-haves, could-haves, or would-like-to-haves that could wait until a later phase. We obviously size everything. Then those User Stories are grouped into something we call Feature Groups, which is just a logical grouping of Stories. With everything sized and converted to hours and dollars, customers can go through and exclude Stories or exclude Features – they can really take that whole Product Backlog and get it sized to their budget. We end up working with them to help them determine what's most important and where they are going to get the most value for their investment, and that's where we establish that baseline: the set of User Stories that are must-haves, that we have to have in this release of the solution. Then what's the size of that in Story Points, which gets converted to, as I said, hours, dollars, and a schedule.

Danny Ryan:                       Does the customer pay for this estimate?

Bruce Harple:                     No. That's something that we do during the sales cycle. Now, I will say when we start a project, in some cases there may be some requirements that we haven't had the time to really vet out in detail. In many cases, if there's any kind of technical risk or uncertainty, or any kind of requirement risk or uncertainty, we'll put User Stories in our backlog called Spikes. A Spike is just a story that says we didn't have time to flesh out this technical concept, or we didn't have time to flesh out this set of business requirements, so we're going to allocate a little bit of time in the first Sprint of the project, which we call Sprint Zero, to further vet those things out.

Then at that point, once we've got more detailed requirements, or maybe we've done a proof of concept to address a technical risk or present a concept to the customer, we can sit back down with the customer after those Spikes are completed and say this particular Spike produced these results, which impact the Product Backlog in this way. It might not have any impact on the backlog in the way of additional User Stories or anything like that, but it gives us another checkpoint after that Sprint Zero, the first Sprint, to re-calibrate Budget, Schedule and Scope if we need to.

Danny Ryan:                       Just to point out a couple of things that I think are really valuable. You mentioned that this is something we do as part of the sales process. For folks who haven't gone through the sales process with us, Bruce puts together a very detailed spreadsheet. One of the benefits of having that is you basically get a breakdown of the work effort and what's involved. The other thing I like about it for customers is that not only can they prioritize things, but it's almost like a shopping basket approach – maybe you don't have the budget to go after one thing, but you can play through some scenarios. I know customers just love having that control over what we are going after, and it helps the customer size out the project appropriately. I think it's just a great thing to hand over to people. It's fun going through the estimates that you guys come up with, walking through them with the customer, and getting their feedback.

Bruce Harple:                     Yes, Danny, just to add to that. I think what customers really appreciate is that it lets them see what it takes to implement a certain Feature Group or set of User Stories. Once they see a price attached to that, it really resonates with them. At that point, they're making an investment decision. They can decide: is my business going to get enough benefit from this Feature Group or set of Stories to make it worth the investment? We're just really trying to give them the data points that will help them make what we hope are the right decisions for their business.

Danny Ryan:                       So we start off with this statement of work that has a certain number of Story Points against it. Of course, the project starts, and in week number two something comes up. The business climate changes, something happens. How do we adapt to these changes that we know will come up during projects?

Bruce Harple:                     Yes, exactly. I mean, anybody that's been involved in custom software development knows that you identify new requirements as you go, and you also identify areas that are more complex than you thought when it comes to implementing a specific requirement or User Story. One of the things we try to do at the front of a project is go through that Product Backlog with our customers and explain how we phrased it and how we sized everything using these Story Points. We really try to educate our customers into thinking of the Scope of the project as the number of Story Points associated with this Product Backlog of User Stories.

That’s kind of the baseline that we kind of always go back to, to say we committed at the start of this that we would develop one hundred Story Points worth of features, for these hours and for this budget. We kind of try to use that as the baseline. In Agile, we talked about every two weeks we develop a set of features that we deliver to the customer. We go through a process called a Sprint Review, which is where you kind of review what you’ve accomplished with a customer. You cerebrate that. You take their feedback. Right. So if any adjustments need to be made you’re getting that feedback every two weeks.

And then, as a part of the Sprint Review, we also go into what we call Sprint Planning. I talked about how, at the beginning of a project, we set a Sprint schedule for all the Sprints and break the backlog that we started with up into those Sprints. When I get done with Sprint One, for example, I'm going to look at my Sprint Two plans as originally laid out. And when I do that, I'm also going to bring up that, during Sprint One, we uncovered these five new User Stories.

We estimate those five new User Stories at, let's say, ten Story Points to implement. So one of the things that we do as part of Sprint Planning, when we're looking at that next Sprint, is sit down with the customer and say: we've got five new User Stories, five new sets of requirements. Are these more important than what we already had planned for this next Sprint? If the answer is yes, some of these are more important and need to be included in the next Sprint, then we work with them on that.

If we’re going to take these new User Stories and these requirements and include those at Scope, then something’s going to be pushed out of Scope, right. Because overall, we’re trying to hold the Scope to the hundred Story Points that we started with as our baseline. They understand that, they understand that they’ve got to … you know, in order to include something new in Scope, they’ve got to kind of push something out to a lower priority. Maybe if we operate faster than we thought we would, and maybe if we get more work done than we thought we would originally planned, we could still do those original stories. But in some cases you can’t, and that’s when things actually get moved out of Scope.

The other choice the customer has, Danny, in that scenario where we've uncovered new requirements and we've shared with them what those are and what the impact is in the way of size, which also impacts Schedule and Budget, is to say: these new requirements are important, and there's nothing I can move out of Scope. I still need everything you originally stated, plus these new requirements. In that case, that would lead us to create a change order with that customer. We would expand the Scope, which then would impact Budget and Schedule. The beauty of it is, we're doing that every two weeks.

Danny Ryan:                       Yes.

Bruce Harple:                     We're constantly in that inspect-and-adapt principle behind Agile. You are constantly looking at where am I today, what did I get accomplished in this last Sprint, what new did I learn that's impacting my Scope, what new User Stories have been identified, and what do we want to do about that. We don't want to ignore it; we want to recognize that we've uncovered new requirements and they're valid. They're important to the customer. We then decide together how to incorporate those new requirements into the project.

Danny Ryan:                       Going back to the statements of work – we typically write the statement of work around a time and materials budget, not to exceed a certain amount, right? So we write it up that way, and then if we see that there are things they want to pull in that would exceed that amount, that's where we would talk through a change order for the additional budget. But we let them make the decision as to whether those new features need to be included or not.

Bruce Harple:                     That's right. In some cases, we are actually ahead of our original plan – in other words, our velocity, the rate at which we can implement a Story Point, is faster than we originally planned. Maybe we're better and more efficient, or maybe we found ways to implement some Stories that were simpler than we assumed when we did our original estimate. It could be that we're ahead of plan and ahead of budget, and we can take in additional User Stories without impacting the original budget or timeline. That happens in many cases.

Danny Ryan:                       You do that way too much, Bruce. Really, really. I am amazed – just internally, I know we look at it from a planning perspective: if the SOW is at, let's say, one hundred K, we're planning to hit below that, more like eighty K, because we're typically delivering the solution under budget. Not to over-set expectations, but it just seems like we have the ability to manage to that budget and give people something they really want under that budget. That's really, really important to customers.

Bruce Harple:                     Yes, it is, and that's one of the things that they like about Agile: they are seeing features delivered to them and put in their environment, right, so they can actually do their UAT testing early on as they go. You're constantly looking at the Budget and the Scope every two weeks. You've got such a great view into where you're at and whether you're ahead of what you thought or behind. You get to make those adjustments every two weeks, which is really important to customers. They really love that.

Danny Ryan:                       That's awesome. I just want to wrap us up here. This has been a great conversation – I think we could talk for hours. One of the things that we've come up with through the years is the ThreeWill Promise, which we've talked about internally quite a bit: the three C's – Control, Choice and Commitment. I think what we're talking about today really ties into those. With Control, we say that we provide the structure for clients to control priority of features and budget throughout the lifetime of the project.

How important is that? How important is it for the client to maintain Control? Then Choice – because we're delivering software every two weeks, we're earning business every two weeks as well, with the great team that you've put together, Bruce, out there delivering every single day. The last one is Commitment – it's where we take on clients' challenges like they are our own challenges. I love how committed we are to clients. You guys just do a wonderful job. It's so much fun talking to clients after we're done with projects and hearing what's been done. So I appreciate you, Bruce, taking the time. Any thoughts to wrap this up? Anything you want to add?

Bruce Harple:                     No, just that we really enjoy working with our clients. I think they enjoy our process and the way we attack these problems, and they appreciate the Agile process that we bring to bear. When we do surveys with our clients to understand their level of satisfaction with ThreeWill, it's one of the things they typically call out as one of our strengths, and something they really appreciate.

Danny Ryan:                       Yes. I hear a lot from people that the original reason they brought us in was our technical experience, and the reason we stay around and why they want more projects from us is our process – what they see delivered on projects. Which is wonderful, wonderful to hear.

Bruce Harple:                     Absolutely.

Danny Ryan:                       If you've gotten to the end of this podcast, thank you for taking the time to do that. Hopefully some of this has helped you out a little bit, whether you are a prospective client or a current client. Really, the whole estimation process – getting a handle on how much time it's going to take and how much it's going to cost – is part of what we do; it's a part of our sales process. I highly encourage you to come to our website and reach out to us. We'll put together the details so that you can really put together a sound budget, something that's workable. Please feel free to reach out to us. You'll interact with myself and Bruce, and folks from his team. It's a great process that we've put together here. Bruce, thank you for taking the time to do this.

Bruce Harple:                     Thank you, Danny. I enjoyed it.

Danny Ryan:                       You bet ‘cha. Have a great day. Thanks everybody for listening. Bye bye.

read more
Bruce HarpleExceeding Customer Expectations with SCRUM
rejection-office-app-store.png

Why Did Microsoft Reject My App? Lessons Learned Publishing to the Office App Store

Over the past couple weeks, I’ve been working with Danny Ryan to get an application published into the Office App Store (read more about it in Danny’s recent blog post). I’d like to share with you a few tips and some of the reasons why our app was rejected on the first pass.

Tips for Publishing to the Office App Store

  1. Although it is not required for all apps, I recommend that you create an environment which the engineers at Microsoft can use to test/evaluate your application. Because our application, Trove, is a Salesforce to Office 365 connector and requires a Salesforce account, Microsoft requested an environment in which to test. You can speed things up by creating the environment and providing login credentials as part of the initial submission, just in case Microsoft needs it for testing.
  2. Based on the IP address, it appears that the testers are working from Ireland and are thus 5 hours ahead of me in Atlanta. When they had an issue with one of the login accounts I provided, the email came in at 4:30 am Atlanta time. I was fortunate to catch it at 6:30 am and resolve the issue, and they continued testing that workday. If speeding through the approval is important to you, I recommend that you try to watch your email during working hours in Ireland.
  3. Our experience after submitting the application twice is that testing begins in less than 24 hours. So, if you're watching emails during odd hours (see the prior tip), you hopefully won't need to do so for more than a couple of days.

After our first submission, the folks at Microsoft provided a Validation Test Results report. Since our app failed, the report included a helpful listing titled Breakdown of Changes Required. I found the list of issues and requested repairs to be reasonable and clear. You can find the full set of validation policies here.

  • Your app must work according to your description and must not be in an unfinished state: Seems like a reasonable request. Our app threw an exception due to a configuration error and the cause/remedy was not apparent to the engineers testing. We decided to resolve this by providing an environment in which they could test the app.
  • If your app requires a login/password, you must give Microsoft a pre-existing working login for test purposes: This is where the testers came back and requested an environment in which to test. To their credit, the test engineers did create a “trial” Salesforce environment in order to test the app, but they eventually ran into the error above. We resolved this by creating an environment in which they could test.
  • If your app depends on additional services or accounts, this dependency must be clearly called out in your app description: Since our application requires Salesforce, we added this requirement to the description of our application.
  • You must specify language support for your app in your app’s manifest: The testers provided a link which explains the property that must be included in the application manifest to specify the languages that our app supports (http://aka.ms/supportedlocalesblog). Easy.
  • The version number you specify for your app on the Seller Dashboard submission form must exactly match the app version number in the app manifest: We initially submitted the Seller Dashboard submission as version 1.1.1 and the app manifest as 1.1.1.0. We remedied this by updating the Seller Dashboard submission version to read 1.1.1.0 (a quick way to check this is sketched after this list).
  • Apps can fail validation for issues related to Apps for Office and SharePoint UX guidelines which impede the customer experience within Office and SharePoint: The link for our app, from the Site Contents page in SharePoint, led to a page from which users could not easily return back to the SharePoint site from which they started. Fairly basic, and we resolved this issue by ensuring that the link for our app leads to a page from which users can easily return.
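
If you want to avoid our version mismatch, it's easy to read the version straight out of the manifest before filling in the Seller Dashboard. A small hedged sketch, assuming a SharePoint Add-in package whose AppManifest.xml is available locally (the path is an example):

    # Read the add-in version from AppManifest.xml so the Seller
    # Dashboard entry can be copied from it exactly. Path is illustrative.
    [xml]$manifest = Get-Content "C:\Builds\Trove\AppManifest.xml"
    $manifest.App.Version    # e.g. "1.1.1.0" - must match the dashboard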

We applied the repairs, resubmitted the application, and the Microsoft engineers started testing again in less than 24 hours. Hopefully our next blog post will be to announce that Trove is available in the Office store!

Update – here is a link to that blog post – http://threewill.com/the-eagle-has-landed-trove-now-in-the-office-app-store/

How is your experience going with publishing an app in the Office App Store? Need help?

Comment below or contact us if we can assist.

read more
Eric BowdenWhy Did Microsoft Reject My App? Lessons Learned Publishing to the Office App Store
office-app-store.png

Pointers on How to Get an App into the Office App Store

I’m working with Eric Bowden on getting an integration (called Trove) between Salesforce and Office 365 in place. Basically, we are helping Salesforce users store their documents for Opportunities and Account in a document library in Office 365. Sounds pretty basic (and pretty high value). Yes, I want this for my own – but so would everyone else who uses the world’s leading CRM with the world’s leading productivity platform.

What’s different about our Office App is it’s in place primarily for security reasons and to make the maintenance and set up much more straightforward. This may (and probably will) change in the future when we find out from initial adopters about features they would want on the Office 365 side of things.

Ok, enough about that. Here are some links to get you started:

  1. Publish Office and SharePoint Add-ins and Office 365 web apps – https://msdn.microsoft.com/en-us/library/office/jj220037.aspx?f=255&MSPPError=-2147217396
  2. Upload Office and SharePoint Add-ins and Office 365 web apps to the Office Store – https://msdn.microsoft.com/en-us/library/office/dn708487.aspx

Getting Started

Here’s a link to the Seller Dashboard – http://go.microsoft.com/fwlink/?LinkId=248605 Sign in with your Microsoft account and bookmark the site. Trust me on this one.

Creating a Seller Account and Payout Information

Even though Trove is a free app, we still needed to fill out info on us as a seller and where to send the check (when we do create a paid-for app). You’ll first create a marketing profile that includes your company name, your logo, and a brief description of your company. As part of this process, you’ll also need to provide a reference of someone inside your company. Give the person a heads up (I used Tommy) so they can keep an eye out for the email. It comes pretty quickly.

You’ll need to talk to the folks in Finance to get the right info on your bank account or PayPal account for payout. You’ll also need your Tax ID, if you don’t know that already. If you don’t have all this information handy, you can save the information and submit it later on when complete.

Creating Client IDs and Secrets

Ok, this is where I really leaned on Eric Bowden. We started by creating a Client ID – you need to do this if you are a provider hosted app or a connector (we are the latter). The important field for us was the length that the client secret is valid for – we wanted this to be as long as possible so we selected three years. Another tip is that the App Redirect URL must be https:// and refer to a specific page in your app built for OAuth authentication. After clicking GENERATE CLIENT ID, I copied all the info and stored in a secure place and shared with Eric. This will be the only time you will see the client secret – so make sure you put it someplace safe that you can come back to.

Submitting the SharePoint Add-in

There’s a checklist here. Here are some pointers. You want to make sure the version exactly matches the version number in the add-in manifest file. We tested this out and this really does matter. For our SharePoint Add-In, we needed a 96 by 96 version of the Trove logo. My Photoshop skills include resizing images, so I had this covered. I had Eric send me the app file (he has the easy parts) and I used the Client ID that I just created. There is a section for Testing notes – here is where you want to include any special instructions for testing the app. Since our app has two parts – one part in the AppExchange and the other in the Office Store—we needed to provide info on logging into a Test environment for Salesforce along with a Test environment for Office. Finish up with providing a short and long description with some screenshots. I don’t have more to add about the other fields.

The Fun Part – Getting Approval

The app first goes through a scan and then the testing begins. We were pleasantly surprised with how quickly the process happened and the level of detail in the testing feedback. I'm hoping that Eric will write a post on what this looks like – hopefully a blog post we'll put out later today or this week.

Posting Updates to Apps

One last tip – you want to make sure you specify the correct Client ID that you created for this version of the app. I had Eric look over my shoulder when doing this.

Do You Have Any Pointers to Add?

Leave your pointers in the comments below. Of course, if you want some help getting your app into either the Office App Store or the AppExchange, please reach out to us. Or maybe you have an app like ours and want to build something that goes into both stores – we'd love to hear from you.

read more
Danny RyanPointers on How to Get an App into the Office App Store
decision.jpg

How to Publish to the Salesforce AppExchange

As a developer, it has been a great experience to methodically work through the process for building and publishing an application on the Salesforce AppExchange. Earlier this week, we were thrilled to announce the fruit of our labor, the latest version for Trove! Naturally, there are numerous details to consider, and the ISVforce guide should be your primary resource; however, we’d like to share with you the high level steps required.

Overview

In summary, the process for publishing an application on the AppExchange is largely about creating Salesforce orgs and requesting that features are enabled. There is also a security review required. And, of course, you’ll need to develop and test your app.

Step by Step

  1. First, identify your Salesforce business org. This is typically a paid-for org which is used for tracking leads, accounts, opportunities, and so on.
  2. Submit a request for Salesforce to provision the Environment Hub application in your business org. The Environment Hub application is required in order to associate related orgs, and it is used for creating orgs for testing your app.
  3. Login to the AppExchange, select the Publishing Console, and create a listing for your application, even if your application hasn’t been developed yet. The listing will not be public, but you must create the listing and decide if it will be paid/free before you can proceed to the next step.
  4. Submit a request for Salesforce to provision the License Management Application in your business org. The License Management Application is used to track organizations which subscribe to your app, licenses, and so on.
  5. Use the Environment Hub to create a Developer Edition org which will serve as the packaging org. Or, if you have already created a packaging org, you can use the Environment Hub to associate the pre-existing packaging org with the Environment Hub.
  6. Create separate developer edition orgs to build and test your product. Use the Eclipse-based Force.com IDE to write/debug code and store the source in your favorite source code repository. There are other tools that can be used for app dev, such as the Developer Console and the Salesforce setup menu. The following is an awesome overview of the app dev lifecycle for Salesforce projects: Team Development and Release Management for ISVs.
  7. Once app dev is complete, create a managed package in your packaging org and upload your source. Note: Many attributes of managed packages are difficult to change once they’ve been set and after a managed package has been released.
  8. Next, you’ll upload a beta version of your application to the AppExchange for testing. Click Upload from the package in the packaging org and be sure to select “Beta.” Note: Do not upload a Release package until the application has completed all testing. Some aspects of the managed package cannot be modified after it has been uploaded as a release.
  9. Use the link from the beta upload (prior step) to install and test your application. Since beta versions cannot be installed in production orgs, you’ll need to use the Environment Hub to create test orgs to test different editions of Salesforce (e.g. Group Edition, Professional Edition, Enterprise Edition, etc…).
  10. Upload a release version of your application once testing has completed.
  11. Login to the AppExchange, access the publishing console, and confirm that your application appears in the section titled Your Uploaded Packages. It may take 30 minutes or longer for your application to appear in the publishing console.
  12. Next, it’s time to begin the security review. Your application cannot be listed publicly in the AppExchange until it has passed the security review. Click start review and work your way through the wizard. A few hints: You’ll need to have completed the automated Force.com security code scanner and repair any issues found. You’ll also need to create an environment in which Salesforce engineers can test your app.
  13. Submit a request for Salesforce to enable the patch feature in your packaging org. This will allow you to create patch upgrades which can be published to the AppExchange and can be pushed to existing subscribers. You may not need to create a patch just yet, but you’ll want to be ready if/when a patch is needed.
  14. Did your app pass the security review? Congratulations, you are ready to set your listing on the AppExchange to public!
  15. Now that your app is live on the AppExchange, you can use the License Management Application to monitor the installations. In the Subscribers tab of the License Management Application, you’ll see a line item for each org in which your application has been installed, including test orgs.
  16. Submit a request to have the Usage Metrics application installed. The Usage Metrics application will let you know how your application is being used, such as which Visualforce pages and custom objects are being accessed by your subscribers.

Easy, right? We’ve learned a lot through study and experience. Ask below or contact us if you have questions about publishing your app in the Salesforce AppExchange.

read more
Eric BowdenHow to Publish to the Salesforce AppExchange
promise.jpg

The ThreeWill Promises – Control, Choice and Commitment

What is The ThreeWill Promise?

Our promise to our business sponsors:

  1. Control – We provide the structure for our clients to control priority of features and budget throughout the lifetime of the project​.
  2. Choice – Because we deliver working software every two weeks, we earn our client’s business every two weeks.​
  3. Commitment – We take on your challenges like they are our own; you will not find another business partner more committed to your success.

Internal Discussions

We’ve been discussing our brand promise often recently on our internal Yammer network.  It’s a relatively new concept for us. I wanted to summarize what we really do for our business sponsors.  Yes, we are technical and process experts in our given domain. but how does this ultimately translate over to benefits to our customers?

Control

The first promise is control.  We want you to feel and be in control throughout the lifetime of the project.  To do this, we need to provide structure.  The way we do this is you own the priority of the features (what goes into the next Sprint) and you decide where the budget is spent (more about that in a minute).  The feedback we get from many of our projects is that what is liked best is the process we use to structure the project.  We typically get hired because of our technical acumen, but the reason we stick around is our execution of the process.  In fact, sponsors like the experience so much that they want us to teach other teams how to “do SCRUM.”

Choice

There are a couple of options on how we price projects.  A majority are T&M with a budget cap.  We have adopted a fundamental tenet of SCRUM, which is to deliver working software every two weeks.  That means you can stop when you feel the product is ready.  And yes, this does happen.  Some opt not to use the remaining budget; others opt to use the budget on other projects that could use some attention.  This means we earn your business every two weeks – because you are in a good state at the end of each Sprint.  Note, especially for larger projects, there is a Transition Sprint – usually for 1 or 2 weeks to do appropriate training and transition of deliverables.

Commitment

Our last promise is commitment.  Simply, when you hire us your challenges become ours.  Take a minute to read some of the testimonials here and here.  The only way we build this reputation is putting your needs in front of ours.

Making the Promises Real

We don’t want these promises to be just words on a website or a slide.  As a part of ThreeWill, we need to hold each other accountable and call each other out if we aren’t true to a promise.  As a customer, we want you to challenge us if we fall short of any of these promises.  Iron sharpens iron.

Leave a Comment

If you’re a customer, we’d love to hear any experiences where you have seen the promises in play.  Feel free to leave a comment below.

If you’re another ThreeWiller, I’d love to hear your words on what these promises mean to you.  This would be great to supplement the page on what it’s like to be at ThreeWill.

If you’re a prospective customer, we’d love the opportunity to show you these promises in action.  Reach out to us here.

read more
Danny RyanThe ThreeWill Promises – Control, Choice and Commitment
video-front.jpg

Managing Video Content with SharePoint 2013

Video Content with SharePoint 2013

You may be familiar with the Asset Library that gets included with team sites. It’s a convenient place for users to store any visual assets that they may want to use on the pages they author in SharePoint 2013. However, did you know that there is a corresponding Asset Library template that you can use to create your own libraries? And, furthermore, did you know that this template produces a library that is uniquely qualified to hold video content that you might want to upload and manage on your site? Let’s see how it works.

We’ll get started by navigating to the Site Contents page, and from there we’ll create a new library based on the Site Asset template. In this case, the library is named “Videos”.

If we navigate to the Library Settings page, note that this newly created library has a series of Content Types associated with it; this is the special sauce that makes this library a great place for holding videos.

Now, we can upload video and see some of the capabilities in action:

Notice the Renditions and Thumbnail links; Renditions will allow us to manage multiple video resolutions and bitrates for our users, while Thumbnail permits a visually engaging image to be specified when the video is shown in a library or in search results.

The view of the uploaded video in the new library appears in this fashion:

This simple demo is just a small sample of setting up video content. To see the rest of the story, check out this video of our recent webinar on managing video content in SharePoint 2013 using out-of-the-box features.



read more
John UnderwoodManaging Video Content with SharePoint 2013
blog.jpg

Automating Service Requests

ThreeWill wrote a blog post for the Microsoft SharePoint Product Team Blog on building a generic Service Request Office Business Application (OBA) using InfoPath Forms Services.

Overview

Today’s guest blog post is from ThreeWill, a Microsoft Managed Gold Partner located in Alpharetta, Georgia that focuses on SharePoint development and integration. ThreeWill recently worked with Microsoft to build a generic Service Request Office Business Application (OBA) using InfoPath Forms Services. The application addresses the need for enterprises to have a no-code way to quickly turn around service request based SharePoint sites (i.e. sites that are using electronic forms to initiate a request and tie that request to a workflow).

Some Service Request examples are:

  • Request for Marketing Funds
  • Request for Laptops or other equipment
  • Request for Project Site Creation

The solution is packaged up as a SharePoint Feature to enable deployment to a Server Farm and standard SharePoint provisioning. The application supports integration with Active Directory to pre-populate user information and provides easy access to Web Services from InfoPath using Data Connections. Finally, configuration information is stored in a SharePoint List for secure yet convenient access to Site Administrators. Over to ThreeWill on how they did it.

Pej Javaheri, SharePoint Product Manager.

Read the full post at the Microsoft SharePoint Team Blog – http://sharepoint.microsoft.com/blog/Pages/BlogPost.aspx?pID=542
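
To make the configuration-list idea concrete, here is a minimal sketch of reading key/value settings from a SharePoint list. The original solution used InfoPath Data Connections and server-side Feature code; this sketch instead uses the SharePoint 2013+ REST API, and the list name (ServiceRequestConfig), the Value column, and the pre-acquired access token are all hypothetical.

```python
# Minimal sketch: pull key/value configuration from a SharePoint list,
# mirroring the "configuration in a SharePoint List" approach above.
# Assumes the SharePoint 2013+ REST API and an already-acquired access
# token; the list title and the "Value" column are hypothetical.
import requests

def get_config(site_url: str, token: str,
               list_title: str = "ServiceRequestConfig") -> dict:
    """Return the list's Title/Value pairs as a plain dict."""
    endpoint = (f"{site_url}/_api/web/lists/"
                f"getbytitle('{list_title}')/items?$select=Title,Value")
    resp = requests.get(
        endpoint,
        headers={
            "Accept": "application/json;odata=verbose",
            "Authorization": f"Bearer {token}",
        },
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["d"]["results"]
    return {item["Title"]: item["Value"] for item in items}
```

Keeping settings in a list rather than in code means Site Administrators can change behavior through the normal SharePoint UI, with list permissions controlling who can read or edit the configuration.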

read more
Pete Skelly – Automating Service Requests
blog-newspaper.jpg

Connect Multiple SSRS Servers

ThreeWill wrote a blog post for the Microsoft SharePoint Product Team Blog on extending Microsoft’s SSRS Web Parts to connect to multiple SSRS servers in an enterprise scenario.

Overview

Today’s guest blog post is from ThreeWill, a Microsoft Managed Gold Partner located in Alpharetta, Georgia, that focuses on SharePoint development and integration. You might remember them from a previous article about the SharePoint Connector for Confluence. They worked with Microsoft on a recent project to extend some of the features of the SSRS Web Parts to meet the needs of the enterprise.

In this project, ThreeWill helped a large telecommunications organization address a key constraint: the SSRS web parts bound them to a single SSRS report server, and they did not have the luxury of combining all of their reports into one scaled SSRS environment. Of course, like so many other projects, the client needed a solution today, not months from now. This article has the details behind what they did on a week-by-week basis. Over to ThreeWill.

Pej Javaheri, SharePoint Product Manager.

Read the full post at the Microsoft SharePoint Team Blog – http://sharepoint.microsoft.com/blog/Pages/BlogPost.aspx?pID=547
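
As a rough illustration of the multi-server idea, here is a minimal sketch that maps report folders to the SSRS instance hosting them and builds SSRS URL-access render requests accordingly. The original solution extended the SSRS web parts themselves; the server URLs and report paths below are hypothetical.

```python
# Minimal sketch: route report requests to whichever SSRS instance hosts
# them, using standard SSRS URL access. The original project extended the
# SSRS web parts instead; server URLs and folder names are hypothetical.
from urllib.parse import quote

# Hypothetical mapping of report folders to their home SSRS instance.
REPORT_SERVERS = {
    "/Finance": "http://ssrs-finance/ReportServer",
    "/Operations": "http://ssrs-ops/ReportServer",
}

def render_url(report_path: str, fmt: str = "PDF") -> str:
    """Build an SSRS URL-access request for the server hosting the report."""
    for folder, server in REPORT_SERVERS.items():
        if report_path.startswith(folder + "/"):
            return (f"{server}?{quote(report_path)}"
                    f"&rs:Command=Render&rs:Format={fmt}")
    raise KeyError(f"No SSRS server registered for {report_path}")

# Example: renders from the Finance instance.
print(render_url("/Finance/QuarterlyRevenue"))
```

The lookup table plays the same role the web part extension did: a single configuration point that lets one SharePoint site surface reports from several SSRS environments without consolidating them.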

read more
Eric Bowden – Connect Multiple SSRS Servers