|Danny:||Hello and welcome to the ThreeWill podcast. This is your host Danny Ryan and I’m here with Tim Coalson. Tim, how’s it going?|
|Tim:||Good, how are you doing today Danny?|
|Danny:||I’m doing great. We’re wrapping up the day here, a little past four. I appreciate you taking the time to come and sit with me for a little while.|
|Tim:||What better way can I spend February 14th, Valentine’s Day?|
|Danny:||You don’t have to hold my hand though man.|
|Tim:||Are those chocolates for me?|
|Danny:||You’ve done something sweet for Sonya, right?|
|Danny:||You’re already done with doing something?|
|Tim:||Just my mere presence.|
|Danny:||Your mere presence? Tim, let’s talk afterwards. We’re going to have to talk a little bit after this, okay?|
|We wanted to talk today about logging, the exciting subject of logging. What I wanted to find out was how are you using it on projects, what are you doing with it? Just get me up to date on what’s going on here.|
|Tim:||I am and have been working, probably for the last year, on a public site for a tax and auditing software company. We are working on a support site, and we found that logging could be really valuable to us. We initially had logging just for normal debugging: capturing errors when we call services and things like that.|
|We sort of have gotten a broader vision of how logging can be used, not only just for debugging but also to start to capture some analytic type information. Part of our project has been the creation of a logging framework with an associated viewer and a lot of configuration options to be able to look at different parts of the application, either in standalone log files or collected together in a bigger file.|
|Basically the idea is that from the time a user logs on, even if they’re not authenticated, using a session id we can see exactly what that user did, how they used the system. There are tools out there to do some of that as well, but we’re doing this at a logging level to be able to see how they navigated through parts of the application based upon their activities.|
|For example, if they did a search in our knowledge base, did they have to page through the results or did they seemingly get their answer on the first page? If we see that once they did their search they had to page through the results to find what they were looking for, that might indicate that the relevancy in our search needs to be tweaked.|
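As a loose illustration of the kind of session-tagged event logging described here, a minimal Python sketch using the standard logging module as a stand-in for the project’s Log4Net-based framework (the field names and event format are invented for the example, not the project’s actual schema):

```python
import logging

# One shared logger for analytics events; in the real system the framework's
# configuration would route this to its own log file.
analytics = logging.getLogger("analytics")

def format_search_event(session_id, query, page):
    # Tag every event with the (possibly unauthenticated) session id so an
    # analyst can later reconstruct one user's path through the site, e.g.
    # repeated page-throughs after a search hinting at poor relevancy.
    return f"session={session_id} event=search query={query!r} page={page}"

def log_search_event(session_id, query, page):
    analytics.info(format_search_event(session_id, query, page))
```

With events shaped like this, grouping log lines by `session=` is enough to replay a visit from first search to last page viewed.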
|Danny:||Are you using this logging for when you have issues on the site and people raise up that there is a problem that they’re having with the site? Are you using this logging tool to try to reproduce the error that they’re getting?|
|Tim:||Right, it’s really being used for all these things we’re talking about; not only analytics but troubleshooting. For example, we’re generating log error reports on a daily basis so that an administrator of the log can go through and see all the errors that were raised in the application during the day.|
|If you start to see certain errors happening multiple times, then certainly that’s a good area to research: “Is there a problem with the application, or is there a path that the code is going down that we didn’t anticipate and that’s causing problems?”|
|It’s really kind of a proactive way that even if users don’t take the time to submit a call or submit a ticket on the site, it’s a way for us to kind of proactively see, “Wow, maybe there’s a problem here that we need to fix.”|
|Danny:||You use the word framework, are you using any open source frameworks for this?|
|Tim:||Log4Net is the foundational tool that’s being used. Then part of the project team has written wrappers around that to be able to configure it. We have configuration screens where log administrators can go in and establish, for each part of the application, what level of logging we want to see.|
|Do we want to see info-level logging, which is kind of a high-level view of what the user is doing on the site? Do we need debug level? Typically we run at the info level, which is a pretty high-level view. If we start to see issues or want to see at a lower level exactly what’s going on, we can set it to the debug level.|
|We have statements scattered throughout our code that are showing the values of certain objects throughout the life cycle of certain transactions that happen in the application. That way you can kind of see what was going on at any given point in time and any error information, if there is an error that you’re trying to trace down.|
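A rough sketch of what per-component log levels look like in code, here with Python’s standard logging module rather than the actual Log4Net wrapper (the component names and the LEVELS table are hypothetical):

```python
import logging

# Hypothetical per-component levels a log administrator might save from a
# configuration screen: run everything at INFO day to day, then drop one
# area to DEBUG while chasing a problem.
LEVELS = {"search": logging.INFO, "checkout": logging.DEBUG}

def configure(levels):
    # Apply the configured level to each component's logger. DEBUG
    # statements scattered through the code stay cheap while their
    # component is held at INFO, since the logger suppresses them.
    for component, level in levels.items():
        logging.getLogger(component).setLevel(level)

configure(LEVELS)
```

With this in place, `logging.getLogger("checkout").debug(...)` calls would reach the log, while the same calls under "search" stay silent until an administrator lowers that component’s level.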
|Danny:||I’m sorry, ever since you used the word wrapper I have this stuck in my head; wemote access WAIN wapper. I don’t know why I think Elmer Fudd saying that, I don’t know what part of my history that comes from.|
|Sounds like the team is utilizing this pretty extensively. Are you using this on other projects or is it just this one project?|
|Tim:||We have used it on other projects; within our applications, oftentimes we’ll create a logging table where we log any kind of exceptions or any information that we want to be able to track.|
|Matthew Chestnut, one of my coworkers, has a much longer history with logging, maybe because he’s a lot older than me. He has some nice tools that he uses, DebugView and BareTail, so that you can actively watch the logging as it occurs and see it as it scrolls; which is nice when you’re trying to validate that your logging makes sense, that it tells a story, or just as you’re running the application.|
|With this BareTail tool you can turn on highlighting so that anything with the word error will show up in red; you can easily see an error as it scrolls across your screen because it’s going to be in a bright red font. It’s just a nice way, as you’re working, to see even unexpected errors that are being caught by your exception handlers. Nevertheless, you can see them in the log and know that they are happening.|
|Even though an error may not show up within the application screen, and the user may not be aware of it (hopefully they’re not seeing errors), behind the scenes there could be errors that are being thrown and caught. These tools allow you to see those in real time as you’re testing.|
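For readers who want a similar effect from a terminal, a small Python approximation of that BareTail-style highlighting (the marker word “error” and the ANSI color codes are choices for the example, not anything from the project):

```python
import sys

RED, RESET = "\x1b[31m", "\x1b[0m"

def highlight(line):
    # Paint any line that mentions an error bright red so it jumps out
    # as the log scrolls past, the same trick described for BareTail.
    return f"{RED}{line}{RESET}" if "error" in line.lower() else line

def watch(stream=sys.stdin):
    # Pipe a log through this (e.g. `tail -f app.log | python watch.py`)
    # to see it scroll with errors highlighted as they happen.
    for line in stream:
        print(highlight(line.rstrip("\n")))
```

This only mimics the one highlighting rule mentioned above; BareTail itself supports multiple configurable highlight patterns.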
|Danny:||Is this something just our team is going to use, or is this something that we’re transitioning over for them to use as well? Are both teams using it?|
|Tim:||It’s been a joint effort. Actually, both teams were using it, but on this project it’s become a lot more formalized. We’re building this whole framework around the logging, so that it’s not only a developer tool but is really becoming a system administration tool, where a person will have the assigned role of going through and reviewing these logs on a regular basis to determine whether there are things happening that we need to proactively fix before we start getting calls from customers.|
|Or are there other usage patterns that, from an analytics standpoint, we can learn from to better organize the data on the screen and make the most important things that our users are hitting a little bit more visible?|
|Danny:||Are you guys using an analytics tool at all?|
|Tim:||The long term view is to actually purchase a product so that essentially all these logs that are being written to on the individual front-end web servers can all be collected in a single location and a lot more analytics done on the data.|
|Danny:||Cool, good stuff.|
|Tim:||It’s interesting. Sometimes your vision of logging, at least for me, is expanded. We’ve used Google Analytics obviously in the past to capture certain events. This is just another opportunity to collect more information within the application and have it available to be able to look at; slice it and dice it, and kind of see what’s going on.|
|Danny:||This has been nice because this project is a sustainment project, so it’s a multi-year project. Typically we’re building out these applications and then transitioning them over. This is one where we’re supporting the application over the long term.|
|Tim:||We’re being part of the whole evolution from the beginning, where it was really more of a debugging tool, to now where it’s being evolved by the entire team to be more of an application into itself that we can get a lot of value out of.|
|Danny:||Excellent. Anything else before we wrap up?|
|Tim:||I think that will do it. I’m looking at that chocolate over there.|
|Danny:||You got some chocolate? Where? Thanks Tim, thanks for taking the time to do this.|
|Tim:||Have a good day.|
|Danny:||You betcha, take care. Thanks everybody for listening. Bye-bye.|
Hello everybody. My name’s Pete Skelly. I’m a principal consultant and Director of Technology here at ThreeWill and I’d like to thank y’all for joining me today. We’re going to do a quick presentation on what ThreeWill’s calling the new business operating system and discuss the combination of Office 365, Microsoft Azure and on-premises products and how they can create value for your business.
So, let’s dive right in. We’ve done this presentation a couple of times and typically I start off and explain why we’re doing this live event. So, in October of 2014, we published a white paper that described what we call the new business operating system and why we believe the new business operating system provides some compelling opportunities for enterprise collaboration, increases in productivity, and innovation.
Second, we wanted to share how we see clients using the Cloud today. For most of our clients, hybrid is their reality. So, really explaining that the hybrid environments in Cloud on your terms is really what the Enterprise Cloud is going to be like for quite a while.
Then finally to discuss some benefits and success stories that ThreeWill’s learned over the course of about the last two years moving clients to the Cloud, working on some proofs and concepts, and also find out how some folks are using the Cloud offerings today from Microsoft and from other vendors and how that hybrid story is playing out.
First, the Cloud means a lot of things to a lot of people. There’s a lot of information about the Cloud that, frankly, can be cloudy. Pun intended. So, let’s define some terms. To start off, Cloud computing can be a really scary topic to a lot of folks. A lot of clients are very concerned about moving to the Cloud. They often have concerns about compliance, or how do I move an existing app? What does it mean for Office 365? What does it mean for my users? What does it mean for my investment in SharePoint? So, let’s start with some terms and some delivery models, and then we’ll dive into a little bit about what the new business operating system actually is.
So, to start, the first delivery model is something that everyone’s familiar with. It’s the traditional IT model, where you own the entire stack of delivering an application: from the networking and physical storage, to the actual server hardware, managing the OS, all the way up through any data that you have to provide, disaster recovery solutions, the application itself and all the clients that would consume it. These can be on-premises, they can be private in a private Cloud like PC Mall or Rackspace, and they can also be in public Clouds like Microsoft Azure.
The second hosting model, or delivery model, is infrastructure as a service, and this is when you start to move to managing some of your infrastructure as a managed service. So you begin by virtualizing some hardware, some storage and a portion of the OS. These can be on-prem; these can be, again, in the Cloud with a private provider like PCM or Rackspace, and also with Azure or another Cloud provider. The key here is moving up that stack so that you’re concerned with less of the physical hardware and less of the management of that infrastructure for servers and hardware, even to the point of patching some of that OS. You may be responsible for some of it, but you’re moving up that stack.
The next delivery model is platform as a service. Platform as a service, or PaaS, provides a solution as a service typically built on top of infrastructure as a service, or IaaS. This can be provided, again, in on-prem, private or public situations, and here the focus pulls all the way up to the top of the stack, where you’re more concerned with your applications and your data. We’re less concerned with the OS; that’s even taken out of our hands in most cases. We’re very concerned with how the data and the applications provide value to our business.
Then finally, there’s a business model which is software as a service, and this is a full solution. So, SaaS, or software as a service, is a business model where everything is delivered to you, from networking, storage, servers, and data to the application itself. You’re consuming that service and you may have multiple clients: a desktop, a laptop, a phone, etc. This is just one perspective. If we take a different look at how those delivery models are consumed, you begin to see a little bit of a different picture. So, in an IaaS, or infrastructure as a service, delivery model, we’re typically going to migrate to it. So we may take physical resources, package them up, and put an old legacy system on a new virtual machine management system in the Cloud. So taking something like an old accounting application that has to run on Windows Server 2003 or Windows XP and actually putting that up into the Cloud using infrastructure as a service, that’s one way of migrating to one of those delivery models.
The second, in platform as a service, we’re typically going to build on it. So, in platform as a service, or PaaS, we build the solution on top of the platform, typically something like e-mail or storage. We’re going to interact with the compute cycles or we’re going to consume some of the data storage for image processing, for example. So, we typically build on top of an operating system. All the middleware may be provided to us. Some of the frameworks may be provided to us, and this enables us to customize and build applications that are really providing our business value, without having to worry about some of those infrastructure and data center physical resources.
Then finally, from a SaaS, or software as a service, perspective, this is really the consume-it model. So with a SaaS solution, you’re consuming the entire app: UI, configuration, across multiple devices, etc. This is a different perspective, and the reason I have the shading here will become a little more apparent in a different slide. So, remember how that shading appears.
There are also four compute patterns that are often associated with challenges that the Cloud delivery models address. The first compute pattern is On/Off. This is typically used for development tasks, or prototyping: very intermittent compute needs. Think here of business intelligence processing, or nightly processing of call center data for reports. These are all things that you would typically over-provision hardware for. You’re just going to have something running for a short period of time, then it’s going to be off for quite a while and then run again in the future.
The second compute pattern is Growing Fast, and this represents a pattern of growth in which it’s unlikely that you can provision hardware fast enough to respond to increases in need for your application. Think of Facebook, or Twitter, probably Snapchat or any of the consumer apps that take off and grow extremely fast. They have essentially zero deployment lead time, so they need hardware as quickly as possible, and you typically can’t anticipate, “How am I going to deal with that?”
The third compute pattern is Unpredictable Bursting, and this is unexpected growth. So, my service or application may be going along perfectly and then I have this giant spike. Here you might think of something like when Ashton Kutcher tweets that he’s going to back your start-up and you get billions of hits to your website instantly. There’s no way you can predict that, but the Unpredictable Bursting model is something that you have to deal with.
The fourth compute pattern is Predictable Bursting. This is for things like seasonal, or predictable, loads. Things that are cyclical: tax calculations, for example, or seasonal staffing, if you’re a logistics company or a retailer that needs to increase staff and compute time for some of your operations. You know those things are coming, so you can plan for the compute needs you’re going to have based on those cyclical needs.
We talked a little bit about Cloud delivery models and about the compute patterns that they can address. So, let’s dive a little bit deeper into what we mean by that operating system analogy. First, the new business operating system really starts with that top layer, an application layer if you will. On a desktop operating system, we’re all familiar with using different applications: e-mail, a word processor, any app that you can think of. With the new business operating system, this is now a transparent layer. You have to be able to consume applications from a browser, from an Android or iOS device, desktops, laptops. I can use all of them in this application layer, but I also have to have access to things like business apps, not just things like Office 365, or Yammer, or my typical office productivity apps.
The second layer of the new business operating system is a security layer. With a traditional desktop operating system, I typically wanted to know who the user was for audit reasons; perhaps I wanted to have group policies that provisioned applications, might want to know what they were accessing, might want to be able to say which locations on disk they could save files to, etc. With the new business operating system, I need to know that as well, but I need a mechanism that is going to work in that transparent mobile world. I need something that will enable my identity and my user’s identity to be location transparent, and more and more OAuth, specifically OAuth 2.0, is the security mechanism that providers are using to enable that location transparent identity and still provide things like audit access, group management, etc.
The third layer is a services layer. In the traditional desktop world, when you use something like Microsoft Word, or PowerPoint, or Excel and you click Save, I doubt that anybody really cared about how that document got saved, but there were services taking care of this. The new business operating system is no different. In the new business operating system we just have those services at a higher level, things like e-mail, tasks, calendar, search, workflow and many others, but I’m not limited to just consuming Office 365’s APIs. I can consume my custom business APIs or services and third party services. I can also consume other Cloud services from other providers as well.
Finally, the fourth layer of an operating system traditionally deals with things like storage, caching, memory management and a whole host of other things. For example, if I send you a five megabyte Word file and you click Save, you’re not thinking about thread management, or how that file’s going to get stored to disk, or any of that. In the new business operating system, we have that same series of needs for things like scheduling and caching, and we have to have hardware that things physically run on, but in the new business operating system, the combination of Azure, on-premises services and that operating system layer manages memory and CPU usage, responds to application needs, and can automatically scale to meet those compute patterns we just talked about.
So, this is all great, but why is it important and why do we think this is going to help enterprises be more productive and potentially innovate? What problems does this new business operating system really solve? We said that the Cloud, for the enterprise, is about hybrid. So, let’s take a quick step back and look at what your Cloud profile might look like when you’re solving the delivery model and compute pattern problems that we just described, and how that new business operating system might help.
So, first there’s the on-premises world. This is that traditional IT, you-own-everything, on-premises situation. It’s familiar to everybody. Everything is on-premises. We’re all happy consuming maybe SharePoint data and Exchange, etc., and I’m in control of all of this. At some point, my CFO may come along and say, “I’ve heard about the cost savings from the Cloud and I want to start consuming software as a service for commodity services like e-mail and calendaring.” This is great; now we’re starting to dip our toe into the world of the Cloud. I may see some opportunity to use infrastructure as a service for something like that accounting app that I mentioned, a legacy application that we could potentially package up and move to the Cloud in an infrastructure as a service capacity. This may or may not work perfectly. You may have some adjustments to make, but you can get that service off into the Cloud and maybe mitigate some risk from some old hardware that’s about to fail.
Then you might say, with new projects, we’re going to start to move to the Cloud by creating platform as a service, or PaaS, solutions that you can develop using SQL storage, for example, or mobile notification features, anything that you can do to pull those services up into that PaaS environment and reduce your IT operations burden. At some point, you may have a salesperson come to you and say, “You know, this is great. I have marketing data that our salespeople have put out in the public SharePoint environment, and I have private information that is sales related, and sales figure related, and I need to make sure that I’m pulling the data from both locations.” How can we deal with that? So, you’re probably going to end up with a hybrid Cloud search situation. In this case, you might want to search across SharePoint in the Cloud and SharePoint on-prem and actually have a unified search experience.
At some point, you may find your company needs services like a private Cloud. For example, I may need to host virtual machines, networking, etc. outside of my own environment. So, I’m going to probably work with a private hosting company and try to get those IaaS services, or infrastructure as a service, into a private Cloud. Maybe I have specific needs for more compliance, or I need more control; maybe we’re ready to go to the Cloud, but not totally ready to go to a public Cloud.
Once we have that private Cloud, we may find that we actually can manage this environment and we do have the need to get more benefit from those commodity type services. So we could, potentially, move to a dedicated [inaudible] Office 365 and actually have dedicated services or an isolated environment for those commodity services.
Then we may find that that same salesperson, or sales team, that was working with external customers for marketing and sales data now has to have some integration with additional SaaS services, or software as a service, that they and their clients are consuming from the public Cloud. So, at this point I may need to integrate with a CRM system, for example, and my on-prem or my IaaS solutions may also need to save documents back to another public SaaS service.
Finally, I may have infrastructure as a service on-prem, using virtual hardware, etc., that I need to connect up to public Cloud IaaS services for things like big data analysis, or I have a factory floor with IoT devices that I want to update from on-prem to the Cloud, or I want the Cloud to process the data but I want the reports built internally. So, based on this kind of diagram, or picture, of how the Cloud really will operate for most enterprises, the reality is hybrid Clouds are in your future, and the new business operating system that we just described helps you interact with this type of Cloud and manage your business processes and your infrastructure securely, transparently, and in a really scalable way.
So we talked about what the Cloud is and what the new business operating system is. Let’s talk about some of the benefits that we think the new business operating system can provide to your business. The first is that the new business operating system really helps you move to the Cloud on your terms. This is the most critical benefit. Since most enterprises are likely to require some sort of hybrid environment, the new business operating system is delivery model independent. You can combine public and private Clouds as we said. You can use e-mail, or storage, any of those commodity type services. You can then use infrastructure as a service to host some legacy CRM application that you have to keep and that will take you a while to get off of. You can combine all of these things in on-premises, private, public, and hybrid scenarios: things like search, reporting, big data analysis.
A second way that this enables on-your-terms architectures is incremental adoption. You can incrementally adopt the Cloud. It’s not an all or nothing situation. You can slowly adopt the Cloud by using on-premises or private IaaS solutions and migrate slowly to the Cloud when you have the opportunity. You can gradually decrease your IaaS footprint and increase your PaaS, or platform as a service, footprint over time as those applications need to be replaced.
Finally, you can consume those SaaS services when they make financial and business sense. If you can do it, great. If not, you still have those other options. The new business operating system enables you to move from on-premises to private and public Clouds transparently. So, I can have new business apps using the new app models of today on-premises, and this enables me to future-proof my applications as I want to move them into the private or public Cloud. This also lets me manage virtual machines across Cloud boundaries with an easily managed, simplified, single control surface for all of my hardware, all of my virtualization needs, and all of my application needs.
The second benefit is it reduces the time you spend on routine maintenance. Using commodity services like e-mail, SharePoint, Lync and more decreases your IT operations management surface area. The second thing is, by looking to reduce your IaaS management and increase your PaaS solutions over time, you’re going to spend less time in routine maintenance. As you increase those platform as a service, or PaaS, solutions in your environment, you’re also increasing IT’s ability to participate in adding value to the business. You’ve got tons of IT folks that are really smart people that aren’t necessarily wasting their time, but are spending time on low value tasks when they could spend time really impacting your business, because they know a lot about your business.
As just a point of reference, in late 2013 80% of IT budgets were still tied up in routine maintenance. Just think about if you could unlock some of that potential and turn your IT folks loose on helping your business.
So, the third benefit is the new business operating system enables innovation. Together with Microsoft moving to what’s called the continuous delivery model, this is a really critical change in the way that they do business. So no more three-year product life cycles, no more waiting for SharePoint 2007 to become 2010, or 2013 and 2016. If you’re using Office 365 or Azure, features are continuously and incrementally released monthly, weekly, even daily if needed for security fixes or critical bugs. This continuous delivery life cycle of products and services means that we have to change as well. So, how we, as consultants, or you, in your particular business, provide value has changed.
The second thing is the Cloud moves faster than Moore’s Law. Moore’s Law states that every 18 to 24 months processors will cost half as much to produce and be able to perform twice as many operations. I probably butchered that a little bit, but you get the gist. We used this to actually plan our business and IT strategies for years. According to that two to three year cycle, what did we used to do? Licensing: we used to redo licensing agreements every two to three years. We would purchase hardware every two to three years. For servers, we might do a hardware refresh, or phones or laptops for our end users. But we’re not constrained by that two to three year cycle anymore. The Cloud moves much faster than Moore’s Law.
Third, the new business operating system allows us to incrementally innovate. So, I can move more and more to those PaaS solutions. Once I do that, I can find opportunities to innovate, because now I’m not constrained to those two to three year cycles. I can innovate much more quickly and perhaps differentiate myself from my competitors for my customers. So all of these combine to make businesses more efficient and IT more productive and focused on ways to innovate.
The fourth benefit is that the new business operating system promotes process cohesion. If we look at a traditional full stack application, typically you’re required to create and deliver everything in that application. So, let’s take a look at a theoretical, hypothetical onboarding solution. If I were going to onboard a large number of users, I had to build the entire solution: provision hardware, patch the OS, and go through and make sure networking was set up correctly. As for integration with other systems in that middle tier, in what we would now think of as the PaaS layer, I had onboarding functions that were specific to my business value, plus security issues, but all the way up that stack I was responsible for everything. This solution does not scale well. For those delivery models that we discussed earlier, those delivery models and compute patterns are limiting to me in this environment.
So, the new business operating system promotes a more cohesive process and application. If we were to build that onboarding process today, we might start with our business logic and really figure out how I can provide that as more of a service that can be consumed across multiple layers. Well, if I start with a service for my onboarding logic, then I can start to consume other PaaS services like storage. I can consume an e-mail service. I can start to integrate with other third party apps that are responsible for their own infrastructure as a service or PaaS solutions.
Then we can start to think of [inaudible] as a consumer of our services versus something that we’re providing and tightly coupling ourselves to, so we can provide SharePoint as an interface, Outlook, and even Word. So, if we’re building on the new application model from Microsoft, I actually have the ability to write even a single code base that can be consumed from those three environments.
This trend of cohesive business processes, things that are tightly defined and very small, composable, compact services: it’s not new. It’s not the nouveau thing, but it does hint at a larger trend. So, if everybody’s using the same SaaS solutions, or consuming some of these PaaS solutions, how do you differentiate? Well, if you start to notice these little changes, you can start to look at how these things are going to impact you long term. What are the bigger changes that are coming? Increasing the number of PaaS services that you have over time will increase your integration opportunities exponentially. So, this is really where innovation can start to occur.
The fifth benefit is that this stack is technology agnostic. The new business operating system enables us to use the right tool for the job. So, if you are familiar with .NET, ASP.NET, C#, SQL Server, etc., then go ahead and use those tools in your tool bag. But if you have a different tool bag, a different skill set, or different business needs, you might want to use the LAMP stack; Linux, Apache, and PHP may be what your developers internally know, and you’re not constrained anymore. If these things make sense, use them. A frequently asked question is: why would I not use a relational database, or why wouldn’t I use a document store for this? Now you can do those things when developing Office and SharePoint solutions.
As your business grows, you’re going to see new opportunities to innovate, and those opportunities may require some new architectures or software designs. This new business operating system supports massive scale and all of those compute models. There are a variety of services that you can consume, from storage to mobile messaging and all sorts of things, including high-throughput processing for IoT, media services, and more. Those things are not only going to be available to you, but you also have to think, “If I take a dependency on another PaaS service, how do I create a resilient design that can compensate for some of the failures that might occur?” Networks will go down. Services will have interruptions. There is no such thing as 100% uptime. So, you’re going to have to learn to look at some new opportunities for architecture and design.
Finally, the new business operating system gives you the freedom to choose the right technology based on your business scenario. So, you can look at SharePoint, Outlook, custom web apps, and workflow for your departmental processing needs, and you can also use the new business operating system for media integration, IoT, Power BI, and SharePoint, all those things. When you find something that is critical for your business to be successful, you can utilize the new business operating system to effect that change, and you’re not driven by technology. You’re driven by the problem you’re trying to solve and the right solution for it.
So, to recap the five benefits of the new business operating system: the top benefit, architectures on your terms, is really what this new business operating system is about, and it’s what ThreeWill is trying to explain to some of our customers to reduce some of the anxiety about moving to the Cloud. It’s not an all-or-nothing proposition. You can move to the Cloud on your terms and get all of these benefits out of moving to the Cloud.
So, some success stories. About 18 months ago, we created a proof-of-concept, Azure-based sales enablement application called Popcorn. This was a contextual search application that aggregated content across multiple SaaS services. It really proves the point about cohesive services: once you start to combine them, you can get powerful new applications out of those different services. This had not only search but integration with other technologies, like push and dialing technologies for mobile phones, so from a search result you could actually dial your phone and start a conversation with someone.
We had a recent client that we completed a seasonal staffing application for, so that fictitious onboarding example isn’t so fictitious. It’s a real-world example of an Office 365 intranet and an Azure-based public-facing web application that eliminates a paper-based, Excel-based hiring and termination process. The internal processes for approval, routing, some of the process automation, all the status management, and the internal HR review process are all handled within Office 365. The background checks and the integration with the termination, hiring, and payroll processes are all done on the Azure side. This is a great example of that predictable burst pattern: consuming Office 365 commodity services, the intranet, the HR staff go about their business on a daily basis and perform their jobs like normal, and as their season starts, that cyclical need for compute power is handled automatically.
The third example is a hybrid search environment that we created for a client that needed search across on-premises and Office 365 environments to deliver aggregated search results. They needed single sign-on, and they needed people who were external to an office to be able to log in, search on-premises data and Office 365 data, and get aggregated search results.
The fourth example is an iOS application proof of concept that we wrote that uses Office 365 and Azure to access SharePoint documents and document libraries using single sign-on from Office 365, and that provides native capabilities of the iOS device like search, offline storage, sharing, texting, etc. This application really suits the need where, if I need an application that has native functionality and pushes into that familiar experience that users have as consumers, it’s more than possible. I can integrate with all of Office 365 and Azure from different devices today.
Finally, we’ve recently completed an infrastructure-as-a-service-based SharePoint 2013 portal for a law firm. This IaaS-based portal for clients and matters enables internal users to have that single sign-on experience and external users, their clients, to actually log in and access resources, all using infrastructure as a service to manage and house a complete SharePoint 2013 farm. They needed some special features, they had some special code that needed to run, and this enabled IT full control of that environment.
In summary, the new business operating system really lets you provide value to your business by consuming those Office 365 commodity services like e-mail, SharePoint, and Lync. It can also help reduce long-term costs gradually by letting you reduce your on-premises IaaS needs and increase those PaaS solutions over time.
Hopefully, we’ve made the case that the hybrid Cloud is probably in your future. It is the enterprise Cloud. Using the new business operating system can help flatten that control plane that IT, operations, and your enterprise have to deal with in order to manage that hybrid environment.
Finally, ThreeWill can help define your business application profile and see how the new business operating system can help start moving you to the Cloud today. As a final note, the Cloud isn’t just about compute power and the cost of storage. Many people say it’s a race to the bottom as far as storage costs, etc. To us, the decision about using the hybrid Cloud is about creating new opportunities: finding new ways to use the data, providing value to your business by moving to these different compute models, and finding ways that the new business operating system can add value to your business today and in the future.
Hopefully, the Cloud is a little less scary at this point. Thanks for everybody’s attention. Here’s some of my information. Feel free to shoot me an e-mail or ask any questions. Here are my social media handles, so if you want to talk to me on Twitter, I’m @pskelly, or @ThreeWillLabs, and thanks for your time.
SharePoint 2013 App Workflow – Weird Part #1: WriteToHistory has some strange limitations
Generally the WriteToHistory workflow activity is used for one of two purposes:
- To leave a simple audit trail of messages for users of a workflow
- As a debugging tool for developers
If you’re like me, you have a WriteToHistory activity between almost every major step in your workflow to make sure it’s functioning correctly (although, let’s not forget to take those out before we go to production!).
Let’s look at the WriteToHistory limitation by considering a simple scenario:
- We have a SharePoint-Hosted App
- The App includes a single list (such as an Announcements list)
- The App includes a Visual Studio-authored workflow that is triggered when an item is added to the list
Now, to demonstrate the problem at hand, let’s consider the workflow to be configured as follows:
- The workflow consists only of a Try/Catch block
- The Try block includes two WriteToHistory activities: one that writes a “short” message, and one that writes a “long” message
- The Catch block simply notes any exception that occurs and uses a WriteToHistory activity to record the error
You can see this illustrated in the following figures:
Did I tip my hand there? Yes, the issue here is the length of the message being written. If one attempts to write a message that is longer than 255 characters, the entire workflow crashes. No “Catch”… no error message… the workflow simply stops. (As an aside, really?!? This is 2014; do we still have to worry about 255-character boundaries?) On the other hand, if I go back and shorten the “longer” message to 255 characters or fewer, then everything works as expected.
Obviously, we’ll need to make sure we cover our bases on this going forward: never allow a workflow to attempt to write more than 255 characters with the WriteToHistory activity. We can ensure this by always calling a method on the string to trim or substring it to a length of 255 or less.
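The guard itself is just simple string handling. Here’s a minimal sketch in JavaScript (the helper name is my own invention; inside a Visual Studio workflow you’d apply the same logic with the equivalent .NET String.Substring call before binding the Message argument):

```javascript
// Hypothetical helper: cap a WriteToHistory message at the 255-character
// limit so the activity never crashes the workflow.
function truncateHistoryMessage(message, maxLength) {
  maxLength = maxLength || 255;
  if (typeof message !== "string") {
    return "";
  }
  // Substring only when necessary; short messages pass through untouched.
  return message.length <= maxLength ? message : message.substring(0, maxLength);
}
```

Running every message through a guard like this means even your most verbose debug output stays safely under the limit.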
SharePoint 2013 Apps
Our core business is building custom business solutions on the SharePoint Platform that help teams “work together better.” We see our clients’ solutions shifting from SharePoint On-Premises to SharePoint Online (in the “Cloud”) due to the roadmap and focus of Microsoft. Because of this shift, we believe it is important for our customers to build their current customizations with the SharePoint 2013 App Model. This will put them in a much better position to move to SharePoint Online over time, with a lower cost of migrating their customizations to SharePoint.
Going to the Cloud
You can read more about our thoughts about enterprises going to SharePoint Online in the Big Bet #1 – SharePoint 2013 Migrations and Hybrid Environments.
We have seen a progression of the SharePoint application development model going from non-existent to a rich and robust set of APIs and tools. When we decided to retool on SharePoint development in 2006 (see “A Bit of ThreeWill’s SharePoint History” below for more detail), the SharePoint 2007 development model was farm solutions. These farm solutions ran on the SharePoint Server and were the first truly viable platform launched with a rich set of APIs for customizing the SharePoint Server. This was a huge benefit for the application developer, but over time it became a concern to the administrators of the SharePoint Farms. The concern was that any rogue code could bring down the entire SharePoint Farm, or that “Full Trust” code could create security issues. Because of this concern, many companies deployed two SharePoint Farms: 1) a SharePoint Farm for customized solutions and 2) a SharePoint Farm for standard collaboration with no customizations. Having separate farms mitigated the concern of custom solutions bringing down the SharePoint Farm. To allow companies to run customizations on the same farm as standard collaboration, the SharePoint team introduced Sandboxed Solutions in the launch of SharePoint 2010. Sandboxed Solutions provided the ability to isolate custom application code from the rest of the SharePoint Farm. That process isolation came at a cost, however, and restricted what you could do in code in Sandboxed Solutions.
A Bit of ThreeWill’s SharePoint History
In October 2006, we made a conscious decision to switch from being a C# development organization to a company that creates custom collaborative solutions on the SharePoint Platform. This started with building custom solutions on SharePoint before it really became mainstream for enterprises. We have been amazed at what we can do by focusing on SharePoint as our starting platform for a collaborative business solution. It has allowed us to build solutions without having to build the framework first. Over the years, taking SharePoint as 80% of the solution and customizing the other 20% has allowed our clients to solve more business problems by focusing on the business problem vs. creating framework code.
Now with SharePoint 2013, we have a new Application Development Model which is the SharePoint App Model. This App Model allows for taking advantage of running your apps on Azure or on the SharePoint Server with your code isolated from the SharePoint Server. This App Model can leverage a robust set of APIs from SharePoint and has an incredible amount of infrastructure that allows you to discover your custom applications and other third party applications. Also, this new App Model allows for the SharePoint Developer to tap into some of the mainstream application development approaches like ASP.NET MVC / WebAPI.
Why Are We Excited About The SharePoint 2013 App Model
We can develop applications with better user experiences
We can use a wider set of tools and frameworks to build apps
We can build applications that can work On-Premises or in the Cloud/Online
As usual, SharePoint development has a deep set of options and services available to the Collaboration Application Developer. Our team is reenergized with all the new things we are learning around Azure, SharePoint 2013 and Office 365. We are seeing a number of new possibilities for how we can solve the challenging collaboration needs in the enterprise. The SharePoint world is facing one of its biggest changes with the Cloud and the SharePoint App Model. These changes are necessary to keep pace with what enterprises are demanding in the fast-paced world of leveraging software to support business processes. We are excited about helping our clients automate their collaborative business processes using the SharePoint 2013 App Model.
Deeper Technical Dive
One of our Principal Consultants, Eric Bowden, shared some of his thoughts back in April of last year on the high points of SharePoint 2013 App Development. If you are looking for a more technical view of what we think about SharePoint 2013 Apps, take a peek – Top 5 SharePoint 2013 App Development Book Takeaways. Also, if you want to start building your own SharePoint 2013 Apps, be sure to check out our technology evangelist John Underwood’s post, Start Building SP 2013 Apps.
What do you think about the SharePoint 2013 App Model? Do you think it is a good thing? Do you think there will be adoption of the App Model this year (in 2014), or is it too early for enterprises to adopt this new model? How will this better enable your IT organization to serve the business needs you have today?
A Quick Note on the SharePoint App Marketplace (aka Office Store)
By the way, I did not go into the details of the SharePoint App Marketplace. I think it is a great foundation for discovery of internal and third party applications. I don’t think it will be key to Enterprises in 2014, but I do see this gaining traction over time.
A lot has been made of the new App model for SharePoint 2013, but the companion app model that is available for Office 2013 seems to be getting less attention. Perhaps this is because there’s already a large community of SharePoint developers and so they are naturally drawn to anything new and exciting related to SharePoint.
While Office Apps may open some interesting possibilities in a general sense, I believe that they provide some specific opportunities for SharePoint developers, particularly in the area of integrating SharePoint data with office documents.
In this blog post I’m going to document some of the basic moving parts in an Office App. Before doing so, let’s clarify some terminology:
- When I say “Office Application”, I mean a member of the Microsoft Office suite of applications, such as Word, Excel, PowerPoint, etc.
- When I say “App”, “Office App” or “Office 2013 App”, I’m talking about code that we would write that would be embedded within one of the Office Applications for the purpose of manipulating a document that is currently open in the Office Application.
What Kind of Apps Can We Build?
In general, we must make two decisions when creating an Office App:
- What Office Application(s) are we targeting?
- Will it be a Task Pane App (which can target all Office Applications), Content App (Excel only), or Mail App (Outlook only)?
The targeting decision is pretty easy: which of the Office Applications do we wish to act as a host for our App (Word, Excel, etc.)? While it is possible to write an App that supports more than one Office Application, it seems that in most cases we’d probably be targeting a particular Office Application.
So what is the difference between a Task Pane App and a Content App? Well, reviewing the images below shows some obvious differences:
Aside from the obvious visual differences, I think there’s a more direct statement on why you’d use one vs. the other: Task Pane Apps are about having a user interface that allows a simple request/response model where the user tells us what they want to accomplish; Content Apps and Mail Apps, on the other hand, are about extending the contents of the document in a way that would not ordinarily be possible.
Anatomy of an Office App
In Visual Studio 2012 we can create a project using the App for Office 2013 template (you’ll probably want to do this alongside reading the blog so you’ll have the code to review)…
Then, we decide what Office products to target as well as the kind of App we wish to build. In this case, we’re building a Task Pane App for Excel.
Looking at the Solution Explorer, we see a configuration that is similar to a cloud-hosted SharePoint App: The App itself, which is nothing more than a manifest describing basic attributes of the App, and an App Web, which hosts the content and code that will be presented in the context of the hosting Office Application.
In the “out of the box” code there are a few lines of particular importance:
- In home.js the “Office.initialize” code must be run on each page request
- “Office.context.document” gives us programmatic access to the document in the hosting Office Application. If we fail to run initialize in the previous step, then use of this property will fail.
- “CoercionType” is used to describe the type of data we’re manipulating. The most common types are “Text” for simple string data, and “Matrix” for a grid of data in Excel.
- “getSelectedDataAsync()” and the companion method “setSelectedDataAsync()” are used to read and write the contents of the document in the hosting Office Application.
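Putting those pieces together, here is a minimal sketch of the out-of-the-box pattern. Office.js is only available inside a hosting Office Application, so that part is guarded; the matrixToCsv helper is plain JavaScript I’ve added for illustration and is not part of the template:

```javascript
// Plain helper (hypothetical): flatten a "Matrix" result (an array of row
// arrays, as returned for Excel ranges) into CSV text.
function matrixToCsv(matrix) {
  return matrix.map(function (row) { return row.join(","); }).join("\n");
}

// Office.js is only defined when hosted inside an Office Application.
if (typeof Office !== "undefined") {
  // Office.initialize must run on each page request before touching the
  // document.
  Office.initialize = function () {
    Office.context.document.getSelectedDataAsync(
      Office.CoercionType.Matrix, // coerce the selection to a grid of data
      function (result) {
        if (result.status === Office.AsyncResultStatus.Succeeded) {
          console.log(matrixToCsv(result.value));
        }
      }
    );
  };
}
```

The companion write path is symmetric: pass the coercion type and the new value to setSelectedDataAsync() instead.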
As interesting as it might be to manipulate an Office document using this toolset, the real power comes when we integrate with other back-end systems. For example, imagine users responding to an email whose embedded data goes directly to a SQL database. Or, think about using data that has been entered into SharePoint to populate a Word, Excel, or PowerPoint document that is then disseminated to interested audiences. In a later blog post we’ll look at a more detailed example that allows us to read data from SharePoint.
ALL MY TASKS ACROSS PROJECTS
One of the interesting features of SharePoint 2013 is the introduction of the My Tasks feature.
It addresses one of the more common problems with SharePoint: you’ve been assigned various tasks across many different SharePoint sites, and now you have to keep up with all of them. My Tasks consolidates the tasks assigned to you in one simple view and saves you the tedium of visiting each site containing assigned tasks.
So, what if you’re in a company that is slow to upgrade to SharePoint 2013?
Well, it turns out there’s a clever search technique that you can implement in SharePoint 2010 that will give you similar results.
ASKING THE RIGHT QUESTION THE RIGHT WAY
How many times have you asked a question and gotten an unsatisfactory answer, only to rephrase the question and get what you’re looking for?
Well, that can also be the case with SharePoint. We can use the SharePoint search facility to locate tasks by simply using the query term “tasks”. However, doing so will probably produce results that are too broad:
Now let’s ask the question again, but be a bit more specific about what we want using the following query:

ContentClass:STS_ListItem_Tasks
Notice that this produces a startlingly different result…
Depending on your technical background, this query either makes total sense or looks like a foreign language. Either way, at its core it is a simple request: “SharePoint, please only show me items in a task list, and nothing else.”
Great, we’ve narrowed our search to tasks only. But the real goal was to find only the tasks assigned to a certain person (that person being “me”, of course).
It would be nice at this point if SharePoint included a keyword that refers to the current user (such as [Me] when creating a view) but sadly that is not the case. However, it’s easy enough for us to ask for items assigned to a particular person using the following query (in this case we’re assuming the user’s name is Jim Shorts):
ContentClass:STS_ListItem_Tasks assignedto:"jim shorts"
Simply put, “SharePoint, only show me tasks, and only show me the ones assigned to Jim Shorts.”
Where do we go from here?
If you look carefully, you’ll notice that the end of the address for the results page for this search looks something like this:
Simply put, the query details get appended into the URL. Now then, how do you run this query again to see your latest tasks? Well, just add the URL to your browser’s favorites list and then click the link anytime you want to see your latest tasks.
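Because the query rides along in the URL, you could even build the favorite link programmatically. This sketch assumes a typical results-page path; the site URL, page name, and helper name are all placeholders for your own search center:

```javascript
// Hypothetical helper: build a bookmarkable "my tasks" search URL for a
// SharePoint 2010 search center. The results.aspx page name is an
// assumption; substitute your search center's actual results page.
function buildTaskSearchUrl(searchCenterUrl, userName) {
  var query = 'ContentClass:STS_ListItem_Tasks assignedto:"' + userName + '"';
  // The query string goes in the "k" (keyword) parameter, URL-encoded.
  return searchCenterUrl + "/results.aspx?k=" + encodeURIComponent(query);
}
```

For example, buildTaskSearchUrl("http://intranet/search", "jim shorts") yields a link you can drop straight into your favorites.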
CONGRATULATIONS, YOU’VE JUST CREATED A SEARCH-DRIVEN APPLICATION
Want to learn more about what SharePoint Search and search-driven applications can do for you?
If so, please attend our upcoming webinar entitled Enterprise SharePoint Search. You’ll see even more about search-driven applications as well as learn about how ThreeWill can help you get the most out of SharePoint search.
SharePoint 2013 represents Microsoft’s first cloud-focused release of the SharePoint product line, along with a complete rework of the application development architecture. As businesses consider a move to SharePoint 2013 they will have to make decisions about whether and/or how to embrace the new cloud environment while maintaining their investment in existing SharePoint customization. This webinar delivers a broad overview of the new environment with the goal of kick-starting the decision making process.
IT Leaders, IT Resources, Business Stakeholders
Thursday, July 18th at 3PM EST
- Office 365 and SharePoint Online
- New features
- Cloud and on-premises capabilities
- Demonstration: SharePoint Online
- SharePoint 2013 App Model
- Deployment models
- Demonstration: “Hello World”
- SharePoint 2013 Security
- User and App identity
- Demonstration: Application identity
- Hybrid Environments
- Business Connectivity Services
- Demonstration: Using hybrid search
I just finished up Microsoft SharePoint 2013 App Development and thought I’d share a few high level points that persist with me.
1. Custom app dev in SharePoint 2007 and farm solutions in SharePoint 2010 followed the same path that Microsoft’s own developers used for customizing SharePoint.
As developers, we could find dozens of examples of how to perform some custom app dev, usually by taking a look at how Microsoft themselves applied that same kind of customization. Following in their footsteps not only helped us find the right approach, but it also gave us an early indication of how well the solution approach would scale and otherwise hold up to enterprise standards. Starting with SharePoint 2010 sandboxed solutions, they started sending us down our own path. And as we found, this path has not proven robust at all, and was in fact deprecated in SharePoint 2013. Now we are sent down yet another path. I like the new path, and I can see why it makes sense, but I’m concerned that it hasn’t been proven in the field, and we can’t assume good behavior because the app approaches (there are three of them) are not used by the SharePoint 2013 product itself.
2. I envision more time will be spent on architecture design and POCs than experienced SharePoint developers have spent in the past.
Once we have more experience with SharePoint apps, we’ll settle into some common patterns, but until then I imagine a matrix of pros and cons for solution architecture as we decide among:
- Autohosted app: app hosted in Azure and automatically provisioned
  - Server-side .NET client object model or REST service access to SharePoint
- Provider-hosted app: app hosted somewhere other than Azure
  - Server-side .NET client object model or REST service access to SharePoint
- Remote event receivers (can’t wait to make use of these!)
  - Server-side .NET client object model or REST service access to SharePoint
4. Server-side authentication back to SharePoint requires extra steps for autohosted and provider-hosted apps where SharePoint 2013 is hosted on-premises.
In summary, configuring a high-trust app requires that the “app” and SharePoint exchange a certificate used for authentication.
5. For those looking to get up to speed on SharePoint 2013 apps, I highly recommend this book as your first step.
At just 171 pages, it is a quick read and is packed with the right detail to get you started. Moving on from there, check out these great SharePoint Developer Training Videos from Microsoft.
Interested in learning more about SharePoint 2013? Join us for a SharePoint 2013 bootcamp in June.
So much SharePoint, so little time…
I didn’t get to see all of the sessions at the SharePoint Conference 2012 (SPC12) that I was interested in. I was quadruple-booked for every one, so there are plenty more (recorded) sessions that I want to watch, not to mention digging in and getting dirty.
Regardless, here are my initial impressions.
I absolutely love some new usability features that I have seen in SP2013:
- Minimal Download Strategy – by default when you navigate from page to page within a SharePoint site, very small changes are downloaded instead of the entire page. For example, the quick nav and global nav don’t change and are not part of the HTTP response. The result is a much faster and smoother experience for the user. I think this is really going to help people “like” (dare I say “love”) navigating around in SharePoint.
- Focus on Content – a simple icon is available which removes the navigation elements (quick nav and global nav) and just shows the body of the page. This should be very handy for when you are viewing lists with a lot of columns and rows.
- Drag and Drop – yes, you can drag and drop documents into a document library.
Yes, apps are for real. It’s not clear to me how quickly and how much they will take off, but I think you should consider creating an app anytime you are doing SharePoint development. The good news is that these are not taking away any options for developers. We can still do Farm Solutions. Sandboxed solutions are even an option, but not recommended (oopsies). Some great aspects of apps are:
- There is a public store hosted by Microsoft but you can also have an on-prem catalog for apps specific to a company.
- Apps use underlying API and infrastructure to allow for server-to-server communication within apps as well as client side cross-site messaging (e.g., resize the IFrame).
OK, this doesn’t excite users, but it should excite developers, especially developers that have connected SharePoint with other systems. ThreeWill does a lot of this, and it should make our lives easier if the other side handles OAuth as well.
Now SharePoint workflows are built on top of Windows Workflow Foundation 4 and are hosted outside of SharePoint. I think the biggest deal here, though, is that SharePoint Designer workflows have a lot more power. They can have much more sophisticated capabilities with the ability to loop and jump to different areas of the workflow. I see this being very useful at ThreeWill.
I hadn’t heard any buzz on search going in, but there were around 26 sessions on search, and there are plenty of things to be interested in. The big news is that FAST is now part of core search. This will cause some pain because certain things are deprecated, but I think the positives will well outweigh the negatives. A few things to note include:
- Query Rules – these give you a lot of power on how search results are presented. I think this will open up the possibilities for search-based applications.
- Display Templates – this should make it a lot easier and less intrusive to provide a different display for custom content returned from search results. These are very powerful for displaying list content outside of search as well.
- Security Trimming (something near and dear to my heart) – you can now have a pre-trimmer in addition to a post-trimmer. The post-trimmer is similar to ISecurityTrimmer2 in SharePoint 2010. The pre-trimmer will let you provide claims information and map it to claims that can now come in from the BCS crawl. A much improved and faster approach than the post-trimmer (if feasible for your situation).
- Access – it’s baaaack! … or maybe it never left. But this appears to be more useful for real applications. I can’t believe I just wrote that, but the fact that it uses actual SQL Server tables looks promising.
- Remote debugging for SharePoint!
- Napa Office 365 Developer Tools – looks pretty interesting; it’s a browser based development environment for O365.
- If I see one more Bing Maps demo I think I will throw up.
What did you take away?
Feel free to share some of your takeaways from the conference in the comments section – I’d love to hear them.