Hope everyone in the US had a great Thanksgiving holiday and is getting ready for the holiday season around the world. I have been enjoying time with my family in New York, including giving my niece and nephew a brand new Xbox One. They have been filling their free time since with many waves of Plants vs. Zombies: Garden Warfare. Even I am getting into it now. I am guessing there will be many multiplayer games on Xbox Live with them. Oh darn!
One of the things I forgot to blog about earlier is one of the best online events for IT Pros. The Azure IaaS for IT Pros Online Event, hosted by Rick Claus, is giving IT Professionals some great information about Azure, specifically its infrastructure as a service (IaaS) offerings. Azure keeps adding incredible services for its users, including some recent additions:
Getting an opportunity to play with these services has gotten me excited about what Azure can offer IT Professionals. One of the best uses of Azure for IT Pros is building "proof of concept" environments. You can prove how technologies work without tying up any of your current on-prem hardware. Another easy use is running development or QA testing environments. In both of these cases, you can turn the environment on and off as you need it and only be billed for the time it is running.
But I have buried the lede here, folks. On Thursday, at 12:00 Pacific time, I will be talking about SharePoint on Azure. Yup, that's right … I will be talking about how to run SharePoint on Azure. There are some tweaks and best practices I will cover for SharePoint on Azure IaaS. I will also go over the new Cloud App Model (CAM) and how you can use Azure with it. Lastly, there are a few other things Azure can help with around SharePoint. I am including my introduction video here:
I recommend heading over to http://aka.ms/levelupazure to watch the recorded sessions and to see me live tomorrow. If you can't see me live, you can always catch up on the recordings.
As you might have read, I moved my website onto Azure a couple of weeks ago. I have not looked back at all. Well, okay. Two events made me rethink my strategy around hosting on Azure. One was my own doing and the other is a conflict between DotNetNuke and the Azure SQL model. Both were resolved and I am again 100% on hosting via Azure, until the next problem rears its ugly head.
Let's review how I got to today. First, I started out on Azure with my DotNetNuke instance using the DotNetNuke Azure Accelerator. It was a miserable failure and I was floundering. I also had other issues going on that night with various technologies and decided to skip it. Then, I found out how easy setting up an Azure-hosted DotNetNuke CMS could be. Success!
Let's move on to last Saturday, March 2nd. I decided to do some re-configuring of the website on Azure. First thing, I reviewed my account; my bandwidth and processing needs were pushing the limits of the free account, so I had to change from the "Free" web hosting instance to the "Shared" model. On top of that change, I wanted the URL to be my own website's URL and not the azurewebsites.net version that is created when you set up a website on Azure. Lastly, I wanted to use a publishing system so I could upload changes to my site when updates came out. In my case, the only one I had some experience with (and not very much, as I found out) was Git, but I did not want to tie my Azure site to GitHub, so I selected local Git on my desktop. With all of these actions, I pulled out the gun, filled and loaded the magazine, chambered a round, and pointed it at my foot.
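For anyone curious what that local Git publishing setup looks like, here is a minimal sketch. The site name, user, and remote URL below are placeholders I made up, not my real values; Azure shows the actual Git endpoint in the portal once local Git publishing is enabled.

```shell
# Minimal sketch of local Git publishing; all names and URLs are placeholders.
# Work in a throwaway directory so nothing real is touched.
tmp=$(mktemp -d)
cd "$tmp"

# 1. Turn the site folder into a Git repository.
git init -q my-site
cd my-site
git config user.email "demo@example.com"
git config user.name "Demo"
echo "<html><body>Hello</body></html>" > index.html
git add .
git commit -qm "Initial site content"

# 2. Point a remote at the Git endpoint Azure exposes for the website.
git remote add azure "https://deployuser@my-site.scm.azurewebsites.net/my-site.git"
git remote -v

# 3. A real deployment would then be:  git push azure master
```

Each push to that remote triggers a redeploy of the site, which is exactly the behavior I later came to regret.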
Sunday morning rolls around and I get a text message page at 6:30 am; my Azure website is offline. HUH? How can it be offline? Did Azure have another one of their illustrious outages? Looking at the site on my phone, I got a 502 error. Ummmm … "Bad Gateway"??? Thinking my DNS was having issues, I went to the default Azure website URL and got slapped with another 502 error. My site was down! Jumping out of bed, I fumbled over to my computer and started to look at the issue. I pulled up the Azure Portal, my site, my monitoring services, and my VM-hosted mail server to get an external perspective on the issue. No matter how many times I pressed SHIFT-F5, the site was down. I checked all browsers and still the same. I had the monitoring service check from all of its servers; still down. Looking through the Azure portal, nothing seemed to be misconfigured. Checking the Azure DB, no issues were seen there. The last check was looking at the web server logs from Azure; the logs did not show anyone visiting the site. Huh? How could my attempts from my phone, home computer, and hosted VM not register in the logs? I restarted the website services and still nothing in the logs. One more SHIFT-F5 and "Ta da!", website functional. HUH? BLAM! That hurt.
I don't like having mysteries. One of the toughest things for me in my IT world is to have something fix itself without knowing the root cause. Many of you might remember IBM's commercials around the "Self-Healing Server Pixie Dust". I mock these commercials because some parts of servers can fix themselves but others cannot. System Admins are still a necessary group of people no matter what technologies you add to hardware or software. Giving those professionals the information they need to perform good root cause analysis is more important than self-healing. Yet, this is what I was looking at. Nothing in the logs, the stats, nor the code told me what was wrong. Nothing like this had happened during the 7 days I was hosting on the "Free" model. Being a good IT Operations person, I started rolling back my changes. Doing the easy stuff first, I reversed the DNS work and then went to breakfast. During my meal, I got 10 pages that my site was up, then down, then up, then … well, you get the idea. After breakfast, I went home and switched the site back to the "Free" model. I waited for any changes and was met with similar pages, watching my site go from non-responsive to responsive. My final thought was that the problem must be in the Git deployment system.
The story turns very interesting at this point. Reviewing the settings for Azure, there is no way for an Azure administrator to remove a deployment system from a website. No mechanism exists in the Azure Portal to change a deployment system once one is selected. I was stuck with an unstable site and no way to revert what I did. It seems Azure's method is to just recreate the site. I copied the code from my Azure website to my local computer, deleted the Azure website, created a new one in Azure, and copied the code back from my desktop. Thanks to many factors, the file copying seemed to take hours though, in reality, it took 35 minutes for the download and upload combined. I clicked on the link for the new site and … ".NET ERROR". A huge sigh and facepalm later, I delved into what was going on. DotNetNuke was missing key files; my copy over the internet had not included them. Instead of trying to figure out where I went wrong, I reviewed what I had: an Azure website with bad code and an Azure SQL DB with my data. To make it easy on myself, I decided to just build a new DotNetNuke installation from scratch with a new DB, then recopy my blog data back in to complete my work. After approximately 2 hours of work, my site was back up and running again on the Azure URL. Success!
Going over all of the changes I wanted to make, I decided to separate them out and leave each one in place for 24 hours to verify that it would not affect my site. The critical change I needed to make was moving the website from the "Free" mode to the "Shared" mode; Azure would block the site if I did not do this because I was over my resource limits. This was a "no brainer" for me, so it was my first change. I re-enabled my redirect from the server that hosted this site before and all was working again. Monday night rolls around and all has been stable. My next change, pointing the URL to my domain name, was prepped and executed. My site was stable for the rest of the night and into the next day. My analysis was correct: the configuration of Git as a "publishing" system was the cause of my outages on Sunday. Tuesday night led to a lot of review of Azure web publishing. All of the information I was able to find led me to my final conclusion: I am not developing my own code and do not need a publishing system. None of the systems would help me and only looked to make things more difficult. In its current mode, I can FTP files up and down from the site, which is good enough for me.
Let's move on to Wednesday. I received a notice from DotNetNuke that they had released 7.0.4 of their system and my site was currently running 7.0.3. I should upgrade it to make sure I am safe, secure, and stable, right? As I started to download the code for the update, I got the gun back out again, filled and loaded that magazine, chambered a round, and got it aimed right next to the hole I put through my foot on Sunday. Using FTP, I uploaded the update code and pulled up the upgrade installation page. I waited for the upgrade to complete while working through my e-mail. When it completed, I turned and saw "Completed with errors". BLAM! I have got to stop shooting myself like this.
One of the modern advantages of DotNetNuke is the logging that upgrades and installs do now. I was able to pull up the installation log and get the exact error messages from the upgrade installation: 3 SQL errors when it was processing the SQL upgrade statements. Looking at each one, the error messages were confusing to me. In two of the errors, the upgrade tried to determine if an index was in place and then remove said index to replace it with a new one. Yet, when this was performed on my Azure DB, it threw an error saying "DROP INDEX with two-part name is not supported in this version of SQL Server". How was I going to fix this? For those of you that don't know, my start in IT was as a SQL DBA and programmer. I dug out my rusty SQL skills and started digging through the database alongside the MSDN documentation for Azure SQL. In no time, I figured out what I needed to do to modify the DotNetNuke code and ran the corrected SQL statements against my Azure SQL DB. The third error was even more interesting. The DotNetNuke code wanted to verify that a default value was set for a column in one of the tables. The way this is normally done in SQL Server is to query against the sys.sysconstraints system view. The problem is that there is no sysconstraints view available in Azure SQL DB; the SQL statement that ran returned "Invalid object name 'sysconstraints'". More digging and I found my answer: Azure SQL has the newer catalog views check_constraints, default_constraints, and key_constraints available. A quick change to using the default_constraints view and I found that the desired default was in place. My upgrade was now complete and a success.
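To make the two fixes concrete, here is a hedged sketch of what the corrected statements looked like, run through sqlcmd. The server, credentials, table, and index names are placeholders I made up for illustration; the actual DotNetNuke objects and my connection details differed.

```shell
# Sketch only: server, credentials, and object names are placeholders.
sqlcmd -S myserver.database.windows.net -d MyDnnDb -U dbadmin -P '<password>' <<'SQL'
-- The upgrade script used the legacy two-part DROP INDEX syntax, which
-- Azure SQL rejected:
--   DROP INDEX dbo.Users.IX_Users_Email;
-- The supported form names the index and the table separately:
DROP INDEX IX_Users_Email ON dbo.Users;

-- sysconstraints does not exist in Azure SQL; the catalog views do.
-- Verifying a column default via sys.default_constraints instead:
SELECT t.name  AS table_name,
       c.name  AS column_name,
       dc.name AS constraint_name,
       dc.definition
FROM sys.default_constraints AS dc
JOIN sys.tables  AS t ON dc.parent_object_id = t.object_id
JOIN sys.columns AS c ON dc.parent_object_id = c.object_id
                     AND dc.parent_column_id = c.column_id;
SQL
```

If the query returns a row for the column in question, the default is in place and the upgrade step can be considered satisfied.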
As you can see, I did all of the damage myself; I cannot blame Azure for it. My impatience to not read all the way through and just get things going caused my own downtimes. I have no doubt my thrifty behavior will also be my downfall when Azure has any sort of outage in the US West Websites or SQL DB layers. If I want a website that will not go down, I need to create and pay for the Azure infrastructure to do that. For now, I am super happy with my decision. To the cloud!
Are you thinking about moving your website into a cloud provider? If not, what is stopping you from doing that? Post your questions and comments below.
Well, I finally made the switch. As many of you can see in the URL, my blog has moved from my personal servers onto the Azure fabric. It is something I had wanted to do for a while and never quite got around to finishing until now. It is not totally done, but I am happy with the interim results.
For those that don't know, Azure offers easy web hosting in their cloud with CMS systems like WordPress and DotNetNuke. I personally use DotNetNuke and have for several years. Installation looked like it would be interesting thanks to a few projects out there like the DotNetNuke Azure Accelerator. Other blog entries and wikis talk about how to get this accomplished.
A few weeks ago, I tried to use these "recipes" and failed miserably. In the same evening, I also screwed up local installs of some test servers and thought that if I could just strike out at a bar, the evening would be complete. The process seemed to be fraught with missing settings, steps that did not work as advertised, and some complications that I later found out were caused by Azure issues.
I started down the wizard path to creating my new Azure website using the Gallery image of DotNetNuke Community edition 7.0.3. Clicking the "next" arrow brought me to the initial configuration screen, where I put in my Azure URL, told it to create a new DB for the project, and chose the region for hosting, West US in my case. One more screen for the DB setup on a new server, the DB username and password, and the region for the DB hosting, West US again for me, and we were off to the races. The next steps are very DNN specific, so I will not bore the majority of my readers with those details.
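For readers who prefer the command line, the same setup can be sketched with today's Azure CLI. This is an assumption-laden equivalent, not what I ran; I used the portal gallery, and every name below (resource group, plan, site, SQL server) is a made-up placeholder.

```shell
# Hypothetical CLI equivalent of the portal wizard; all names are placeholders.
# Resource group and hosting plan in West US, matching the wizard choices.
az group create --name dnn-rg --location westus
az appservice plan create --name dnn-plan --resource-group dnn-rg --sku FREE

# The website itself.
az webapp create --name my-dnn-site --resource-group dnn-rg --plan dnn-plan

# A new database server and DB in the same region, as the wizard's second
# screen does (admin username/password supplied there).
az sql server create --name dnn-sql --resource-group dnn-rg \
  --location westus --admin-user dbadmin --admin-password '<password>'
az sql db create --name DnnDb --server dnn-sql --resource-group dnn-rg
```

The CMS-specific setup (installing the DotNetNuke package itself) would still happen on top of this, just as the DNN-specific wizard steps did in the portal.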
Once all was set up, I could browse to the Azure base site URL and look at my new DNN installation. Within 10 minutes, I had my beloved DotNetNuke 7.0.3 running in the Azure cloud without any major work on my part. I was able to install my favorite blogging module, Live Blog from Mandeeps, and thanks to my SQL knowledge, port this blog over from my personal server to the Azure site. A quick set of redirections and here you are with my new version of my blog. Now, I just need to get more content up here…
Have you started using Azure for hosting of your sites? If not, why not give it a try with a 90-day trial? Sign up at http://www.windowsazure.com/en-us/pricing/free-trial/
I got a lucky offer from Karuana Gatimu to help Carrie Doring deliver her presentation at SharePoint Saturday in Redmond. She was set to speak on "The Future of the Social Collaboration Experience - A platform and community overview for beginners". I have been looking for ways to speak and present more and had some recent opportunities with the MS IT Institute and MS IT Showcase teams. This was a different kind of opportunity for me.
Luckily, much of what Carrie and I were going to talk about was what we were actually using to manage our preparation for the presentation. Karuana sent us a copy of the deck. I stored that copy on my OneDrive for Business and shared it out to Carrie. She and I had a couple of chats on Lync to sync up on preparation. We met together to work on it in the web version of PowerPoint so we both could make updates at the same time. It was great to use this as an example as we walked through our presentation.
The audience was awesome and had some great questions. They were quite open to the information, including the community involvement information I gave them. I was able to squeeze in a plug for #TheKrewe of TechEd when I was talking about community. After the presentation was over, I even got to spend some time personally with a couple of the audience members to talk specifics about their situations.
I had a blast and had to be ushered out by the following presenter. I think that I will be doing more of this. To quote the old Life commercials ... "I think he likes it!"
By the way, here is a link to the deck as a PDF file that can be downloaded by anyone.
Have you ever started troubleshooting a performance issue with an application? Nothing can be more frustrating than trying to resolve performance issues. Recently, I have been doing this on a SharePoint 2013 Server installation. Pages were taking a while, up to 5 to 6 seconds, to render properly. Sometimes, it was taking up to 20 seconds. I have been making adjustments to the system as well as warming up the application pools. Much of this I will post here in the future.
I listen to netcasts/podcasts, and thanks to Todd Klindt's Netcast, specifically Episode 152 - Splat Bang Click Hieroglyph, I found a possible answer for some of the performance issues I was seeing. He brought up Microsoft Support KB 952167, "Certain folders may have to be excluded from antivirus scanning when you use file-level antivirus software in SharePoint". I had totally forgotten that AV scanning of live systems like Exchange, SharePoint, or SQL can make them crawl at times. It heavily affects PACS (picture archiving and communication) systems for radiology imaging in healthcare, too. I should have known better. I went to the AV System Admin and asked to have these exceptions put in. The result was some improvement in page rendering. There's more to do, but this did help.
I got a great opportunity from Denise Begley of the Microsoft TechEd planning team to talk about my session selection process. She wanted to hear from several returning alumni (including Harjit Dhaliwal and Michael Bender) about their processes, to help brand new attendees and other readers of the official Microsoft TechEd Blog. I felt it was an honor and wanted to give back to the TechEd Community that has taken me in by offering my view on schedule building. Head on over to the Microsoft TechEd Blog to read the other posts with Harjit's and Michael's processes. Here is my full process:
Going into TechEd 2014 in Houston, my selection process has changed due to many factors. In the past, I spent a lot of time looking through the schedule and trying to select which sessions I wanted to attend. To give a little bit of reference here, I have attended the last 3 TechEd North America conferences: Atlanta, Orlando, and New Orleans.
In the first year, at Atlanta, I tried to get to a session in every time slot. I listed one session per time period in my schedule and spent my time running, as my sessions were so spread out. If you stayed in one vertical of sessions, you didn't have to go too far, but I was choosing sessions from all over. By the time I made it to many sessions, the rooms were full with no real empty seats to be found. I spent a good amount of time sitting on the floor with my back to the wall, shooting pictures of the slides and trying to scribble a ton of notes.
I learned many lessons from Atlanta and had a different plan of attack for Orlando. I also had the notion of taking more tests for my certifications. The sessions I went to were more focused on technologies I was investing in, and I spent a lot of time in exam preps and taking exams. On top of that, I was doing more meetings in the hallways, the Alumni area, and the Expo floor. I knew I would be able to get videos and slide decks from all the sessions when I got home. The focused sessions got me great content and I was very happy with the results of the sessions, just not with my test taking.
For New Orleans, I went through the schedule planner and blocked out the specific sessions I did not want to miss. Then, I went through it finding the presenters I really like to listen to. Lastly, I filled in my schedule with other sessions that caught my interest. In OneNote, I noted which items were "had to attend", "nice to attend", and "just filler", and I used this to block out time to meet people and stroll the Expo floor. I also planned multiple sessions per time block in case I couldn't get to a room due to logistics or the room being full. All in all, I was impressed with the strategy.
So, you have heard my thoughts about prior years. Now, I will let you know about this year's TechEd. I am looking at some changes again for a couple of reasons:
So, starting my journey to Houston, I began my session planning by finding all the presenters I want to hear. Yup, that's right, I am starting with the speakers. I looked for sessions from Rick Claus, Joey Snow, Pierre Roman, Mark Minasi, Mark Russinovich, Jessica DeVita, and Ed Horley. Some of these are friends I want to support who have great material to present; others are presenters I want to see in person. Next, I went through by topic/product and pulled up items like SharePoint, IaaS, System Center, Windows Azure, and more. Once that filled up my schedule, I looked at timeframes with zero or one session. In those blocks, I looked at the specific offerings at that time. The last thing in my schedule build is the Hands On Labs. In years past, I took little or no advantage of them on-site. This year, I might look for more HOL work to try and learn some things that way. I will be keeping my schedule flexible as well, but the items marked as primary are going to get me there.
Here are some of the sessions I have selected:
OFC-B220 - Stop, Collaborate, and Listen - Jessica DeVita
Jessica is talking about how to think about collaboration even before acquiring tools to do that collaboration. As she says in the description, "Before you start down the path of selecting a tool, you have to determine your organizational readiness and understand the what, why, and how of collaboration systems." With my current job in SharePoint, this is food for thought.
DCIM-B325/326 - The Real-World Guide to Upgrading Your IT Skills AND Your Infrastructure (parts 1 & 2) - Rick Claus, Joey Snow
Are you an IT Pro scared about what "The Cloud" really means for you? As they say in their description, "Two self-proclaimed “Server Huggers” will take you along their journey of how they overcame their apprehension of Cloud technologies to level up their IT Skills. In other words bringing clarity to the role of the IT Professional in a cloud world." I am starting to see where "The Cloud" integrates into my future as an IT Pro but think more folks can get a lot out of this session.
DCIM-B359 - TWC: Pass-the-Hash: How Attackers Spread and How to Stop Them - Mark Russinovich, Nathan Ide
I have heard about this presentation, as it was given at the RSA Conference earlier, and I want to see it myself. Security is a theme for IT Pros going forward, regardless of The Cloud or on-prem. Mark's talks at TechEd always give you the best insights into security.
WIN-B354 - Case of the Unexplained: Troubleshooting with Mark Russinovich
This session is always full! This session is always fun! I got to attend my first year in Atlanta, and Mark returns every year with new cases of the unexplained. He walks through them, showing how Sysinternals tools were used to find either bugs in products or malware hidden away on user systems. While it is a full house every year, try to get into this one or watch it later.
OFC-B333 - Microsoft SharePoint on Microsoft Azure VM and Virtual Networks: How the Cloud (IaaS) Changed the Way I Work and Improved Customer Productivity - Patrick Heyde
At the same time Mark has his Case of the Unexplained, Patrick is giving his session on SharePoint on Azure IaaS. Knowing that Mark's session will be full, I am giving major consideration to skipping it this year to hear Patrick talk about this topic. As a SharePoint Engineer, I need to understand this as an option for hosting SharePoint for a group/company in an isolated space while the servers are in the cloud.
DCIM-B373 - How IPv6 Impacts Private Cloud - Ed Horley
Sign up for this session now! Go on … go put it in your schedule. IPv6 is coming for all of us and Ed is here to help all Windows Systems Admins understand its impact on us. Ed has written the book on IPv6 for Windows Systems Admins. This is going to be coming sooner than you might imagine so pop in and learn all you can.
Those are a few of my selected sessions. How are you planning your schedule for TechEd in Houston this year? Looking forward to seeing everyone there? I sure am. See you in Houston!
Change is something you cannot stop …
Change impacts you in multiple ways …
Change is just a part of the Information Technology industry …
Change is something IT Pros must embrace or be left behind …
Just over a year ago, I went through a pretty big change. I left the company I was working for to head back to Microsoft to join the on-prem SharePoint Custom Portals team within Microsoft IT. Looking back over that year, I have seen many changes in my job, my team, and Microsoft IT in general. Right now, we are in the middle of a major change around Modern Engineering practices within Microsoft IT. On top of that, SharePoint is undergoing a massive change across Office 365 and SharePoint on-prem, with new releases slated for later this year. Change has been a constant companion, and one should not expect anything different, especially in the world of IT.
Many people have talked about the change in the world of IT to allow for rapid evolution and growth of systems and teams. One of the names this has been given is "DevOps". Many may argue about what DevOps is and is not, but the view we are taking is one of the better ones I have seen. DevOps is not about tools. DevOps is not about process. DevOps is about philosophy. It is about gaining agility and growth by having all aspects of IT working together, allowing evolution cycles to deliver the best services possible for the customer or user. It is not about developers setting up servers like operations teams. It is not about operations teams making changes to code bases. It is the feature team working together in the best possible manner to deliver the best products while focusing on the customers or users. That means that focusing on the live site and user experience is the key to delivery in this new world. Some are scared by this change, but I see it as where IT needs to focus. Money spent on services should be spent on the best possible experiences for the users.
As we see this change, SharePoint continues to drive toward a major shift in its own paradigm. SharePoint has been slowly moving from on-prem to the cloud with Office 365. While SharePoint is still entrenched in the on-prem world, new features are being created for Office 365 that are not making their way back to on-prem. Things like Delve and Groups are being built on the Office 365 infrastructure and are likely never going to find their way back to the on-prem version. We might see some more feature parity from Office 365 to on-prem, but the Office 365 version will always have more features than on-prem due to the integration that can be done with Exchange and Lync, as well as its scale and Azure technologies. It shouldn't be a shock to see that products are changing. To see just how much SharePoint and all of Office 365 are changing, head on over to http://roadmap.office.com.
Now, in my previous roles, I didn't get to dive into the technology, as my job was to be the Service Owner or Director of IT. In the Microsoft IT world, the Service Owner is the "buck stops with me" person for the service. They work with partner teams and customers to ensure the service is delivered to the users. They meet with all the various teams, provide data on the service, talk about improvements to the service, and represent the team at all levels. Having been a Service Owner in MSIT earlier, I knew the role and the requirements of the position. In many ways, the Service Owner is the human shield between much of the IT bureaucracy and the team they represent. As I said, I have performed this role, both at Microsoft and as Director of IT. It can be a bit of a thankless position; if you are doing a good job, things just work.
So let me talk about why I came back to work at Microsoft. One of the things I was offered by returning to Microsoft was a very technical role as a Senior Service Engineer. My time would be spent working on the technology again. I got to focus on and play with technology over the past year. Part of what I focused on was SharePoint 2013 infrastructure upgrades, running SharePoint on Azure, and preparing for the next version of SharePoint. I was not a Service Owner anymore. This was freeing for me and I have loved the past year. On top of this, I got to work with and for a friend of mine from my first stint with Microsoft. She was a good Service Owner in that she was the human shield allowing me to get work done. You might be wondering where all of this is going … Back in January, my Service Owner gave her notice of intent to leave at the end of the month.
I am now the Service Owner as well as the Senior Service Engineer. I have been working with my peer to determine our roadmap and the future of our service. Being both the engineer and the service owner, my time is taken up with more meetings, data gathering, and worrying about live site issues than it was as just the engineer.
I want to thank my friend, Aliya Kahn, for being that human shield for the past year. In just 4 weeks of being in this position, I can see exactly what she did for me and the team. Service ownership is very tough and can take a toll on a person professionally and personally. I will miss her, her jokes, and her optimism on a daily basis, but change brings opportunities. Here's to those new opportunities.
Many of you might be wondering why I haven't been on social media much as of late. The new responsibilities of the Service Owner position have been keeping me very busy. I am still trying to figure out my schedule and keep my sanity, but I will be back online soon.