So how many of you upgraded your machines to Windows 10 Technical Preview? Go ahead ... raise your hands. I will! My work desktop is a lovely Dell Precision T7610 with a Core i7 processor, 32 GB of RAM, 2 TB of storage and nice dual monitors for me to spread my work out onto. The Windows 10 Technical Preview program has been great on this rig and I can't wait to see what the team will do next.
However, when I came in this Monday and started to do some work on the Hyper-V virtual machines that I host on this system, none of my VMs could get out of the box. With 32 GB of RAM and dual NICs, I have set up one NIC for all of my VMs to connect through while the other NIC serves the main OS. I checked and the cables were all plugged in. The system looked happy. What was I to do?
First, I noticed that the virtual NIC on the VMs showed that the cable was "disconnected". Now, I am a bit slow on the uptake sometimes, but how can a virtual NIC connected to a virtual switch be disconnected? Looking through my system, I couldn't find a virtual cable that I had to plug in. Maybe that will be something for HoloLens, but I couldn't tell you because I haven't gotten my rig yet. So, that indicator gave me my first suspicion about the connectivity.
Second, I went into the NIC settings on the VM host itself to validate that the physical NIC was indeed connected to the network. Of course, it was happy and said that all should be well. This led me to think that the virtual switch was not working right.
Third, I went into the VM settings for the NICs and "removed" the cable by setting the connection to "Not Connected" and applying that setting. After doing that, I put the setting back to the virtual switch that I use for my network connectivity ... still no go. Something told me this was the switch.
Taking stock of my work so far, I set all VMs using this virtual switch to "Not Connected", thus unplugging them from the virtual switch. I removed the virtual switch and saved the settings. Then, I created a new virtual switch connected to that NIC, validating that all settings worked. Once the virtual switch was recreated, I "plugged" the virtual machines back in. Logging in to each machine, I verified that all connectivity had indeed returned and they were all happy. Good thing too, as I needed to test some RegEx expressions for the IIS URL Rewrite tool.
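For anyone who would rather script this than click through Hyper-V Manager, here is a rough PowerShell sketch of the same steps using the Hyper-V module; the switch, adapter and VM names are placeholders for my own setup, so swap in your own.

```powershell
# Placeholders for my setup: change these to match yours.
$switchName  = "External-VMs"     # the virtual switch that stopped passing traffic
$adapterName = "Ethernet 2"       # the physical NIC dedicated to the VMs

# Find every VM whose network adapter is plugged into the suspect switch.
$vmNames = Get-VM |
    Where-Object { (Get-VMNetworkAdapter -VM $_).SwitchName -contains $switchName } |
    Select-Object -ExpandProperty Name

# "Unplug" the VMs, then remove and recreate the external switch on the dedicated NIC.
Disconnect-VMNetworkAdapter -VMName $vmNames
Remove-VMSwitch -Name $switchName -Force
New-VMSwitch -Name $switchName -NetAdapterName $adapterName -AllowManagementOS $false

# "Plug" the VMs back in, then verify connectivity from inside each guest.
Connect-VMNetworkAdapter -VMName $vmNames -SwitchName $switchName
```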
I hope this helps you in your troubleshooting and maybe gives you a quick resolution to check prior to more troubleshooting.
Another monthly "Patch Tuesday" just passed by and, like many folks, I applied the updates to all of my systems. I also awoke Wednesday and Thursday to machines that had been rebooted because Windows patched itself. In both cases, I was upset as I had open documents that I had not saved. To make patching easier, many software packages are moving to background updating with no notice to users. I question whether this is really better for users.
Everyone knows about software updates thanks to Microsoft and their Windows/Microsoft Update system. Starting out with the "Windows Update" website in the Windows 95 era, updates were delivered via the internet. Later versions of Windows integrated more tightly with the update service, to the point of the current incarnation of Windows Update built into Windows 7 and the soon-to-be-released Windows 8. Microsoft gives the user many levels of customization around the update process. One option, set as the recommended standard by Microsoft, is to install critical updates and reboot after the installation. This has caused issues for many users whose computers rebooted without warning, and users have complained about losing data. It has also pushed Microsoft to provide deep customization around its updates to help prevent data loss.
Windows 8 changes this again. Having gone through a few months of patches with my Windows 8 installations, both Consumer Preview and Release Preview, I prefer the new updater. Windows 8 performs the monthly patching using the Windows/Microsoft Update process as before. Users can customize this experience, but the reboot is the key difference: Windows 8 notifies the user that they should reboot within the next 3 days before it is done automatically. Finally, Microsoft is on the right path! The only thing better Microsoft could do is figure out how to apply the updates without requiring reboots. As the Windows NT core becomes more and more modular, this should be easier to do. Only the core elements would require the reboot while all subsystems could be restarted with new code.
Now, take a look at how Adobe, Mozilla and Google are doing their updates. Almost all of them have changed how they update their main products: Flash for Adobe, Firefox for Mozilla, and Chrome for Google. Their most current versions, as well as earlier versions of Chrome, are now set up to automatically download and install updates. If the default settings are used, all of them do this without notifying the user that anything has changed. The only way to find the current version is to look in the package's "About this product" page or screen. I have not yet heard of issues with this process, but a major concern is what happens when a bad release goes out. Users would be confused as to why their computer wasn't working. A good example of this was Cisco's firmware update for the Linksys E2700, E3500 and E4500 in late June. The update forced users off a local administrative system and onto a cloud-based one. There were issues with the cloud-based system and what information it tracked, and with no other way to manage their routers, users were given no choice, all caused by automatic updates. Cisco has since reversed course, but the episode has hurt its standing with users: many are not happy and some are even returning their units.
As a manager of IT services, this is my biggest concern and makes me unwilling to support products that update automatically in the background. Within a managed environment, unannounced changes cause many problems. Microsoft built its monthly patch cycle around this reality for enterprise environments, and it is truly designed for IT management systems. The updates are announced upon their delivery, which allows IT teams to review them and determine the risks for the organization. It also allows for testing cycles and deployment systems managed by the IT teams. The new unannounced automated updates do not allow for this.
With this movement to unannounced automated changes, some in the tech world see this as the best thing for users. One argument is that it is good for developers as products keep improving, and that it is similar to how web applications can be upgraded without user intervention. This is a bad comparison: web applications can be fully tested and conformed to "standards", while applications installed on a user's computer are more difficult. Did the software publisher check it in all configurations? This is much easier on controlled platforms like Apple's iOS and Mac OS X. With Microsoft's Windows platform and Linux-based operating systems, this cannot be done easily. In fact, the way Microsoft works with hardware providers to make Windows run on so many different configurations is absolutely amazing. I suspect that Adobe, Mozilla and Google do not do this sort of in-depth testing.
I can see automatic unannounced updates being a positive thing for consumer users, but personally I do not like them at all. I have told Adobe to inform me of Flash updates instead of just installing them. When I need Firefox, I use a version that does not update automatically, and I have stayed on IE mostly for my personal use. To my dismay, Microsoft is now going to start performing automatic updates like Chrome and Firefox. My hope is that they offer a management system for IT teams to control this process. Having worked at Microsoft, I wonder what the internal IT teams there think of this automatic update process.
Further automating the update process will keep more users up to date and improve the overall security of the internet; Microsoft showed this with the move to the monthly patch process. Currently, statistics from security sources like Kaspersky Lab show a major shift by malware writers from attacking Windows directly to using other software as the attack vector, the most popular being Adobe Flash and Oracle/Sun Java. This lets the malware folks infect more than just Windows, reaching Apple Macs and mobile devices running iOS and Google Android. The response to these threats is to automatically update those attack vectors. This helps users and increases security on the internet, but Microsoft has shown that a standard cadence can work. Adobe did try a standard cadence for updates to its products but has not been able to keep to it, given the severity of the security issues being patched as of late. Instead of trying to make it work, they are moving to the model popularized by Google and, later, Mozilla.
The downside to all of this is the platform for upgrades. Every vendor seems to need its own mechanism for monitoring for and applying new updates. Google and Mozilla both now install their own updater service that runs on the computer all the time and with administrative privileges; that is the only way for a service to run and install code without user intervention. My IT "spidey senses" go on high alert any time I hear this. Right now, many home computers are most likely running 5-10 updater services of some sort. One solution is to have the operating system provide a standard mechanism for this sort of updating. Another is to use the operating system's task scheduler to schedule checks for updates, as sketched below. One great opportunity is the CoApp project headed up by Garrett Serack (@fearthecowboy) with many contributors; it could be a single updater that all the packages could use. Some sort of standardized, single point for updates would make users' systems run cleaner and happier.
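As a rough illustration of that scheduled-task approach on Windows, something like the following would check for updates once a day without a resident service; the updater path and task name are hypothetical, not any vendor's real updater.

```powershell
# Hypothetical example: run a vendor's update checker once a day via the
# Windows Task Scheduler instead of a permanently running updater service.
$action  = New-ScheduledTaskAction -Execute "C:\Program Files\ContosoApp\UpdateCheck.exe" -Argument "/quiet"
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "ContosoApp Update Check" -Action $action -Trigger $trigger `
    -Description "Daily check for ContosoApp updates"
```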
The issue of unpatched systems on the internet is a major one for all of the computing world, but especially for IT teams and their configuration management. In my review of ITIL/ITSM management philosophies, configuration management is the most critical aspect: controlling change is how an IT team keeps a company running. It is the one area that most IT teams do not do well, and it shows. If web browsers move to these unannounced automatic updates while more companies use web tools to run their businesses, how will they verify that all of their web tools will keep working with each update? Will they see more Helpdesk calls from users confused when sites and tools don't work? What do you think?
Thanks to Mary Jo Foley of ZDNet/CNet, we have finally heard about Microsoft's licensing plans for the new version of Windows Server, Windows Server 2012 (codenamed "Windows Server 8"), to be released this fall. In her article here, she covers the crux of the licensing announcement made by Microsoft, but I want to look at it in a bit more depth.
When you review the article by Mary Jo Foley, you can start to see some of Microsoft's next plays and whom they are going after with their pricing model and their offerings. As she says in her article:
The four SKUs are Foundation (available to OEMs only); Essentials; Standard and Datacenter. The Essentials SKU is for small/mid-size businesses and is limited to 25 users. The Standard and Datacenter SKUs round out the line-up. The former Windows Server Enterprise SKU is gone from the set of offered options.
Microsoft is removing a few of the SKUs. This includes the Enterprise SKU, which was a step between Standard and Datacenter in the 2008/2008 R2 licensing model; the HPC (High Performance Computing) SKU meant for folks doing large-scale computing and modeling, like scientists or researchers; and the Small Business Server SKU, a bundle meant for small companies. Out of all of these, I think the biggest loss for most customers is Small Business Server.
Small businesses do not have large amounts of capital to drop on big IT systems. As such, they find what they can to fit their budget and tend to fall back on lower-cost or free software to fill in the gaps. When I have worked with small business owners in the past, many had their teenage kids "build them a server" and install a Linux variant on it. The kid goes off to school, leaving the business to suffer with a server that cannot be updated for either features or security. In my history, about 30-40% of my consulting calls were this exact scenario.
To remedy this, I would help them find a server that cost more but gave them more bang for their buck. In many cases, it would be a commodity server of some sort running Microsoft Small Business Server. It gave the business owner something familiar in Windows, plus more advanced offerings like Exchange and SQL Server: the ability to run their own messaging and calendaring server in Exchange and a higher-end database server in SQL Server. They could buy software that needed one or the other to gain a competitive advantage over others that did not have these options. All in all, Small Business Server was one of the better ideas that Microsoft came up with.
With the V2 release of Windows Home Server, Microsoft also released a Small Business Server offering related to Home Server. It was a continuation of Small Business Server with the Windows Home Server GUI placed on it, offering easy AD creation, integration with Office 365, and a Premium add-on that gave the business Exchange and SQL on-premise rather than only in the cloud. When I saw this offering, I was thrilled for small business owners. This could have been the "Small Business Server Appliance" operating system that could steamroll the market. After its release, though, all I heard was crickets chirping and deafening silence; the product never got off the ground.
Fast forward to Windows Server 2012, and there are no more Small Business Server SKU announcements today. For a business to replicate this offering, they will need to license either the Essentials or Standard edition, depending on whether they have more than 25 users. Then, the business will have to either license Exchange 2010/2013 (when released) or get their Exchange offering through Office 365. For the SQL services, the business could use the free Express edition of SQL Server, limited by its connections/licensing model, or purchase a larger edition. (For more information on SQL Server 2012 licensing, check out the "Features Supported by the Editions of SQL Server 2012" page on MSDN.)
This is much more expensive than the Small Business Server model and will cause many small businesses to go back and rethink their IT strategy. In this one stroke, Microsoft may re-open the door for free packages like Linux and MySQL, or push businesses toward using desktop operating systems as servers. Someone in Redmond needs to really look at this and remember that small business is a large market for them. Don't just hand it over to the competition.
I know I am a big-time geek. I use my TechNet subscription from Microsoft and run many servers at home, since my day job as Director of IT is a management position, not a technical one, and is not always very technical. In setting up Windows 8 on my ASUS EP121, I wanted to use the Mail app to check my personal mail accounts. I have several and hoped to get the main boxes into the app. It has been a fight to get even one account connected, and it took a troubleshooting tool to get it done. Here's the info on all this …
As I stated, I have a TechNet account and run many of my own servers as testing grounds for Microsoft technologies. One of these servers is an Exchange 2010 system for e-mail. I also have my own Active Directory for my home network, along with my own internal Certificate Authority (CA) for SSL certificates. By importing the CA root onto my machines, I can issue certificates for my internal systems and websites, and my Exchange server uses certificates from this CA for SSL.
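For the curious, getting a client to trust an internal root like this is a one-liner with the PKI cmdlets; the file path below is just a placeholder for wherever you export your root certificate.

```powershell
# Trust my internal CA root on a client machine (path is a placeholder).
Import-Certificate -FilePath "C:\Certs\HomeLabRootCA.cer" -CertStoreLocation Cert:\LocalMachine\Root
```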
After installing the Consumer Preview of Windows 8, I tried to set up my personal Exchange account in the Mail app. After spending about 2 hours on the issue, I could not get a connection. I was tired of hitting my head against the wall and ended up connecting from my Windows 7 desktop using Outlook 2010. *sigh*
I was hoping that the Release Preview of Windows 8 and the refreshed apps would fix this. I was wrong. Hitting the forums and answers sites, I found I was not the only one. Many people fixed this by buying SSL certificates from a public CA like GoDaddy, Comodo or VeriSign. I was determined not to do this. If Microsoft wants to support enterprise users, this is a real need: they will want to use Windows RT devices to connect to their work mail servers, and many companies run their own CA.
After posting in the Answers forum, I got a PM from one of the forum moderators, a person working with Microsoft, either a vendor or a Blue Badge. They wanted more information and asked me to run some tools. Over the course of several contacts, I did exactly what they asked and provided more and more logs. Eventually, they said that it was definitely a problem with my SSL certificate and asked whether I would run Fiddler to see what the problem was. I installed the beta of Fiddler for Windows 8 and its Win8 Loopback Exemption extension to capture traffic from the Metro app. Once Fiddler and its add-on were installed, I fired up Mail. To my amazement, it connected and synced my mail, calendar, and contacts. Everything was working as it normally should. After a quick run-through of my Fiddler settings to ensure I was not ignoring SSL problems, and a couple of other tests, I found myself with a working Mail app.
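From what I can tell, the loopback extension is essentially doing the AppContainer network-isolation exemption for you, something along the lines of the commands below; the package family name is my guess at the Mail/Calendar/People package and may differ on your build.

```powershell
# Exempt the Mail app's AppContainer from network isolation so its traffic
# can reach the local Fiddler proxy (package family name is a guess).
CheckNetIsolation.exe LoopbackExempt -a -n=microsoft.windowscommunicationsapps_8wekyb3d8bbwe

# List current exemptions to confirm the change.
CheckNetIsolation.exe LoopbackExempt -s
```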
After reporting this to the forum moderator, I still have not heard anything from him other than "we are still working on the issue" posts. I find it an interesting workaround for the issue. I wonder if this has anything to do with the known SSL/HTTPS issues in MetroTwit on the Windows 8 desktop. I can also say that the Metro version of IE will not pull up HTTPS pages properly, though they work in the desktop version of IE 10. I hope they fix some of these protocol oddities soon, as the release to manufacturing is around the corner.
UPDATE 1: I also found that the SkyDrive app does not work while Fiddler is running, even though running Fiddler is what gets the Mail app working in my case. More information is located on the Microsoft Answers page for the issue.
This is a topic near and dear to me these days. Having suffered a recent outage at my job with over 9 hours of downtime, this is now a major issue for me to work through. Everyone always gives disaster recovery (DR) lip service: they come up with ways to back up data, provide alternative network access as they can afford it, and try to create plans. My feeling, much like my experience at prior positions, is that no one really invests in DR. I hope to provide a few cautionary tales to help you convince your management to make the investment.
Disaster recovery is insurance. All the investment that is made in DR is insurance against a downtime. At the same time, everyone keeps saying "it will never happen to me." I can provide references that it does happen and the outcomes can be brutal for a business. Downtime can lead to loss of business opportunity, change in customer perception reducing their business with you, loss of customers entirely, or complete collapse and closure of the business. To offset these outcomes, businesses invest in disaster recovery to mitigate the impact of downtimes.
When looking at DR, the first thing to determine is which systems are critical: which systems, if lost, would impact the business most. For a manufacturing company, it could be the control systems for their machinery. For a datacenter, it could be the power and networking systems that keep hosted systems online. For a healthcare company, it could be all the systems involved in patient care. The IT team needs to sit with the business and management teams to determine which systems are critical and all of the infrastructure that supports them.
Now that the critical systems are identified and their infrastructure is mapped, a full risk assessment of those systems and infrastructure needs to be completed. Are there devices that are single points of failure? Can servers be connected to the network over diverse paths, also known as teaming? Can the software be set up with clustering technologies so more than one server can be deployed and kept in sync? Which equipment is oldest and has the highest likelihood of failure? Working through the risk assessment with knowledgeable team members on both the IT and business sides will help find the answers quickly.
Once the risks are identified, the professionals need to step in and make plans to mitigate them. That planning can include duplicate systems, cluster creation, backup and recovery techniques, additional networking equipment and lines, and warm/cold spare hardware, to name a few. Each of these plans needs to be fully thought out, including the costs of creation and ongoing maintenance.
Part of the maintenance of backup systems is using them, a largely overlooked step of DR planning. Both business and IT teams need to role-play disasters to ensure these policies, procedures, and systems will work. These sorts of tests interrupt normal business operations but should be done on a regular basis to ensure all systems are go for a real disaster. After each test, the affected teams should get together and review the test event to improve policies, procedures, or systems in the future.
I know that what I have said so far is something that everyone else has said to their management to push for better DR planning and testing; I have said it myself at times. Having gone through a large outage that affected my company's business brought it to the forefront for me and got the attention of my company, a company that runs 24x7. We lost our primary datacenter, the hosting location for our primary servers and the hub of our network, for approximately 9 hours on a Thursday night, one of our busiest times of the week. While we had some basic processes and procedures in place, it was thanks to the hard-working teams at my company that we made it through the outage.
During the outage, the primary datacenter lost its primary power at the Automatic Transfer Switch (ATS) that lets them select either the utility company or their generators as the power source. Not only did they lose power there, the ATS literally blew up, blowing out part of the wall behind it. While trying to get the datacenter's power back online, they also found that a fuse in the transformer was bad, possibly the cause of the whole problem. To correct the transformer fuse, they would have to fail the second power source over from the utility to the generators so the utility could pull a fuse from that second transformer; the utility crew did not have a spare on hand, and the alternative was waiting up to 2.5 hours for them to fetch one from their warehouse and return.
While seemingly a simple fix, this would have impacted a part of the datacenter that was still operational and hosting one of their biggest customers, and that customer did not want any more change introduced into their hosting systems. As a customer impacted by the continued outage, I pushed the datacenter to start the change with haste. This put the datacenter in the middle between customers.
Eventually, this was resolved and the generator was added to the second circuit, allowing the utility to repair the primary circuit. This is where good process and planning helped my team, because we knew which systems had to be started first and in what order to effectively restart our business. Once we got our systems up, the business teams started cleaning up their issues from the outage.
After the outage, an emphasis was placed on all parts of my company to determine ways to improve our business resilience to outages. This includes alternative network connectivity for outages, secondary datacenters, hardened systems, and improved policies and procedures to reduce the impact on our customers if we have another outage.
I will admit that I wrote this blog entry a while ago but could not finish it until now. It was difficult to read what I had written, because it brought back all of those memories and feelings as if they were happening again. Major service interruptions are difficult for any group. What made this worse for me was that there was nothing I could do but wait for our hosting provider to fix their facility and services. Since this occurred, they have also taken steps to improve their offering to ensure clients like my company do not suffer through something like this again. Improvement can happen for you directly or for your providers and partners.
The key takeaway is that outages will occur. The better your systems and networks are designed, and the more time invested in both business and IT policies and procedures, the more the impact of downtime can be reduced and customers kept happy during those outages. The best outcome that IT and business teams can hope for is no impact on their customers at all while systems are offline or unavailable. No single system can stay 100% available forever, but well-designed systems and networks can offer the "five 9's of availability" (99.999%), or no more than just over 5 minutes of downtime per year.
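If you want to sanity-check that math yourself, a quick back-of-the-envelope calculation shows how little downtime each extra "9" allows:

```powershell
# Allowed downtime per year for a few availability targets.
# 99.999% ("five 9's") works out to roughly 5.3 minutes per year.
$minutesPerYear = 365.25 * 24 * 60
foreach ($target in 99.9, 99.99, 99.999) {
    $allowed = $minutesPerYear * (1 - $target / 100)
    "{0}% availability allows about {1:N1} minutes of downtime per year" -f $target, $allowed
}
```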
What are you doing for your disaster recovery? Is it even a thought for you or your company?
Well, I have been using my Surface RT for over a week now and have been finding its finer points that work for me and the things that drive me absolutely batty. Overall, I have found the Surface RT to be a complete replacement for my Asus EP-121 slate. However, I will not be selling or giving away my EP-121 just yet.
Let me explain my main uses for computers in my life. As a Director of IT in my day job, I am surrounded by technology a lot. My current "arsenal" of systems includes:
That is just my home workstations and servers and my workstation at work. That doesn't include my phones or other mobile tech.
Since Windows 8 CP, I have had Windows 8 installed on my Asus EP-121 and enjoyed the environment immensely. I knew from that experience that touch was going to be a key to the success of Windows 8 in general. The UI felt comfortable with mouse and keyboard but was geared for the touch interaction. The overall experience was good, except for driver management and occasional crashes.
Move the calendar to July and Microsoft's announcement of the Surface line. I was jazzed to hear about this foray by Microsoft into hardware. To date, none of the slates/tablets that the partners put out delivered the experience. I did like my Asus EP-121, and Samsung had a solid device in its Series 7 slates, but Microsoft was going to put a flag in the ground saying this was to be the premier experience for users on the Windows 8 and Windows RT platforms. I spent a lot of time poring over the specs and restrictions of each system, which drove me to an idea.
Starting in July, I restricted my use of the EP-121 to what would be available to me on the Surface RT: I would use Metro apps, Office apps (Word, Excel, PowerPoint and OneNote only) and other built-ins. My mobile digital life revolved around this device and it worked out pretty well for me.
Flash-forward to October 26th and the arrival of my Surface RT; my excitement was immeasurable. I busted it out and started to play with it over the weekend. So far, so good. I have now been using the Surface RT all week for both personal and work activities. Here's what I have found so far:
I go back to my original post on the Surface RT and re-affirm the purchase criteria:
How has your experience with the Surface RT been? Are you a developer and just got yours at BUILD? If you own another tablet, are you considering a Surface RT?
As you might have read, I moved my website onto Azure a couple of weeks ago and have not looked back at all. Well, okay: two events made me rethink my strategy around hosting on Azure. One was my own doing and the other is a conflict between DotNetNuke and the Azure SQL model. Both were resolved and I am again 100% on hosting via Azure, until the next problem rears its ugly head.
Let's review how I got to today. First, I started out on Azure with my DotNetNuke instance using the DotNetNuke Azure Accelerator. It was a miserable failure and I was floundering. I also had other issues going on that night with various technologies and decided to skip it. Then, I found the ease of setting up my Azure hosted DotNetNuke CMS system. Success!
Let's move on to last Saturday, March 2nd. I decided to do some re-configuring of the website on Azure. First thing, I reviewed my account and found my bandwidth and processing needs were pushing the limits of the free account, so I had to change from the "Free" web hosting instance to the "Shared" model. On top of that change, I wanted the URL to be my own website's URL and not the azurewebsites.net address that is created when you set up a website on Azure. Lastly, I wanted to use a publishing system so I could upload changes to my site when updates came out. In my case, the only one I had some experience with (and not very much, as I found out) was Git, but I did not want to tie my Azure site to GitHub, so I selected local Git on my desktop. With all of these actions, I pulled out the gun, filled and loaded the magazine, chambered a round, and pointed it at my foot.
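For reference, the local Git flow I set up looked roughly like the commands below; the folder and the Git remote URL are placeholders, since the real URL comes from the Azure Portal once Git publishing is enabled.

```powershell
# Rough sketch of the local Git deployment I configured (paths and URL are placeholders).
cd C:\Sites\MyDotNetNukeSite
git init
git add -A
git commit -m "Initial copy of the site"

# The remote URL is generated by Azure when Git publishing is turned on.
git remote add azure https://deployuser@mysite.scm.azurewebsites.net/mysite.git
git push azure master
```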
Sunday morning rolls around and I get a text message page at 6:30 am: my Azure website is offline. HUH? How can it be offline? Did Azure have another one of their illustrious outages? Looking at the site on my phone, I got a 502 error. Ummmm … "Bad Gateway"??? Thinking my DNS was having issues, I went to the default Azure website URL and got slapped with another 502 error. My site was down! Jumping out of bed, I stumbled to my computer and started to look at the issue. I pulled up the Azure Portal, my site, my monitoring services and my VM-hosted mail server to get an external perspective on the issue. No matter how many times I pressed SHIFT-F5, the site was down. I checked all browsers and still the same. I had the monitoring service check from all of its servers; still down. Looking through the Azure portal, nothing seemed to be misconfigured. Checking the Azure DB, no issues were seen there. The last check was the webserver logs from Azure; the logs did not show anyone visiting the site. Huh? How could my attempts from my phone, home computer and hosted VM not register in the logs? I restarted the website services and still nothing in the logs. One more SHIFT-F5 and "Ta da!", the website was functional. HUH? BLAM! That hurt.
I don't like having mysteries. One of the toughest things for me in my IT world is to have something fix itself without knowing the root cause. Many of you might remember IBM's commercials about "Self-Healing Server Pixie Dust". I mock these commercials because some parts of servers can fix themselves but others cannot. System admins are still a necessary group of people no matter what technologies you add to hardware or software, and giving those professionals the information they need to perform good root cause analysis is more important than self-healing. Yet, this is what I was looking at. Nothing in the logs, in the stats, nor in the code told me what was wrong. Nothing like this had happened in the 7 days I was hosting on the "Free" model. Being a good IT operations person, I started rolling back my changes. Doing the easy stuff first, I reversed the DNS work and then went to breakfast. During my meal, I got 10 pages that my site was up, then down, then up, then … well, you get the idea. After breakfast, I went home and switched the site back to the "Free" model. I waited for any changes and was met with similar pages, watching my site flip between non-responsive and responsive. My final thought was that the problem must be in the Git deployment system.
The story turns very interesting at this point. Reviewing the settings in Azure, there is no way for an administrator to remove a deployment system from a website: no mechanism exists in the Azure Portal to change it once a deployment system is selected. I was stuck with an unstable site and no way to revert what I did. It seems Azure's method is to just recreate the site. I copied the code from my Azure website to my local computer, deleted the Azure website, created a new one in Azure, and copied the code back up from my desktop. Thanks to many factors, the file copying seemed to take hours, though in reality it took 35 minutes for both the download and the upload. I clicked on the link for the new site and ".NET ERROR". A huge sigh and facepalm later, I delved into what was going on. DotNetNuke was missing key files; my copy over the internet did not include them. Instead of trying to figure out where I went wrong, I reviewed what I had: an Azure website with bad code and an Azure SQL DB with my data. To make it easy on myself, I decided to build a new DotNetNuke installation from scratch with a new DB and then copy my blog data back in to complete the work. After approximately 2 hours of work, my site was back up and running again on the Azure URL. Success!
Going over all of the changes I wanted to make, I decided to separate them out and leave each in place for 24 hours to verify that it would not affect my site. The critical change was moving from the "Free" mode to the "Shared" mode for the website; Azure would block the site if I did not do this because I was over my resource limits. This was a no-brainer, so it was my first change. I re-enabled my redirect from the server that hosted this site before and all was working again. Monday night rolls around and all has been stable. My next change, pointing the URL to my own domain name, was prepped and executed. My site was stable for the rest of the night and into the next day. My analysis was correct: the configuration of Git as a "publishing" system was the cause of my outages on Sunday. Tuesday night led to a lot of review of Azure web publishing. All of the information I found led me to my final conclusion: I am not developing my own code and do not need publishing. None of the systems would help me and only looked to make things more difficult. In its current mode, I can FTP files to and from the site, which is good enough for me.
Let's move on to Wednesday. I received a notice from DotNetNuke that they had released 7.0.4 of their system, and my site was running 7.0.3. I should upgrade it to make sure I am safe, secure and stable, right? As I started to download the code for the update, I got the gun back out again, filled and loaded that magazine, chambered a round, and aimed it right next to the hole I put through my foot on Sunday. Using FTP, I uploaded the update code and pulled up the upgrade installation page. I waited for the upgrade to complete while working through my e-mail. When it finished, I turned and saw "Completed with errors". BLAM! I have got to stop shooting myself like this.
One of the modern advantages of DotNetNuke is the logging that upgrades and installs now produce. I was able to pull up the installation log and get the exact error messages from the upgrade: 3 SQL errors thrown while processing the SQL upgrade statements. Looking at each one, the error messages were confusing to me. In two of the errors, the upgrade tried to determine if an index was in place and then remove that index to replace it with a new one. Yet, when this ran against my Azure DB, it threw an error saying "DROP INDEX with two-part name is not supported in this version of SQL Server". How was I going to fix this? For those of you that don't know, my start in IT was in SQL DBA work and programming. I dug out my rusty SQL skills and started working through the database alongside the MSDN documentation for Azure SQL. In no time, I figured out what I needed to do to modify the DotNetNuke code and run the SQL statements against my Azure SQL DB. The third error was even more interesting. The DotNetNuke code wanted to verify that a default value was set for a column in one of the tables. The way this is normally done in SQL Server is to query the sys.sysconstraints system view. The problem is that Azure SQL DB has no sysconstraints view, so the SQL statement returned "Invalid object name 'sysconstraints'". More digging and I found my answer: Azure SQL has the newer catalog views check_constraints, default_constraints, and key_constraints available. A quick change to the default_constraints view and I found that the desired default was in place. My upgrade was now complete and a success.
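In case anyone hits the same two walls, here is roughly what the fixes looked like when run from PowerShell with Invoke-Sqlcmd (from the SQL Server PowerShell module); the server, database, table, index and column names are placeholders rather than the actual DotNetNuke objects.

```powershell
# Connection details are placeholders for my Azure SQL DB.
$db = @{
    ServerInstance = "myserver.database.windows.net"
    Database       = "DotNetNukeDb"
    Username       = "dbadmin"
    Password       = "<password>"
}

# Fix 1: Azure SQL rejects the old two-part DROP INDEX syntax
# ("DROP INDEX MyTable.IX_MyIndex"), so use the ON form instead.
Invoke-Sqlcmd @db -Query "DROP INDEX IX_MyIndex ON dbo.MyTable;"

# Fix 2: sys.sysconstraints does not exist in Azure SQL, so check for the
# column default through the sys.default_constraints catalog view instead.
Invoke-Sqlcmd @db -Query @"
SELECT dc.name, dc.definition
FROM sys.default_constraints AS dc
JOIN sys.columns AS c
  ON c.object_id = dc.parent_object_id
 AND c.column_id = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID('dbo.MyTable')
  AND c.name = 'MyColumn';
"@
```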
As you can see, I did all of the damage myself; I cannot blame Azure for it. My impatience, diving in before reading all the way through, caused my own downtime. I have no doubt my thrifty behavior will also be my downfall when Azure has any sort of outage in the US West Websites or SQL DB layers. If I want a website that will not go down, I need to create and pay for the Azure infrastructure to do that. For now, I am super happy with my decision. To the cloud!
Are you thinking about moving your website into a cloud provider? If not, what is stopping you from doing that? Post your questions and comments below.