Change is something you cannot stop …
Change impacts you in multiple ways …
Change is just a part of the Information Technology industry …
Change is something IT Pros must embrace or be left behind …
Just over a year ago, I went through a pretty big change. I left the company I was working for to head back to Microsoft and join the on-prem SharePoint Custom Portals team within Microsoft IT. Looking back, that year brought many changes to my job, my team, and Microsoft IT in general. Right now, we are in the middle of a major shift toward Modern Engineering practices within Microsoft IT. On top of that, SharePoint is undergoing a massive change of its own around Office 365 and SharePoint on-prem, with new releases slated for later this year. Change has been a constant companion, and one should not expect anything different, especially in the world of IT.
Many people have talked about the change in the world of IT that allows for rapid evolution and growth of systems and teams. One of the names this has been given is "DevOps". Many may argue about what DevOps is and is not, but the view we are taking is one of the better ones I have seen. DevOps is not about tools. DevOps is not about process. DevOps is about philosophy. It achieves agility and growth by having all aspects of IT working together, allowing evolution cycles that deliver the best services possible for the customer or user. It is not about developers setting up servers like operations teams. It is not about operations teams making changes to code bases. It is the feature team working together in the best possible manner to deliver the best products while focusing on the customers or users. That means focusing on the live site and user experience is the key to delivery in this new world. Some are scared by this change, but I see it as where IT needs to focus. Money spent on services should be spent on the best possible experiences for the users.
As we see this change, SharePoint continues to drive toward a major paradigm shift of its own. SharePoint has been slowly moving from on-prem to the cloud with Office 365. While SharePoint is still entrenched in the on-prem world, new features are being created for Office 365 that are not making their way back to on-prem. Things like Delve and Groups are being built on the Office 365 infrastructure and are unlikely to find their way back to the on-prem version. We might see some more feature parity from Office 365 to on-prem, but the Office 365 version will always have more features due to its integration with Exchange and Lync, its scale, and its Azure technologies. It shouldn't be a shock to see that products are changing. To see just how much SharePoint and all of Office 365 is changing, head on over to http://roadmap.office.com.
Now, in my previous roles, I didn't get to dive into the technology because my job was to be the Service Owner or Director of IT. In the Microsoft IT world, the Service Owner is the "buck stops with me" person for the service. They work with partner teams and customers to ensure the service is delivered to the users. They meet with all the various teams, provide data on the service, talk about improvements to the service, and represent the team at all levels. Having been a Service Owner in MSIT earlier, I knew the role and the requirements for the position. In many ways, the Service Owner is the human shield between much of the IT bureaucracy and the team they represent. As I said, I have performed this role before, both at Microsoft and as a Director of IT. It can be a bit of a thankless position; if you are doing a good job, things just work.
So let me talk about why I came back to work at Microsoft. One of the things I was offered by returning was a very technical role as a Senior Service Engineer. My time would be spent working on the technology again, and I have gotten to focus on and play with technology over the past year. Part of that focus was SharePoint 2013 infrastructure upgrades, running SharePoint on Azure, and preparing for the next version of SharePoint. I was not a Service Owner anymore. This was freeing for me, and I have loved the past year. On top of this, I got to work with and for a friend of mine from my first stint at Microsoft. She was a good Service Owner in that she was the human shield allowing me to get work done. You might be wondering where all of this is going … Back in January, my Service Owner gave her notice of intent to leave at the end of the month.
I am now the Service Owner as well as the Senior Service Engineer. I have been working with my peer to determine our roadmap and the future of our service. Wearing both hats, my time is taken up with more meetings, more data gathering, and more worrying about live site issues than when I was just the engineer.
I want to thank my friend, Aliya Kahn, for being that human shield for the past year. In just 4 weeks in this position, I can see exactly what she did for me and the team. Service ownership is very tough and can take a toll on a person professionally and personally. I miss her, her jokes, and her optimism on a daily basis, but change brings opportunities. Here's to those new opportunities.
Many of you might be wondering why I haven't been on social media much as of late. The new responsibilities of the Service Owner position have been keeping me very busy. I am still trying to figure out my schedule while keeping my sanity, but I will be back online soon.
Everybody does it. The New Year drives changes people want to make: New Year's resolutions. Gyms get a large push of visitors. Home stores have higher sales. The New Year causes people to want to effect change in their lives, and entire industries have sprung up around these changes. Many of you might be wondering where I am going with this. Well, I am about to go through my own New Year change.
As of January 24th, I will no longer be the Director of IT Services for Radia Inc., PS. I have spent almost 4 years with Radia since leaving Microsoft in January of 2010. Over those 4 years, I have worked with good people who have a single driving goal: patient care. I was able to bring some new ideas to IT Operations and help Radia get into a Microsoft Enterprise Agreement to manage our software needs. Many folks have told me how I changed the way IT engaged with them to get their needs met. The best thing I was told was that I had a great laugh. At the same time, I was able to grow, learn, and mature. While I have enjoyed the people and where we were going as a team and company, I was approached with a great offer.
Starting on January 27th, I will be returning to Microsoft. Not only will I be returning to Microsoft, I will be returning as a "Blue Badge", or full-time employee. Instead of going back in as a manager, I will be returning as a Senior Service Engineer. My return will be to Microsoft IT, but this time on the SharePoint team. While I am joining a new team, my manager and director are people I know and have worked with and for before. I hope to write more on this blog about my return to Microsoft, as well as what I can share about my projects. Some of you may wonder if I am going to be attending conferences like MS TechEd. I have been fortunate enough to have permission to attend conferences, so expect to see me out and about this year.
Wrapping this up, I want to thank my manager and the folks at Radia for the trust and collaboration I had with them while I was there. I also thank my new manager and director for the new opportunity ahead of me. And I thank all my friends who have believed in me and pushed me forward.
Onward and upward.
As you might have read, I moved my website onto Azure a couple of weeks ago. I have not looked back at all. Well, okay. Two events made me rethink my strategy around hosting on Azure. One was my own doing and the other was a conflict between DotNetNuke and the Azure SQL model. Both were resolved, and I am again 100% on hosting via Azure, until the next problem rears its ugly head.
Let's review how I got to today. First, I started out on Azure with my DotNetNuke instance using the DotNetNuke Azure Accelerator. It was a miserable failure and I was floundering. I also had other issues going on that night with various technologies and decided to skip it. Then, I found how easy it was to set up an Azure-hosted DotNetNuke CMS. Success!
Let's move on to last Saturday, March 2nd. I decided to do some reconfiguring of the website on Azure. First, I reviewed my account: my bandwidth and processing needs were pushing the limits of the free account, so I had to change from the "Free" web hosting instance to the "Shared" model. On top of that change, I wanted the URL to be my own website's URL and not the azurewebsites.net version that is created when you set up a website on Azure. Lastly, I wanted to use a publishing system so I could upload changes to my site when updates came out. The only one I had some experience with (and not very much, as I found out) was Git, but I did not want to tie my Azure site to GitHub, so I selected local Git on my desktop. With all of these actions, I pulled out the gun, filled and loaded the magazine, chambered a round, and pointed it at my foot.
Sunday morning rolls around and I get a text message page at 6:30 am; my Azure website is offline. HUH? How can it be offline? Did Azure have another one of their illustrious outages? Looking at the site on my phone, I got a 502 error. Ummmm … "Bad Gateway"??? Thinking my DNS was having issues, I went to the default Azure website URL and got slapped with another 502 error. My site was down! Jumping out of bed, I fumbled to my computer and started to look at the issue. I pulled up the Azure Portal, my site, my monitoring services, and my VM-hosted mail server to get an external perspective on the issue. No matter how many times I pressed SHIFT-F5, the site was down. I checked all browsers; still the same. I had the monitoring service check from all of its servers; still down. Looking through the Azure Portal, nothing seemed to be misconfigured. Checking the Azure DB, no issues there. The last check was the web server logs from Azure; the logs did not show anyone visiting the site. Huh? How could my attempts from my phone, home computer, and hosted VM not register in the logs? I restarted the website services; still nothing in the logs. One more SHIFT-F5 and "Ta da!", website functional. HUH? BLAM! That hurt.
I don't like having mysteries. One of the toughest things for me in my IT world is to have something fix itself without knowing the root cause. Many of you might remember IBM's commercials around the "Self-Healing Server Pixie Dust". I mock those commercials because some parts of servers can fix themselves but others cannot. System admins are still a necessary group of people no matter what technologies you add to hardware or software, and giving those professionals the information they need to perform good root cause analysis is more important than self-healing. Yet this is what I was looking at: nothing in the logs, the stats, nor the code told me what was wrong. Nothing like this had happened in the 7 days I was hosting on the "Free" model. Being a good IT Operations person, I started rolling back my changes. Doing the easy stuff first, I reversed the DNS work and then went to breakfast. During my meal, I got 10 pages that my site was up, then down, then up, then … well, you get the idea. After breakfast, I went home and switched the site back to the "Free" model. I waited for any changes and was met with similar pages, watching my site go from non-responsive to responsive. My final thought was that the problem must be in the Git deployment system.
The story turns very interesting at this point. Reviewing the settings in Azure, there was no way for an administrator to remove a deployment system from a website. The Azure Portal offers no mechanism to change anything once a deployment system is selected. I was stuck with an unstable site and no way to revert what I had done. It seems Azure's method is to just recreate the site. So I copied the code from my Azure website down to my local computer, deleted the Azure website, created a new one, and copied the code back up. Thanks to many factors, the file copying seemed to take hours, though in reality it took 35 minutes for the download and upload combined. I clicked on the link for the new site and ".NET ERROR". A huge sigh and facepalm later, I delved into what was going on. DotNetNuke was missing key files; my copy had not included them. Instead of trying to figure out where I went wrong, I reviewed what I had: an Azure website with bad code and an Azure SQL DB with my data. To make things easy, I decided to build a new DotNetNuke installation from scratch with a new DB, then copy my blog data back in to complete the work. After approximately 2 hours of work, my site was back up and running on the Azure URL. Success!
Going over all of the changes I wanted to make, I decided to separate them out and leave each one in place for 24 hours to verify it did not affect my site. The critical change was moving from the "Free" mode to the "Shared" mode for the website; Azure would block the site if I did not do this because I was over my resource limits. This was a "no brainer", so it was my first change. I re-enabled my redirect from the server that hosted this site before, and all was working again. Monday night rolled around and all was stable. My next change, pointing the URL to my domain name, was prepped and executed. My site was stable for the rest of the night and into the next day. My analysis was correct: the configuration of Git as a "publishing" system was the cause of my outages on Sunday. Tuesday night led to a lot of review of Azure web publishing. All of the information I found led me to my final conclusion: I am not developing my own code and do not need a publishing system. None of them would help me; they only looked to make things more difficult. In its current mode, I can FTP files up and down from the site, which is good enough for me.
Let's move on to Wednesday. I received a notice from DotNetNuke that they had released 7.0.4 of their system, and my site was running 7.0.3. I should upgrade to make sure I am safe, secure, and stable, right? As I started to download the code for the update, I got the gun back out again, filled and loaded that magazine, chambered a round, and aimed it right next to the hole I put through my foot on Sunday. Using FTP, I uploaded the update code and pulled up the upgrade installation page. I waited for the upgrade to complete while working through my e-mail. When it completed, I turned and saw "Completed with errors". BLAM! I have got to stop shooting myself like this.
One of the modern advantages of DotNetNuke is the logging that upgrades and installs now do. I was able to pull up the installation log and get the exact error messages from the upgrade: 3 SQL errors thrown while processing the SQL upgrade statements. The error messages were confusing to me at first. In two of the errors, the upgrade tried to determine if an index was in place and then remove that index to replace it with a new one. Yet, when this ran against my Azure DB, it threw an error saying "DROP INDEX with two-part name is not supported in this version of SQL Server". How was I going to fix this? For those of you who don't know, my start in IT was as a SQL DBA and programmer. I dug out my rusty SQL skills and worked through the database alongside the MSDN documentation for Azure SQL. In no time, I figured out what I needed to do to modify the DotNetNuke code and run the SQL statements against my Azure SQL DB. The third error was even more interesting. The DotNetNuke code wanted to verify that a default value was set for a column in one of the tables. The way this is normally done in SQL Server is to query the sys.sysconstraints system view. The problem is that Azure SQL DB has no sysconstraints view, so the SQL statement returns "Invalid object name 'sysconstraints'". More digging and I found my answer: Azure SQL has the newer catalog views check_constraints, default_constraints, and key_constraints available. A quick change to the default_constraints view and I confirmed that the desired default was in place. My upgrade was now complete and a success.
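For anyone hitting the same errors, here is a rough sketch of the kind of T-SQL changes involved. The table, index, and column names are placeholders of my own, not the actual DotNetNuke objects:

```sql
-- The old two-part name syntax fails on Azure SQL DB:
--   DROP INDEX dbo.MyTable.IX_MyIndex
-- The supported form names the index first, then the table:
DROP INDEX IX_MyIndex ON dbo.MyTable;

-- sysconstraints does not exist in Azure SQL DB, so queries against it
-- fail with "Invalid object name 'sysconstraints'". Use the
-- sys.default_constraints catalog view to verify a column default:
SELECT dc.name, dc.definition
FROM sys.default_constraints AS dc
JOIN sys.columns AS c
    ON c.object_id = dc.parent_object_id
    AND c.column_id = dc.parent_column_id
WHERE OBJECT_NAME(dc.parent_object_id) = 'MyTable'
    AND c.name = 'MyColumn';
```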
As you can see, I did all of the damage myself; I cannot blame Azure for it. My impatience, not reading all the way through before just getting things going, caused my own downtime. I have no doubt my thrifty behavior will also be my downfall when Azure has any sort of outage in the US West Websites or SQL DB layers. If I want a website that will never go down, I need to create and pay for the Azure infrastructure to do that. For now, I am super happy with my decision. To the cloud!
Are you thinking about moving your website into a cloud provider? If not, what is stopping you from doing that? Post your questions and comments below.
Another monthly "Patch Tuesday" just passed and, like many folks, I updated all my systems with the new updates. I also awoke Wednesday and Thursday to machines that had been rebooted because Windows patched itself. In both cases, I was upset because I had open documents that I had not saved. Meanwhile, to make this easier, many software packages are moving to background updating with no notice to users. I question whether this is really better for users.
Everyone knows about software updates thanks to Microsoft and their Windows/Microsoft Update system. Starting with the "Windows Update" website in the Windows 95 era, updates were delivered via the internet. Later versions of Windows integrated more tightly with the update service, up to the current incarnation of Windows Update built into Windows 7 and the soon-to-be-released Windows 8. Microsoft gives the user many levels of customization around the update process. One option, set as the recommended standard by Microsoft, is to install critical updates and reboot after the installation. This has caused issues for many users when the computer rebooted without letting them know, and users have complained about losing data. That, in turn, has caused Microsoft to provide deep customization around their updates to avoid data loss.
Windows 8 changes this again. Having gone through a few months of patches with my Windows 8 installations, both Consumer Preview and Release Preview, I prefer the new updater. Windows 8 performs the monthly patching using the Windows/Microsoft Update process as before. Users can customize this experience, but the reboot is the key difference: Windows 8 notifies the user that they should reboot within the next 3 days before it is done automatically. Finally, Microsoft is on the right path! The only thing better Microsoft could do is figure out how to apply the updates without requiring reboots. As the Windows NT core becomes more and more modular, this should get easier: only the core elements would require a reboot, while all subsystems could be restarted with new code.
Now, take a look at how Adobe, Mozilla, and Google are doing their updates. All of them have changed how they update their main products: Flash for Adobe, Firefox for Mozilla, and Chrome for Google. The most current versions, as well as earlier versions of Chrome, are now set up to automatically download and install updates. With the default settings, all of them do this without notifying the user that anything has changed. The only way to find the current version is to look in the package's "About" page or screen. I have not yet heard of issues with this process, but a major concern is what happens when a bad release ships. Users would be confused as to why their computer wasn't working. A good example of this was Cisco's firmware update for the Linksys E2700, E3500, and E4500 in late June. The update forced users off their local administrative system and onto a cloud-based system, which had issues of its own around what information it tracked. With no other way to manage their routers, users were given no choice, all caused by automatic updates. Cisco has reversed course, but the episode hurt its perception among users; many are not happy and some are even returning their units.
As a manager of IT services, this is my biggest concern and makes me unwilling to support products that update automatically in the background. Within a managed environment, unannounced changes cause many problems. Microsoft built its monthly update cycle around this reality for enterprise environments; it is truly built for IT management systems. The updates are announced upon delivery, which allows IT teams to review them and determine the risks for the organization. It also allows for testing cycles and deployment systems managed by the IT teams. The new unannounced automated updates allow for none of this.
Despite this, some in the tech world see unannounced automated updates as the best thing for users. One argument is that it is good for developers because products keep improving, likening it to how web applications can be upgraded without user intervention. This is a bad comparison: web applications can be fully tested and conformed to "standards", while applications installed on a user's computer are much harder to verify. Did the software publisher check it in all configurations? That is easier on controlled platforms like Apple's iOS and Mac OS X; with Microsoft's Windows platform and Linux-based operating systems, it cannot be done easily. In one way, the fact that Microsoft can make Windows work on so many different configurations, working with the hardware providers, is absolutely amazing. I suspect that Adobe, Mozilla, and Google do not do this sort of in-depth testing.
I can see automatic unannounced updates being a positive thing for consumer users, but personally I do not like them at all. I have told Adobe to inform me of Flash updates instead of just installing them. When I need Firefox, I use a version that does not auto-update, and I have stayed mostly on IE for my personal use. To my dismay, Microsoft is now going to start performing automatic updates like Chrome and Firefox. My hope is that they offer a management system for IT teams to control this process. Having worked at Microsoft, I wonder what the internal IT teams there think of this automatic update process.
Further automating the update process will keep more users up-to-date and improve the overall security of the internet. Microsoft showed this with the move to the monthly patch process. Currently, statistics from security sources like Kaspersky Lab show a major shift by malware writers from attacking Windows directly to using other software as the attack vector, the most popular being Adobe Flash and Oracle/Sun Java. This lets the malware folks infect more than just Windows, reaching Apple Macs and mobile devices running iOS and Google Android. The response to these threats is to automatically update those attack vectors. This helps users and increases security on the internet, but Microsoft has shown that a standard cadence can work too. Adobe did try a standard cadence for updates to its products but has not been able to keep to it, given the severity of the security issues being patched as of late. Instead of trying to make the cadence work, they are moving to the models popularized by Google and then Mozilla.
The downside to all of this is the platform for upgrades. Every vendor seems to build its own mechanism for monitoring for and applying new updates. Google and Mozilla both now install their own updater service that runs on the computer all the time with administrative privileges; that is the only way for a service to run and install code without user intervention. My IT "spidey senses" go on high alert any time I hear this. Right now, many home computers are likely running 5-10 updater services of some sort. One solution is for the operating system to provide a standard mechanism for this sort of updating. Another is to use the operating system's task scheduler to schedule checks for updates. One great opportunity is the CoApp project headed up by Garrett Serack (@fearthecowboy) with many contributors. This could be a single updater that all packages could use. Some sort of standardized, single point for updates would make users' systems run cleaner and happier.
The issue of unpatched systems on the internet is a major one for all of the computing world, but especially for IT teams and their configuration management. In my review of ITIL/ITSM management philosophies, configuration management is the most critical aspect: controlling change is how an IT team keeps a company running. It is the one area most IT teams do not do well, and it shows. If the push is toward unannounced automatic updates for web browsers while more companies use web tools to run their businesses, how will they verify that all those web tools will keep working with each update? Will they see more Helpdesk calls from users confused when sites and tools don't work? What do you think?
"Cutting the Cord" is a catch phrase that is thrown around in this modern age. The main meaning is ability for many people to remove services that they used to pay for that seems redundant in these changing days, primary being television and telephone services. For me, I am fully cord cut when it comes to telephone and nearly cut with television. I will explain what I have done, how I came about my decisions and what it took to execute my cord cutting. In the end, there is no way to cut all cords unless you want to disconnect from the world and entertainment. Instead, the goal for most cord cutters is to run all of their needs across their data service lines. What you need to do is find your goals for cord cutting and then find what will help you achieve those goals.
The easiest service for me to cut was the telephone. I live on my cell phone; I know many others that do as well. When I had a landline, I was paying nearly $50 a month for something I rarely used. My concerns about dropping landline service centered on people being able to reach me and on emergency services. Since everyone I wanted to hear from had my cell number, the first concern was moot. As for emergency services, the concern is giving emergency crews an accurate location to arrive at. Cell phones are now required to have E911 (Enhanced 911) location services, but this is not a guarantee. Instead, I rely on a little-known fact.
My condo was already wired for phones, and that wiring is still attached to the local phone carrier. I can plug a phone into that line and call 911 without it costing me anything. This is perfect for emergencies, and the 911 operator will have the location information for the line that was established by the phone company. For everything else, I use my cell phone.
My solution for phone service was easy for me, but it won't be for everyone. There are some good alternatives out there, from Vonage and RingCentral, which provide VoIP over your broadband data connection, to Skype and Google Voice, which provide call management and VoIP features as well. Think through what you need for yourself and your family. Then, find the service that provides what you are looking for.
Now that I had started cutting cords, I reviewed my television entertainment needs. These vary from person to person, not just from family to family, and they can change over the years, which means flexibility is key. Let's use myself as a test case. In 2004, I was a background-TV person who left the TV on all the time without really noticing what was on. Over the years, I have changed my consumption habits to enjoy specific programs. These changes both drove and were driven by my choice to cut the cord.
I find that most of the shows I like to watch are available on the main networks, plus specific shows on cable channels. Since I cannot purchase an a la carte cable package and do not want to pay up to $75/mo. for the small number of channels I want, I worked through how to get my shows legally online. I start with the channels my local cable company offers via their "Basic" package (Comcast offers it at $15/mo.). This includes the local network stations (ABC, NBC, CBS, PBS, Fox, WB, CBC, Ion) in HD, Discovery Channel in SD, and several local off-band stations. It also includes stations I do not care for (religious, 24-hour shopping, government access, non-English), just like a normal cable subscription. Depending on your state regulations, not every cable provider offers this basic subscription. You should be able to get these channels with a TV tuner (in a TV or in a computer) that can receive unencrypted digital cable (Clear QAM). However, some companies still require one of their set-top boxes to get even this "basic" package; I will be cutting this service if that happens with Comcast.
On top of the basic cable subscription, I utilize several online services to watch episodes from channels I do not have. Of my desired list of shows, most are available online via services like Netflix ($8/mo.) and Hulu/Hulu Plus (free on the web/$8/mo.). When I add up all the entertainment I get that way, about 90% of the TV shows I want to watch are covered. To fill the last gap, I use Xbox Video (formerly Zune Video) and Amazon Instant Streaming. One or both of these services has the rest of the shows I want, available to purchase with pricing based on the length of the season and the quality. There are other sources like iTunes, but I choose not to use them because my devices are not well matched for it. With a smaller cable subscription and online sources, you can find most of the content you want to watch without paying the high rates of cable. Want even better news? You now have more options for entertainment thanks to the internet.
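To put rough numbers on it using the prices above: $15/mo. for basic cable plus $8/mo. for Netflix plus $8/mo. for Hulu Plus comes to about $31/mo., well under half the $75/mo. cable package I was avoiding, even before any a la carte purchases from Xbox Video or Amazon.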
Remember the change I mentioned, from having the TV on in the background to watching specific shows and paying attention to them? That change alone reduced the amount I was watching and filtered what I watched down to a very specific set of shows. It also added a brand new source of content that most forget is available: the internet. Content creators have started to understand they do not need to work through "traditional" media publishing channels. They can create a website and an RSS feed to launch a video netcast or podcast. Some large media people have jumped over to this new medium, such as Leo Laporte with his TWiT network, Adam Carolla with his Adam Carolla Entertainment network, and former MTV VJ Adam Curry with his Mevio network. Others have been on the internet from inception, like Audible for audiobooks and special-interest sites such as Microsoft's TechNet for IT professionals. In my experience, I find this content better than what comes from the networks and cable, and it keeps me from missing the deluge of cable channels filled with programs I never watched.
Now that you have done some homework on what you want for content, you need to think about how you are going to consume it. Since I started with the notion of replacing television, I will focus on the use of a television in a living room or bedroom. The easiest devices are set-top boxes that surface the content through apps. I have a Roku device in my bedroom and installed a Windows Media Center PC in my living room. I am not the norm here, though, in that I built and managed a computer that acted as a DVR/set-top box. It was the most flexible option and offered all the content, but not all of it in a 10-foot UI; for some content I had to use a web browser with a wireless mouse and keyboard. I was willing to go through that while others are not.
To make things very simple for the average user, you should really look at the Roku devices, which plug into online services with their applications. They offer applications that connect to content services like Netflix, Amazon, Crackle, and Hulu Plus. In addition to the major content services, Roku devices can connect to many new media companies with apps like TWiT, Revision3, The Onion News, and CNET. This gives the Roku devices a big advantage in the fight to be the single box that adds these services. Even some traditional television channels have their own applications on Roku devices, like CNBC, Sail TV, Fox News, and NBC News, offering live feeds from their cable channels online. HBO even has its HBO GO service, targeted at mobile devices like tablets and phones, available on Roku devices; for it to work, you need a cable subscription with HBO added. Roku devices can be easy, cheaper set-top boxes for additional TVs in a house, reducing the need for cable/satellite set-top boxes.
Recently, I changed my living room to use an Xbox 360 as the primary device. With apps for Hulu Plus, Netflix, Amazon, YouTube, and other online services, connectivity to my Media Center for recorded and live content, native support for Xbox Video, and its DVD drive, it is the single device I can use. Since my media collection does not include Blu-rays, the HD content I get is from online sources, so I am not hurt by the Xbox 360's missing Blu-ray drive. If I paid for a full cable subscription, I could even use my Xbox 360 as a set-top box for services like Xfinity and FiOS. Through the Xbox devices, I see Microsoft making a play for the living room via easy-to-use devices, and I get that now. Rumor has it that with the next release of Xbox hardware coming in 2013, we might see a specialized media-only device along with a new gaming unit.
As you have seen, I have done a lot of research on what is best for me, given my consumption of entertainment. What works for me may not work for everyone. One key demographic is families with children. The story here is improving, with a children-focused Netflix view along with specific apps on Roku for kids' programming. Parents need to research what is best, as there is so much content available on the internet. While it can be overwhelming, it is the same thing I would expect most parents to do with other forms of entertainment. Most of the better systems allow for parental controls to manage what kids watch, but nothing beats being there and watching with them to know what they watch.
Cord cutting is possible today, even though we are in its early days. Large media companies are trying to slow or stop it as much as they can to keep their current revenue models flowing dollars to them. A "cord cutter" using legal sources has to accept that some content will not be available for a long time, or ever. One example for me is Game of Thrones from HBO. Without an HBO subscription, which requires a much higher cable package than I had or wanted, I accepted that I would not get it until it was released on Xbox Video or Amazon, nearly 1-2 years later. Spend the time figuring out what you can live with and without, where you can source it, and what device can show it on your preferred screen.
The last thing I will mention is that most of this requires a broadband network connection and can push against usage caps if your plan has them. Since I am an IT professional doing a lot of work online and knew I would be using media services, I purchased business-class internet that provides a guaranteed 25 Mb download with no caps. This is not cheap internet at $110/mo., and if added to everything else, it might push someone back to regular cable service. I use the bandwidth for more than entertainment, so I consider the monthly cost reasonable and would rather put my money there than toward a television subscription. Add it all up for yourself and figure out what works best for you.
What sort of cord cutting have you done? What are your goals with cord cutting? Let us all know through the comments below and help others get out the wire cutters.
Thanks to Mary Jo Foley of ZDNet/CNet, we have finally heard about Microsoft's licensing plans for the new Windows Server 2012 (codenamed "Windows Server 8") to be released this fall. In her article here, she covers the crux of the licensing announcement made by Microsoft, but I want to look at it in a bit more depth.
When you review the article by Mary Jo Foley, you can start to see some of Microsoft's next plays and whom they are going after with their pricing model and their offerings. As she says in her article:
The four SKUs are Foundation (available to OEMs only); Essentials; Standard and Datacenter. The Essentials SKU is for small/mid-size businesses and is limited to 25 users. The Standard and Datacenter SKUs round out the line-up. The former Windows Server Enterprise SKU is gone from the set of offered options.
Microsoft is removing a few of the SKUs. This includes the Enterprise SKU, which was a step between Standard and Datacenter in the 2008/2008 R2 licensing model; the HPC (High Performance Computing) SKU, meant for folks doing large-scale computing and modeling like scientists and researchers; and the Small Business Server SKU, a bundle meant for small companies. Out of all of these SKUs, I think the biggest loss for most customers is Small Business Server.
Small businesses do not have large capital to drop on big IT systems. They find what fits their budget and tend to fall back on lower-cost or free software to fill the gaps. When I worked with small business owners in the past, many had their teenage kids "build them a server" and install a Linux variant on it. The child goes off to school, leaving the business to suffer with a server that cannot be updated for features or security. In my history, about 30-40% of my consulting calls were this exact scenario.
To remedy this, I would help them find a server that cost more but gave them more bang for their buck. In many cases, it would be a commodity server of some sort running Microsoft Small Business Server. It gave the business owner something familiar in Windows, plus more advanced offerings like Exchange and SQL Server: the ability to run their own messaging and calendaring server in Exchange and a higher-end database server in SQL Server. They could buy software that needed one or the other, giving them a competitive advantage over others without these options. All in all, Small Business Server was one of the better ideas Microsoft came up with.
With the V2 release of Windows Home Server, Microsoft also released a Small Business Server variant related to the Home Server. This was a continuation of Small Business Server with the Windows Home Server GUI placed on it. It offered easy AD creation, integration with Office 365, and a Premium add-in that gave the business Exchange and SQL on-premise rather than only in the cloud. When I saw this offering, I was thrilled for small business owners. This could have been the "Small Business Server appliance" operating system that steamrolled the market. After its release, all I heard was crickets chirping and deafening silence; the product never got off the ground.
Fast forward to Windows Server 2012, and there were no Small Business Server SKU announcements today. For a business to replicate this offering, they will need to license either the Essentials or Standard edition, based on whether they have more than 25 users. Then, the business will either have to license Exchange 2010/2013 (when released) or get their Exchange offering through Office 365. For the SQL services, the business could use the free Express edition of SQL Server, limited by its connection/licensing model, or purchase a larger edition of SQL Server. (For more information on SQL Server 2012 licensing, check out the "Features Supported by the Editions of SQL Server 2012" page on MSDN.)
This is much more expensive than the Small Business Server model and will cause many small businesses to go back and rethink their IT strategy. In this one stroke, Microsoft may re-open the door for free packages like Linux and MySQL, or leave businesses using desktop operating systems as servers. Someone in Redmond needs to really look at this and remember that small business is a large market for them. Don't just hand it over to the competition.
I got a great opportunity from Denise Begley of the Microsoft TechEd planning team to talk about my session selection process. She wanted to hear from several returning alumni (including Harjit Dhaliwal and Michael Bender) about their processes to help brand-new attendees and other readers of the official Microsoft TechEd Blog. It was an honor, and I wanted to give back to the TechEd community that has taken me in by offering my view on schedule building. Head on over to the Microsoft TechEd Blog to read Harjit's and Michael's posts. Here is my full process:
Going into TechEd 2014 in Houston, my selection process has changed due to many factors. In the past, I spent a lot of time looking through the schedule and trying to select which sessions I wanted to attend. For a little bit of reference, I have attended the last 3 TechEd North America conferences: Atlanta, Orlando, and New Orleans.
In the first year at Atlanta, I tried to get to a session in every time slot. I listed one session per time period in my schedule and spent my time running, as my sessions were so spread out. If you stayed in one vertical of sessions, you didn't have to go too far, but I was choosing sessions from all over. By the time I made it to many sessions, the rooms were full with no real empty seats to be found. I spent a good amount of time sitting on the floor with my back to the wall, shooting pictures of the slides and trying to scribble a ton of notes.
I learned many lessons from Atlanta and had a different plan of attack for Orlando. I also had the notion of taking more tests for my certifications. The sessions I went to were more focused on technologies I was investing in, and I spent a lot of time in exam preps and taking exams. On top of that, I was doing more meetings in the hallways, the Alumni area, and the Expo floor. I knew I would be able to get videos and slide decks from all the sessions when I got home. The focused sessions got me great content, and I was very happy with the results of the sessions, just not with my test taking.
For New Orleans, I went through the schedule planner and blocked out the specific sessions I did not want to miss. Then, I went through finding the presenters I really like to listen to. Lastly, I filled in my schedule with other sessions that caught my interest. In my OneNote, I noted which items were "have to attend", "nice to attend", and "just filler". I used this to block out my time to meet people and stroll the Expo floor. I also planned multiple sessions per time block in case I couldn't get to a room due to logistics or the room being full. All in all, I was impressed with the strategy.
So, you have heard my thoughts about prior years. Now, I will let you know about this year's TechEd, where I am looking at some changes again for a couple of reasons.
Starting my journey to Houston, I began my session planning by finding all the presenters I want to hear. Yup, that's right, I am starting with the speakers. I looked for sessions from Rick Claus, Joey Snow, Pierre Roman, Mark Minasi, Mark Russinovich, Jessica DeVita, and Ed Horley. Some of these are friends I want to support who have great material to present; others are presenters I want to see in person. Next, I went through by Topic/Product and pulled up items like SharePoint, IaaS, System Center, Windows Azure, and more. Once that filled up my schedule, I looked at time frames with zero or one session and reviewed the specific offerings at those times. The last thing in my schedule build is the Hands On Labs. In years past, I took little or no advantage of them on-site; this year, I might look for more HOL work to try to learn some things that way. I will keep my schedule flexible as well, but the items marked as primary are going to get me there.
Here are some of the sessions I have selected:
OFC-B220 - Stop, Collaborate, and Listen - Jessica DeVita
Jessica is talking about how to think about collaboration even before acquiring tools to do that collaboration. As she says in the description, "Before you start down the path of selecting a tool, you have to determine your organizational readiness and understand the what, why, and how of collaboration systems." With my current job in SharePoint, this is food for thought.
DCIM-B325/326 - The Real-World Guide to Upgrading Your IT Skills AND Your Infrastructure (parts 1 & 2) - Rick Claus, Joey Snow
Are you an IT Pro scared about what "The Cloud" really means for you? As they say in their description, "Two self-proclaimed “Server Huggers” will take you along their journey of how they overcame their apprehension of Cloud technologies to level up their IT Skills. In other words bringing clarity to the role of the IT Professional in a cloud world." I am starting to see where "The Cloud" integrates into my future as an IT Pro but think more folks can get a lot out of this session.
DCIM-B359 - TWC: Pass-the-Hash: How Attackers Spread and How to Stop Them - Mark Russinovich, Nathan Ide
I have heard about this presentation, as it was given at the RSA Conference earlier, and I want to see it myself. Security is a theme for IT Pros going forward, regardless of The Cloud or on-prem. Mark's talks at TechEd always give you the best insights into security.
WIN-B354 - Case of the Unexplained: Troubleshooting with Mark Russinovich
This session is always full! This session is always fun! I got to attend it my first year in Atlanta, and Mark returns every year with new cases of the unexplained. He shows how Sysinternals tools were used to find either bugs in products or malware hidden away on user systems. While it is a full house every year, try to get into this one or watch it later.
OFC-B333 - Microsoft SharePoint on Microsoft Azure VM and Virtual Networks: How the Cloud (IaaS) Changed the Way I Work and Improved Customer Productivity - Patrick Heyde
At the same time Mark has his Case of the Unexplained, Patrick is giving his session on SharePoint on Azure IaaS. Knowing that Mark's session will be full, I am giving serious consideration to skipping it this year to hear Patrick talk about this topic. As a SharePoint engineer, I need to understand this as an option for hosting SharePoint for a group or company in an isolated space while the servers are in the cloud.
DCIM-B373 - How IPv6 Impacts Private Cloud - Ed Horley
Sign up for this session now! Go on … go put it in your schedule. IPv6 is coming for all of us, and Ed is here to help all Windows systems admins understand its impact. Ed has literally written the book on IPv6 for Windows systems admins. This is coming sooner than you might imagine, so pop in and learn all you can.
Those are a few of my selected sessions. How are you planning your schedule for TechEd in Houston this year? Looking forward to seeing everyone there? I sure am. See you in Houston!