Monday, August 21, 2017

Azure Active Directory (AAD): Where Is My Data Stored?

Situation
A customer wants to use Azure Active Directory (AAD) but needs to know where the data (like user names, credentials and attributes) is stored. In itself a solid question. However, the answer wasn’t easily found. Or rather, it was quite obscure.

The basics
Before the answer is found (and clarified) one must familiarize oneself with some Azure ‘slang’. In this posting I limit myself to the terms related to this article.

  • Geo: Abbreviation for geography. At this moment Azure is present in 13 geos, and two more have been announced (France & South Africa).
  • Region: Can be looked upon as one HUGE data center, hosting many Azure services. For instance, there is an Azure region in Amsterdam (Netherlands) and one in Dublin (Ireland).
  • Region Pair: Two directly connected Azure regions, placed within the same geography BUT located more than 300 miles apart (when possible). An Azure Region Pair offers benefits like data residency (except for AAD…), Azure system update isolation, platform-provided replication, physical isolation and region recovery order.

An example of a Geo, with its Azure Regions and Region Pair, is Geo Europe. This Geo has two Azure Regions: one in Amsterdam (Netherlands), named West Europe, and the other in Dublin (Ireland), named North Europe. Together they make up the Region Pair for Geo Europe.
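The terms above can be illustrated with a tiny lookup table. This is an illustrative Python sketch, not an Azure API; the pair table lists only the Geo Europe pair described above plus, as far as I recall, the East US/West US pair.

```python
# Illustrative sketch (not an official Azure API): a minimal lookup of
# Azure Region Pairs, seeded with the Geo Europe example from this post.
REGION_PAIRS = {
    "westeurope": "northeurope",  # Amsterdam <-> Dublin (Geo Europe)
    "eastus": "westus",           # example pair in Geo United States
}

def paired_region(region: str) -> str:
    """Return the partner region of an Azure Region Pair."""
    region = region.lower()
    if region in REGION_PAIRS:
        return REGION_PAIRS[region]
    # Pairs are symmetric, so check the reverse direction too.
    for primary, secondary in REGION_PAIRS.items():
        if secondary == region:
            return primary
    raise KeyError(f"No known pair for region '{region}'")
```

So `paired_region("westeurope")` yields `"northeurope"`, the other half of the Geo Europe pair.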

Azure data storage location by default
By default, most Azure services are deployed regionally, enabling the customer to specify the Azure Region where their customer data will be stored. This is the case for VMs, storage and Azure SQL databases.

So when you deploy a set of VMs in the Region West Europe with related storage, that data will be stored in Amsterdam (Netherlands). And yes, some parts of that data will be replicated to North Europe as well, since both Regions are part of the same Region Pair. Reasons for this replication might be of an operational nature and/or stem from data redundancy options selected by the customer.
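For storage accounts, the redundancy option you pick determines whether data leaves the Region. A minimal Python sketch of the idea, assuming only the Geo Europe pair from above (illustrative, not an Azure SDK call):

```python
# Illustrative sketch: which regions end up holding storage data,
# depending on the redundancy option (SKU). The pair mapping is an
# assumption limited to the Geo Europe example in this post.
PAIR = {"westeurope": "northeurope", "northeurope": "westeurope"}

def replication_regions(region: str, sku: str) -> list:
    """Return the regions that will hold copies of the data."""
    region = region.lower()
    if sku.upper() in ("LRS", "ZRS"):      # locally/zone redundant
        return [region]                    # data stays in one region
    if sku.upper() in ("GRS", "RA-GRS"):   # geo-redundant
        return [region, PAIR[region]]      # replicated to the Region Pair
    raise ValueError(f"Unknown SKU '{sku}'")
```

With GRS-style redundancy the paired region shows up as a second storage location, which is exactly the replication behavior described above.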

This is as expected. However, it gets trickier…

USDS (United States of Data Storage)?
However, there ARE exceptions to the above. In quite a few cases customer data will be stored outside the customer-selected Region (and thus outside the Region Pair).

For instance, there are some Azure regional services, like Azure RemoteApp, Microsoft Cognitive Services, preview/beta or other prerelease services, and Azure Security Center, whose data may be transferred and stored globally by Microsoft. And many times it will end up (in some form) in the USA, or United States of Data Storage…

How about AAD?
AAD isn’t an Azure service offered regionally, but is designed to run globally. For any Azure service designed to run globally, the customer can’t specify a certain Region where the data related to that service is stored.

And again, Microsoft isn’t very clear about where exactly that data is stored: ‘…Azure Active Directory, which may store Active Directory data globally…’.

To make it even more confusing, the same website states: ‘…This does not apply to Active Directory deployments in the United States (where Active Directory data is stored solely in the United States) and in Europe (where Active Directory data is stored in Europe or the United States)…’

Azure services which operate globally are:

  • Content Delivery Network (CDN);
  • Azure Active Directory (AAD);
  • Azure Multi-Factor Authentication (AMFA);
  • Services that provide global routing functions and do not themselves process or store customer data (e.g. Traffic Manager, Azure DNS).
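For a data-residency review it can help to flag which services won’t honor a Region choice. A minimal sketch using the service names listed above (a 2017 snapshot from this article, not an authoritative or complete inventory):

```python
# Illustrative sketch for a data-residency review: flag which Azure
# services operate globally and so won't honor a chosen Region. The
# list restates this article's 2017 snapshot; some of these (Traffic
# Manager, Azure DNS) don't store customer data at all, but they still
# deserve a closer look during a residency review.
GLOBAL_SERVICES = {
    "Content Delivery Network",
    "Azure Active Directory",
    "Azure Multi-Factor Authentication",
    "Traffic Manager",
    "Azure DNS",
}

def honors_region_choice(service: str) -> bool:
    """True if the service keeps customer data in the selected Region."""
    return service not in GLOBAL_SERVICES
```

Running a list of planned services through such a check quickly surfaces AAD as the odd one out.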

Still not sure where AAD stores its data…
Because Microsoft is a bit elusive about where EXACTLY AAD data is stored, it’s better to look at how AAD is put together technically. Many times the technicians don’t do politics.

The article Understand Azure Active Directory architecture is quite recent and very informative. It describes the primary and secondary replicas used for storing AAD data. The secondary replicas make it interesting: ‘…which (the secondary replicas) are at data centers that are physically located across different geographies...’.

Basically it tells me that AAD data is replicated globally. And sure, it will turn up in the USA (USDS) as well. As a matter of fact, it will turn up in every Region servicing Office 365, simply because without AAD there is no Office 365 consumption.

And for sure, the same article clarifies it even more with the header Data centers: ‘…Azure AD’s replicas are stored in datacenters located throughout the world…’.

Verdict
When using AAD you know for certain that user data (user names, credentials and metadata for instance) IS replicated globally.

Do I need to worry?
That depends. Know, however, that Microsoft goes to extreme lengths to secure your data. Physical access to their data centers is limited to a subset of highly screened people. On top of it all, Microsoft doesn’t allow governments and agencies to access customer data that easily.

And yes, Microsoft offers the Trusted Cloud. Looking at the sheer amount of certifications and data residency guarantees, you can rest assured that Microsoft does its utmost to offer the most secure cloud services platform ever built.

Alternatives?
Sure, you can look for alternatives, like Amazon AWS S3. However, the metadata related to those ‘buckets’, which also contains customer data, isn’t guaranteed to stay at a certain location either…

Another approach could be using the Azure Geo Germany. Because of VERY strict privacy laws, the exceptions for data storage for regional and global Azure services DON’T apply there…

Recommended resources
For a better understanding of this article I recommend reading these resources:



Cross Post: Speeding up OpsMgr Dashboards Based On The SQL Visualization Library

Dirk Brinkmann (Microsoft SCOM PFE, based in Germany) has posted an excellent article all about an easy (and undocumented) way to speed up the SCOM/OpsMgr dashboards based on the SQL Visualization Library MP.

Go here to read all about it.

Thank you Dirk for sharing!

Largest Microsoft Ebook Giveaway!

Ever wanted to know anything about the latest Microsoft technologies, but were afraid to BUY an ebook because today’s technologies are changing too fast? So that what you buy today is outdated tomorrow?

Fear no longer! Simply download a FREE Microsoft ebook on the topic you want to know more about, and be done with it. Oh, and because it’s FREE, why not download many more Microsoft ebooks?

Want to know more? Hunger for more knowledge? Looking for FREE ebooks, reference guides, Step-By-Step Guides, and other informational resources? Go here and be AMAZED, just like me.

A BIG thanks to Microsoft!

PDF: Overview of Microsoft Azure compliance

When you’re about to use Azure and want to know whether it’s compliant with the regulations your company has to meet, I strongly advise you to download the PDF Microsoft Azure Compliance Offerings.

As Microsoft describes: ‘…Azure compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. Each offering description in this document provides an up to date scope statement indicating which Azure customer-facing services are in scope for the assessment, as well as links to downloadable resources to assist customers with their own compliance obligations. Azure compliance offerings are grouped into four segments: globally applicable, US government, industry specific, and region/country specific…’

Wednesday, July 26, 2017

Holiday

This blog will be silent for the next few weeks because I am going on holiday, enjoying my family to the fullest.
(Picture from the movie ‘National Lampoon's European Vacation’)

After the holiday ‘I’ll be back’ with quite a few postings, like (but not limited to):

  • The last 2 postings in the series about the future of the System Center stack related to Microsoft’s ‘Mobile First – Cloud First’ strategy;
  • Quite a few postings about Azure (IaaS & management);
  • SCOM updates and the lot.

I wish everybody a nice holiday (if not already enjoying it) and see you all later.

Bye!

Thursday, July 20, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 05 – SCSM


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
04 – SCDPM


In the fifth posting of this series I’ll write about how System Center Service Manager (SCSM) relates to Microsoft’s Mobile First – Cloud First strategy. Like SCOrch, I think SCSM isn’t going to make it to the cloud…

Ever heard of Service Desk?
The very start of SCSM was a bumpy ride. Originally it was code-named Service Desk and was tested back in 2006, with the release scheduled somewhere in 2008. The beta release ran on the 32-bit(!) version of Windows Server 2003, with IIS 6.0, some .NET Framework versions (of course), SQL Server 2005 and SharePoint Server 2007 Enterprise.

Service Desk was really a beast. Terrible to install, a disaster to ‘run’ (it was slooooooooooooooow) and filled to the brim with bugs. Totally unworkable. Back then I was part of a test team which put the ‘latest & greatest’ of Microsoft’s products through their paces. The whole team was amazed at the pre-beta level of it. Never before had we bumped into such crappy software. We even wondered whether we had received the proper beta bits…

So none of us was surprised when Microsoft pulled the plug on it and sent the developers back to their drawing boards. At the beginning of 2008 Microsoft officially announced it was delaying the release until 2010, because the beta release had performance and scalability issues. Duh!

Meanwhile a new name was agreed upon: Service Manager.

2010: Say hello to SCSM 2010
In 2010 the totally rewritten SCSM 2010 was publicly released at MMS, Las Vegas. For sure, the code base for SCSM 2010 was totally new, but somehow the developers had succeeded in bringing back some of the problems which plagued Service Desk: performance and scalability issues… Ouch!

Because SCSM 2010 was really a first version (totally rewritten code, remember?) it missed out on a lot of functionality. As a result Microsoft quickly brought out Service Pack 1 for it, toward the end of 2010. In total 4 cumulative updates were published for SCSM 2010 SP1, alongside a few hotfixes.

From 2012x to 2016 in a nutshell
Sure, with every new version (2012, 2012 SP1, 2012 R2 and 2016) the performance and scalability issues were partially addressed, but they never really disappeared. As a result SCSM has a track record for being slow and resource hungry. For SCSM 2016 Microsoft claims that data processing throughput has been increased by 4 times.

Nonetheless, the requirements for SCSM 2016 are still something to be taken seriously. For instance, Microsoft recommends physical hosts, 8-core CPUs and so on. The number of required systems can run to 10+(!), especially when you want to use Data Warehouse cubes and the lot. Even for enterprises this is quite an investment for just ONE tool.

Also, with every new version additional functionality was added. For instance, SCSM 2016 introduced an HTML-based Self Service Portal. Unfortunately, the first version of that portal had some serious issues, most of them addressed in Update Rollup #2.

All in all, the evolution from SCSM 2010 up to SCSM 2016 UR#2 has been quite a bumpy ride with many challenges and issues.

Deep integration
Of course, SCSM offers a lot of good stuff as well. It’s just that SCSM is – IMHO – the component of the SC stack with the most challenges. One of the things I like about SCSM is the out-of-the-box integration with other tools and environments.

SCSM can integrate with AD and other System Center stack components (SCOM, SCCM, SCVMM and SCOrch). And – still in preview – you can use the IT Service Management Connector (ITSMC) in OMS Log Analytics to centrally monitor and manage work items in SCSM. As a result, the underlying CMDB is enriched with tons of additional information for the contained CIs.

SCSM & Azure
At this moment – besides the earlier mentioned ITSMC in OMS – there are no other Azure Connectors available, made by Microsoft that is. There are some open source efforts, like the Azure Automation SCSM Connector on GitHub. But as far as I know, it isn’t fully functional.

Other companies, like Gridpro and Cireson, offer their own solutions. But since these companies have to earn a living as well, their solutions don’t come for free, adding additional costs to your SCSM investment. Still, some of their solutions resolve SCSM pain points once and for all. So in many cases these products deserve at least a POC.

But still the Azure integration is limited. On top of it all, Microsoft itself doesn’t offer any Azure based SCSM alternatives. Azure Marketplace offers a few third party Service Management solutions (like Avensoft nService for instance) but none of them Microsoft based.

Of course, you could install SCSM on Azure VMs, but you shouldn’t, since it’s a resource-hungry product, which would bump up Azure consumption (and thus the monthly bill) BIG time.

No Roadmap?!
Until now Microsoft has been pretty unclear about their future investments in SCSM. There is no roadmap to be found anywhere. So no one knows – outside Microsoft that is – what will happen with SCSM in the near future. Will there ever be a new version after SCSM 2016? I don’t know for sure. But all the telltale signs say there won’t be…

ServiceNow
In recent years the online service management solution ServiceNow has seen enormous push and growth. Not just in numbers but also in products and services.

Basically ServiceNow delivers – among tons of other things – SCSM functionality in the cloud. Fast and reliable. It just works. Also, it integrates with many environments, tools and the lot.

Verdict
SCSM has a troublesome codebase which isn’t easily converted to Azure without (again) a required rewrite. Looking at where SCSM stands today and the reputation it has, I dare say it’s the end of the line for SCSM. No follow-up in the cloud, nor a phased migration (like SCDPM or SCCM) to it.

Instead Microsoft is silent about the future of SCSM, which in itself says a lot. One doesn’t need to speak in order to get the message across.

Combined with the power of ServiceNow, fully cloud based, it’s time to move on. If you don’t run SCSM now, stay away from it. Because anything you put into that CMDB must be migrated to another Service Management solution sooner or later. Instead it’s better to look for alternatives that use today’s technologies to the fullest, like ServiceNow or Avensoft nService. For sure, there are other offerings as well. POC them, and when they adhere to your company’s standards, use them.

If you’re already running SCSM, upgrade it to the 2016 version. It has Mainstream Support till the 11th of January 2022. Time enough to look for alternatives, whether on-premise or in the cloud. Because SCSM won’t move to the cloud, nor will Microsoft invest heavily in it like it did before it adopted the Mobile First – Cloud First strategy.

So don’t wait until 2022, but move away from SCSM before that year, so you can do things on your own terms and at your own speed, not dictated by an end-of-life date set for an already diminishing System Center stack component.

Coming up next
In the sixth posting of this series I’ll write about SCVMM (System Center Virtual Machine Manager). See you all next time.

Monday, July 17, 2017

Azure Stack and Azure Stack Development Kit Q&A

Since Azure Stack went GA, many questions have come forward. Not only about Azure Stack but also about the Azure Stack Development Kit. I’ll do my best to answer most questions and refer to online resources as well.

01: What’s Azure Stack?
As Microsoft states: ‘Microsoft Azure Stack is a hybrid cloud platform that lets you deliver Azure services from your organization’s datacenter…’. Still it sounds like marketing mumbo jumbo.

Basically it means that with Azure Stack your organization has the same Azure technology available on-premise, deeply integrated with the public Azure. Of course, Azure Stack doesn’t offer the same breadth and depth of services as the public Azure, but it still packs awesome cloud power. It’s to be expected that with future updates Azure Stack will offer more and more public Azure based services and technologies, based on the use cases and demands of existing Azure Stack customers.

And because Azure Stack and the public Azure use the same technologies, the end user experience is fully transparent. The same goes for the administration experience. So basically Azure Stack can be looked upon as an extension of Azure.

So yes, one could look at Azure Stack as a kind of private cloud which can be heavily tied into the public Azure, thus creating a super powered hybrid cloud. But there is more.

02: Does Azure Stack require a permanent connection with public Azure?
No, it doesn’t. You can run Azure Stack either in a Connected scenario or Disconnected scenario. In a Connected scenario Azure Stack has a permanent connection with the public Azure. In a Disconnected scenario, Azure Stack doesn’t have a permanent connection.

Even though the first scenario – Connected – makes the most sense, there are enough valid use cases for the Disconnected scenario as well. Think about areas with low internet connectivity combined with a far-away public Azure region. Or how about hospitals, embassies, military installations and bases? The kind of information kept and processed in places like those makes them valid use cases for the Disconnected scenario.

03: Why should companies use Azure Stack while public Azure offers more services and is more powerful?
Good question! Suppose you’ve got a production facility which generates a HUGE amount of data. That data is processed, and the result sets are used further down the production line. In a public Azure setup it would require an enormous data pipeline to Azure in order to get that data across. And when processed, the result sets have to be sent back as well, which is egress traffic = money. On top of it all there is latency, since the data travels between the factories and Azure.

With Azure Stack, that data is processed locally (no data traffic costs since it’s local LAN, not WAN) and there is little to no latency.
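The cost argument above can be made concrete with some back-of-the-envelope arithmetic. A hedged sketch: the egress price per GB is a made-up placeholder, not actual Azure pricing, so look up the real bandwidth rates for your region.

```python
# Back-of-the-envelope sketch of the egress-cost argument. The price
# per GB is a placeholder assumption, NOT actual Azure pricing.
EGRESS_PRICE_PER_GB = 0.08  # assumed USD/GB, for illustration only

def monthly_egress_cost(gb_per_day: float,
                        price_per_gb: float = EGRESS_PRICE_PER_GB) -> float:
    """Cost of sending result sets back out of public Azure, per 30 days."""
    return gb_per_day * 30 * price_per_gb

# With Azure Stack the same traffic stays on the local LAN, so this
# cost component (and most of the latency) simply disappears.
```

At an assumed $0.08/GB, shipping 100 GB of result sets out of Azure every day already runs to roughly $240 a month, before latency is even considered.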

Another valid use case is app development. Here public Azure is used for development and Azure Stack is used for production, or vice versa.

Or how about sensitive data which – based on regulations and law – isn’t allowed to live in the public cloud? Now you can keep the data onsite (Azure Stack) and use apps living in the public Azure.

And these are just some of the valid use cases for Azure Stack. There are many more, believe me.

04: Does Azure Stack offer the same services as the public Azure?
No, it doesn’t. Which makes sense when you compare the size of an average Azure region to an Azure Stack. However, as stated before, the amount of services offered by Azure Stack will grow in the future, based on customer demand and use/business cases for Azure Stack.

For now(*) Azure Stack offers these foundational services:

  • Compute;
  • Storage;
  • Networking;
  • Key Vault.

On top of it, Azure Stack offers these PaaS services(*):

  • App Service
  • Azure Functions
  • SQL and MySQL databases

(*: This is as per the 10th of July 2017. Since Azure Stack is in constant development, chances are that the amount of services offered by Azure Stack will have changed over time. Please check with Microsoft for the most recent updates and overview of services offered by Azure Stack.)

05: Can I download Azure Stack and install it on spare hardware I’ve got?
No, you can’t. Because Microsoft works hard to offer you the same Azure experience (pay as you go, consume with no worries about the hardware and so on) with Azure Stack, they had to lock down the hardware on which Azure Stack runs.

Therefore Azure Stack is delivered as a whole package, hardware and software integrated into one. For now HPE, Dell EMC and Lenovo deliver Azure Stack with their own hardware. Soon other hardware vendors will follow suit.

06: So I can’t test drive it? How do I know whether Azure Stack works for me?
Sure you can test drive Azure Stack, POC it or use it as a developer environment. For this Microsoft has specifically developed Azure Stack Development Kit.

You can download it for free and install it on hardware of your choice. Of course there are some requirements to be met for this hardware, but still it’s up to you what vendor to use.

07: What’s Azure Stack Development Kit? Can I use it for production?
As Microsoft states: ‘…It’s a single-node version of Azure Stack, which you can use to evaluate and learn about Azure Stack. You can also use Azure Stack Development Kit as a developer environment, where you can develop using consistent APIs and tooling…’

As such Azure Stack Development Kit isn’t meant for production. It’s meant for POCs and stuff like that. Go here to learn more about it.

08: Do I need to pay for Azure Stack?
Sure you do. But the prices are lower compared to the public Azure, which makes sense because your company pays for the hardware and operating costs. Check out this Microsoft Azure Packaging & Pricing Sheet (*) for more information.

(*: Please know this sheet will be updated in the future. As such, just Google for Microsoft Azure Packaging and Pricing Sheet and you’ll find the latest version of it.)

09: Is Azure Stack Development Kit free?
Yes, Azure Stack Development Kit itself is free. However, the moment you connect it to (one of) your Azure subscriptions and start moving on-premise workloads to the public Azure, you will be charged for it.

10: Do you have some useful links for me?
Sure, hang on. Here are some useful links, all about Azure Stack and/or Azure Stack Development Kit:

Thursday, July 13, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 04 – SCDPM


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
03 – SCOrch
05 – SCSM


In the fourth posting of this series I’ll write about how System Center Data Protection Manager (SCDPM) relates to Microsoft’s Mobile First – Cloud First strategy. Even though it’s a bit ‘clouded’, it’s pretty certain SCDPM will move to the cloud, one way or the other. But before I go there, let’s take a few steps back and look at SCDPM itself.

SCDPM
From the very first day it saw the light of day, SCDPM was different from other backup products. For instance, Microsoft positioned it as a RESTORE product, not a backup product. By this Microsoft meant to say that as an SCDPM admin you could easily restore any Microsoft based workload, like SQL, Exchange, SharePoint and so on, WITHOUT having any (deep) understanding of the products involved.

Even though SCDPM’s usability was limited to Microsoft workloads, it offered a solution to the ever growing amount of data to be backed up within a never growing backup window: continuous backup!

Therefore SCDPM offered something new, if only a refreshed approach to the backup challenges faced by many companies back then.

Unfortunately Microsoft dropped the ball on SCDPM some years later, because further development of new functionalities and capabilities was stopped. As such it was overtaken by many other backup vendors, delivering improved implementations of continuous backup and ease of restore jobs.

On top of it all, SCDPM kept its focus on Microsoft based workloads. Only for a short period was SCDPM capable of backing up VMware based VMs (SCDPM 2012 R2 UR#11), a capability abandoned when SCDPM 2016 went RTM. Sure, one of the reasons is that the VMware components which have to be installed on the SCDPM server to support VMware backup aren’t yet supported on Windows Server 2016. Nonetheless, the result is the same: SCDPM covers Microsoft based workloads only.

Combined, this has led to an ever shrinking market for SCDPM. With Microsoft’s strong focus on Azure it looks like SCDPM is going to the cloud, one way or the other.

SCDPM & Azure
Valid backup strategies are vital for any company, whether working on-premise, in the cloud or hybrid. Therefore Azure offers different backup services, which can be confusing, even more so because the starting point for consuming Azure backup services is always the same.

It all starts with creating a Recovery Services Vault, which is an online storage entity in Azure used to hold data such as backup copies, recovery points and backup policies. From there one can configure the backup of Azure based or on-premise based workloads.

When choosing to back up on-premise based workloads there are three options to choose from:

  1. When you’re already using SCDPM, you have to download and install the Microsoft Azure Recovery Services (MARS) Agent:
    image
    The MARS Agent is installed on the SCDPM server. Now SCDPM will be extended from disk-2-disk backup to disk-2-disk-2-cloud backup. The on-premise backup will be used for short-term retention and Azure will be used for long-term retention.


  2. Of course, the MARS Agent can be used without SCDPM as well, in which case you have to install and configure it separately on every server/workstation you want to protect. In bigger environments this creates enormous overhead.

    As such this approach should be avoided and is only viable in smaller environments where you have just a few on-premise laptops/workstations to protect and run everything else in the cloud (Azure/AWS).


  3. When you don’t use SCDPM, you have to download and install Microsoft Azure Backup Server (MABS) v2:
    image

    MABS is actually a FREE and customized version of SCDPM with support for both disk-2-disk backup for local copies and disk-2-disk-2-cloud backup for long-term retention. And contrary to SCDPM, MABS supports the backup of VMware based VMs!

    Of course, the moment you start using Azure for long-term retention, you have to pay for the storage used by your backups. And the moment you restore from Azure to on-premise, or to Azure in another region, you have to pay for the egress traffic.

    On top of it, MABS requires a live Azure subscription. The moment the subscription is deactivated, MABS will stop functioning.


When using a Recovery Services Vault to back up Azure based workloads, you can only back up Azure VMs, through an extension added to each Azure VM. This covers the whole VM and all disks related to that VM. The backup runs only once a day, and a restore can only be done at disk level.
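The three on-premise options above can be summarized as a small decision helper. A hedged sketch that simply restates the guidance in this post; the cutoff for ‘a few’ machines is my own assumption, and real deployments involve more variables (workload types, retention, cost).

```python
# Hedged sketch of the three on-premise backup options described above.
# The rules restate this article's guidance; the machine-count cutoff
# for 'a few' is an assumption, tune it to taste.
def onprem_backup_option(uses_scdpm: bool, machines_to_protect: int) -> str:
    if uses_scdpm:
        # Option 1: extend SCDPM to disk-2-disk-2-cloud via the MARS Agent.
        return "SCDPM + MARS Agent"
    if machines_to_protect <= 5:
        # Option 2: standalone MARS Agent per machine (small shops only,
        # since per-machine configuration creates overhead at scale).
        return "standalone MARS Agent"
    # Option 3: MABS, the free SCDPM-based server (incl. VMware VM support).
    return "Microsoft Azure Backup Server (MABS)"
```

For example, a shop without SCDPM and dozens of servers lands on MABS, while three laptops plus everything-in-the-cloud can get by with standalone MARS Agents.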

Azure Site Recovery
And no, this isn’t everything there is. Another option is Azure Site Recovery.

As Microsoft states: ‘… (it) ensures business continuity by keeping your apps running on VMs and physical servers available if a site goes down. Site Recovery replicates workloads running on VMs and physical servers so that they remain available in a secondary location if the primary site isn't available. It recovers workloads to the primary site when it's up and running again…’

Too many choices to choose from?
As you can see, Azure offers different backup services, aimed at different scenarios. Also, SCDPM can be used together with Azure Backup, turning SCDPM into a hybrid solution.

And SCDPM can be installed on an Azure VM, and the same goes for MABS, enabling you to back up cloud based workloads running on Azure based VMs.

Even more options to choose from! To make it even more confusing, Azure is in a constant state of (r)evolution. What’s lacking today is in preview tomorrow and in production next week. The same goes for Azure backup services and Site Recovery.

Verdict
SCDPM is moving to the cloud. Or better, it has already arrived there. One way is using SCDPM in conjunction with the MARS Agent, another way is installing SCDPM on Azure based VMs. Or instead, using the revamped and customized free version of SCDPM, branded MABS. Which can be installed on-premise or on Azure based VMs.

So there are choices to be made. The right choice depends much on the type of workloads your company is running, combined with the location (on-premise, cloud or hybrid) and the Business Continuity and Disaster Recovery (BCDR) strategy in place.

On top of it, the moment of your decision is also important, simply because Azure backup services are, just like Azure itself, changing and growing by the month. This Microsoft Azure documentation webpage might aid you in making the right decision.

But no matter what the future might bring, one thing is for sure: SCDPM as a local on-premise entity is transforming more and more into a cloud based solution. Of course, when running on-premise or hybrid workloads, there will be a hard requirement for a small on-premise footprint. But more and more the logic, storage and management of it all will move into the cloud.

On top of it all, many backup options will be integrated more and more into specific services. As a result there won’t be 100% coverage offered by SCDPM or the Azure based backup services. In other cases there won’t be an out-of-the-box backup solution available at all. As a result, third parties will jump into that gap created by Microsoft.

A ‘shiny’ example is the backup of Office 365. Since it’s lacking by default and not in Microsoft’s pipeline, Veeam jumped into that gap by offering a solution of their own.

So in the end, the technical solution to your company’s BCDR strategy might turn into a hard-to-manage landscape of different point solutions instead of the ultimate Set & Forget single backup solution…

Coming up next
In the fifth posting of this series I’ll write about SCSM (System Center Service Manager). See you all next time.

Webinar: PowerShell Monitoring Management Pack

SquaredUp will present a webinar on the 19th of July in which they will release their PowerShell Monitoring Management Pack.

During that webinar the new MP will be demonstrated. The developers will take a technical deep-dive and show some examples of use cases for this MP.

With this MP we can put VB scripting behind us and focus on the here and now AND the future, by using PowerShell workflows for SCOM monitoring!

And the price of the MP? SquaredUp is pretty clear about it: ‘…As part of our continuing commitment to the SCOM community we’re extremely excited to announce that we will be making a new PowerShell Monitoring Management Pack freely available to the community, available to download from our site and open-sourced via GitHub…’

Want to know more? Go here and signup for the webinar.

Wednesday, July 5, 2017

Cross-Post: Azure Stack Pre-GA Update

Mark Scholman posted an excellent article about the current status of Azure Stack, just before it becomes GA (Generally Available).

Since this article is an excellent write-up all about Azure Stack, I recommend that anyone interested in it, in any kind of way, read it.

Thanks Mark for sharing!

Azure Tip: How To Restore The Portal To Default

Bumped into this situation myself: I modified the ‘default’ Azure portal dashboard a little bit too much…

So I wanted to go back to the default layout. It took me some time to locate this option, and when I found it I experienced a ‘duh’ moment. In order to save you the same embarrassment I decided to share this tip.

  1. When you need to set the Azure portal default dashboard back to its original settings, go to Portal Settings;
  2. Hit the button Discard modifications. You’ll be shown this screen:
  3. Select Yes. The Azure portal will now freeze for a couple of seconds;
  4. After the temporary freeze, the Azure portal will ‘restart’ like it’s the first time, including the Welcome screen:
  5. Select the option you prefer and presto, the Azure portal default dashboard is back to its original layout.

Good to know when restoring default settings:

  • Your OTHER custom-made dashboards won’t be affected, so they are retained;
  • The previously chosen theme will also be retained.

Tuesday, June 27, 2017

Cross Post: Alternative Logical Disk Space Monitors

Tim McFadden authored an MP containing two new logical disk space Monitors that work solely on the PERCENTAGE of remaining disk space. These Monitors generate a warning Alert at 10% logical disk free space and a critical Alert at 5% logical disk free space.
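
For illustration only (this is my own minimal sketch, not Tim McFadden’s actual MP code), the percentage-only threshold logic of these Monitors boils down to something like this:

```python
# Illustrative sketch: a logical disk space check based SOLELY on the
# percentage of free space left. The 10% / 5% thresholds match the
# warning / critical Alerts described above.

def disk_space_state(free_bytes: int, total_bytes: int,
                     warning_pct: float = 10.0,
                     critical_pct: float = 5.0) -> str:
    """Return 'Healthy', 'Warning' or 'Critical' from free-space percentage."""
    free_pct = free_bytes / total_bytes * 100
    if free_pct <= critical_pct:
        return "Critical"
    if free_pct <= warning_pct:
        return "Warning"
    return "Healthy"

print(disk_space_state(40 * 10**9, 500 * 10**9))   # 8% free  -> Warning
print(disk_space_state(20 * 10**9, 500 * 10**9))   # 4% free  -> Critical
print(disk_space_state(250 * 10**9, 500 * 10**9))  # 50% free -> Healthy
```

Note how a percentage-based check stays meaningful regardless of disk size, which is exactly why it beats fixed-megabyte thresholds.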

These two Monitors are a better approach compared to the overly complex Monitors present in the Server OS MP.

As such I advise anyone running SCOM to take a look at this MP and how it’s configured. Yes, when imported, some additional one-time tasks are required. After that, it’s simply Set & Forget.

Go here for more information about this MP. A BIG thanks to Tim McFadden for providing this MP.

Cross Post: Exchange Server 2013 Extension MP

Volkan Coskun has written an extension MP for Exchange Server 2013. This MP discovers individual Mailbox databases (stand-alone or DAG) and Transport Queues on Exchange servers.

This MP also contains a few Monitors:

  • Check Database Mount Status: Checks whether the DB is mounted or not
  • Mailbox Database LastAnyBackup Check: A modified version of the 2010 MP’s check. The script checks both incremental and full backups; if any backup exists within the configured period, the monitor is healthy.
  • Active Preference Check: Checks whether the database is mounted on Active Preference 1; if the database fails over to any other node, the monitor turns to warning.
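
The LastAnyBackup logic described above can be sketched as follows; this is a hypothetical illustration of the idea (the function name and parameters are mine, not Volkan Coskun’s script):

```python
# Illustrative sketch: the monitor is healthy when EITHER the last full
# OR the last incremental backup falls within the configured period.
from datetime import datetime, timedelta

def last_any_backup_healthy(last_full, last_incremental,
                            max_age_hours, now=None):
    """Return True when any backup exists within the configured window."""
    now = now or datetime.utcnow()
    window = timedelta(hours=max_age_hours)
    backups = [b for b in (last_full, last_incremental) if b is not None]
    return any(now - b <= window for b in backups)

now = datetime(2017, 6, 27, 12, 0)
full = datetime(2017, 6, 20, 12, 0)   # full backup, 7 days old
incr = datetime(2017, 6, 27, 2, 0)    # incremental backup, 10 hours old
print(last_any_backup_healthy(full, incr, max_age_hours=24, now=now))  # True
print(last_any_backup_healthy(full, None, max_age_hours=24, now=now))  # False
```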

The same MP contains 7 performance collection Rules:

  • Database size
  • Database Whitespace
  • Number of Mailboxes in Database
  • Local Mail Flow latency (uses Test-Mailflow)
  • Login Latency (uses Test-MAPIConnectivity)
  • Last full backup age
  • Last incremental backup age

Haven’t tested this MP myself yet, but it looks promising. Of course, as it goes with ANY new MP: TEST it first before rolling it out in production.

Go here for more information about this MP.

New MP: Microsoft Azure Stack (MAS) MP, Version 1.0.1.0

Some time ago Microsoft released an MP for monitoring the availability of the Microsoft Azure Stack (MAS), version 1.0.1.0. There are some things you must know however:

  1. This MP monitors the availability of the MAS infrastructure running MAS TP3;
  2. Yes, the MAS nodes are totally locked down, so there is no SCOM Agent involved here;
  3. Instead some MAS APIs are leveraged to remotely discover and collect instrumentation information, such as Deployments, Regions and Alerts;
  4. Out of the box, this MP doesn’t do anything. After import additional actions are required;
  5. Concurrent monitoring of multiple regions has not been tested with this MP;
  6. For MAS deployments using AAD, the SCOM Management Server requires a connection with Azure. This can also be done from the system running the SCOM Console, used for configuring the MAS MP;
  7. .NET Framework 4.5 MUST be installed on all SCOM Management Servers and systems running the SCOM Console;
  8. The SSL certificate provided for the Microsoft Azure Stack deployment of Azure Resource Manager, must be installed in the Trusted Root Certificate Authority Store of all SCOM Management Servers and the computer(s) with the SCOM Console used for configuring the MAS MP;
  9. When SPN is used for authentication, the same certificate created along the SPN must be installed on all SCOM Management Servers and the computer(s) with the SCOM Console used for configuring the MAS MP;
  10. The account credentials which have Owner rights to the Default Provider Subscription of MAS (mostly the Azure Stack Service Administrator account) are required when configuring the MAS MP.

The MP and its related guide (PLEASE RTFM!!!) can be found here.

Monday, June 19, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 03 – SCOrch


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
02 – SCCM
04 – SCDPM
05 – SCSM


In the third posting of this series I’ll write about how System Center Orchestrator (SCOrch) relates to Microsoft’s Mobile First – Cloud First strategy. And as stated at the end of the second posting of this series, SCOrch isn’t in good shape at all…

SCOrch
Sure, there is SCOrch 2016. And YES, its Mainstream Support End Date is set to the 11th of January 2022, just like the whole SC 2016 stack. The Integration Packs for SCOrch 2016 are also available for download. So on the outside all seems to be just fine. SCOrch is alive and kicking!

But wait! Hold your horses! Because here the earlier mentioned iceberg comes into play. Time to take a look at what’s UNDER the waterline, outside the regular view…

Yikes! x86 (32-bit) ONLY…
The days when 64-bit workloads were special are long gone. All important Microsoft products and services are 64-bit based. Meaning, x86 (32-bit) isn’t the default anymore. Nonetheless, SCOrch 2016 is still x86 based and there aren’t any plans at Microsoft to rewrite the code for x64.

Therefore SCOrch native PowerShell (PS) execution runs in a v2.0, 32-bit PowerShell session, causing all kinds of issues. Sure, there are workarounds, but workarounds they remain.

Even though SCOrch packs some serious power, the x86 limitation is something to reckon with.

The ‘Engine’ & the ‘Graphical Editor’
These are crucial parts of any automation tool, SCOrch included:

  1. The ‘Engine’ enables the automation tooling to ‘translate’ the activities defined in the runbook (e.g. running a script, stopping a service, creating a folder, etc.). SCOrch runs its own runbook engine, using its own proprietary runbook format.


  2. The ‘Graphical Editor’ allows for a ‘drag & drop’ experience when creating new runbooks/workflows (e.g. when the print spooler service stops, restart it; wait 2 minutes and check the state of the spooler service again; when started, close the related Alert; when still not running, create a ticket and escalate it to the right tiers).

    SCOrch brought this ‘drag & drop’ experience to a whole new level because it doesn’t require any scripting. Just drag & drop the required activities – from the loaded integration packs – to your ‘canvas’, connect them as required, apply filters/criteria and so on, and be ‘done’ with it. Of course, good runbook authoring is far more complicated; all I am trying to do here is share the basics of how it’s done. The gist of this is that even without any scripting skills, one can build advanced runbooks with SCOrch.

However, things have moved on. In today’s world the on-premises/data center based workloads are often connected to the cloud, whether we’re talking about Azure IaaS/PaaS/SaaS or Office 365, for instance. Whenever automating management of cloud based workloads, PS is a hard requirement, whether you like it or not.

The challenges
And here SCOrch has two serious issues/flaws:

  1. By default SCOrch PS execution runs in a 32-bit PowerShell session, missing out on many advanced PS features introduced in the x64 editions;
  2. By default the SCOrch engine isn’t PS based.

As such, there will always be a translation from the native SCOrch engine to PS. On top of that, there will ALSO be a translation from x86 to x64 and vice versa…

And as it goes with every translation, there will be a performance penalty. Even worse, the whole chain (SCOrch > AA/SMA > targets to hit with a runbook/workflow) becomes longer and therefore more vulnerable to (human) errors. So why not cut out the ‘middle man’, or in this particular case SCOrch, and start directly with PS? After all, SMA and AA both use an identical, x64-based runbook format built on Windows PowerShell Workflow.

No more translation, neither from a proprietary runbook format, nor from x86 PS execution to x64. Nice!

Port SCOrch to x64 and native PS?
For sure, Microsoft could solve it all by rewriting SCOrch so that it would run natively on x64 and use the identical runbook format based on Windows PowerShell Workflow. However, Microsoft isn’t going to do that.

Already in 2014(!) Microsoft was pretty clear about the ‘future’ of SCOrch. In 2015 Microsoft published the SCOrch Migration Toolkit (still in beta?!). Around the same date Microsoft also released the SCOrch Integration Modules, being converted SCOrch Integration Packs, ready for import in AA. In 2016 Microsoft published a blog posting about how to use the previously mentioned tools and modules.

And that’s about all the efforts Microsoft aimed at SCOrch specifically… Instead Microsoft tries to push you to AA or (in some cases) SMA, when using WAP. For most people however, AA is the future (at least Microsoft hopes).

Verdict for SCOrch and its future
Yes. SCOrch 2016 is available. And it still packs a lot of power. BUT at the end of the day, SCOrch 2016 is dead in the water. Not much effort, budget or resources are allocated to it. Only the bare minimum. Sure, it has gotten the 2016 boilerplate AND the related Integration Packs (IPs) are updated to support the 2016 Windows Server workloads. But that’s it.

Nothing new coming out of that door. End of the line for SCOrch 2016 after the 11th of January 2022. Even the recent posting of Microsoft about the new delivery model for the System Center stack is pretty clear about SCOrch: Not a single word about it. Which is a statement in itself.

What to do?
When not using SCOrch, but using other System Center 2016 components of the stack: Think twice. Sure, you already got the licenses for it. But please keep in mind that every effort and investment for SCOrch must be doubled: one time to get it into SCOrch and a second time to get it out to other automation tooling, no matter what you choose.

When using SCOrch already, it’s time to look for alternatives. Also look OUTSIDE the Microsoft boundaries please. POC the alternatives and look at the possibilities to export the SCOrch based runbooks to your alternative choices. Also test the connectivity with the cloud and on-premises/datacenter based workloads. And TEST and EXPERIENCE how the graphical editors function, how easy they are to operate and, last but not least, how easy it is to catch errors and act upon them. AA still has some challenges to address, like ease of operation and capturing errors…

Coming up next
In the fourth posting of this series I’ll write about SCDPM (System Center Data Protection Manager). See you all next time.

Friday, June 16, 2017

!!!Hot News!!! Frequent, Continuous Releases Coming For System Center!!!

Wow! For some time Microsoft told their clients that one day the SCCM release cycle, also known as Current Branch (CB), would come (in one form or another) to the rest of the System Center stack.

And FINALLY Microsoft has released more information about how the System Center stack is going to adapt to a faster release cadence.

In a nutshell, this is going to happen:

  1. Microsoft will be delivering features and enhancements on a faster cadence in the next year;
  2. Main focus here will be on the highest priority needs of Microsoft’s customers across System Center components;
  3. There will be releases TWICE per year, in alignment with the Windows Server semi-annual channel;
  4. A technical preview release is planned in the fall with the first production version available early next calendar year;
  5. There will be subsequent releases approximately every six months;
  6. These releases will be available to System Center customers with active Software Assurance;
  7. SCCM/ConfigMgr will continue to offer three releases per year.

In the first release wave the main focus will be on three SC components:

  1. SCOM(!);
  2. SCDPM;
  3. SCVMM.

Key areas of investment will be:

  1. Support for Windows Server & Linux;
  2. Enhanced performance, usability & reliability;
  3. Extensibility with Azure-based security & management services.

What’s in the pipeline for SCOM specifically?

  1. Expanded HTML5 dashboards (FINALLY!!!);
  2. Enhancements in performance & usability;
  3. More integrations with Azure services (eg. integration with Azure Insight & Analytics Service Map);
  4. Improved monitoring for Linux using a FluentD agent.

On top of it all, YOU can influence the upcoming releases! Therefore Microsoft encourages you to join the System Center Tech Community and UserVoice forums to provide your feedback and suggestions.

Go here to read the posting I got all this information from. A BIG thanks to Peter Daalmans who pointed this posting out to me.

Recap
For me this is THE sign that Microsoft has FINALLY decided about the future of the System Center stack, by delivering insight into how they’re going to execute on their previously made promises to move the SC release cycle closer to the Current Branch (CB) model.

As such I expect the end of notations like SC 2016. It makes sense to introduce a new naming scheme, like YYMM. Example: System Center 1806 refers to the SC release of June 2018. As a result I expect that there will be a new support model as well, just like the one in place for SCCM/ConfigMgr CB.

For now Microsoft is silent about it, but to me it looks like the next logical step in it all. It makes no sense to support the new release cadence with a Mainstream Support End Date like the current SC 2016 has. Even for a company like Microsoft, it would cost far too much money and resources, better used elsewhere (read: Azure).

Nonetheless, this development is a huge step forward and makes the future of the SC stack much brighter. For sure, it doesn’t have an eternal life expectancy. It never had. But at least there is something of a roadmap. And yes, one day the SC stack will be fully incorporated into Azure, which makes sense as well. But at least for now, Microsoft has recognized the significance of the SC stack.

Wednesday, June 14, 2017

SCOM 2016 Must Haves

Good to know:
This posting is based on the power of the community, since it recommends MPs, Best Practices and so on, all publicly available for free, shared under the motto ‘Sharing is caring’. So all credits should go to the people who made this possible. This posting is nothing but a referral to all content mentioned in it.

Why this posting?
’SCOM 2016 is just a little bit more complex than Notepad,’ I often tell my customers. I am just trying to get the message across that even though SCOM packs quite awesome monitoring power, it still needs attention and knowledge in order to get the most out of it.

Even with the cloud in general and OMS more specifically, SCOM still deserves its own place and delivers ROI for the years to come. And NO, OMS isn’t SCOM! Enough about that, time to move on…

Nonetheless, everything making SCOM 2016 more robust and/or easier to maintain is a welcome effort. And not just that, it should be used to the fullest extent.

Hence this posting, in which I try to point out the best MPs, fixes, workarounds, tweaks & tricks, all aimed at making your life as a SCOM admin easier. Since content comes and goes, this posting will be updated when required.

I’ve grouped the topics into various areas, trying to make them more accessible for you. There is much to share, so let’s start.


01 – SCOM Web Console REPLACEMENT
Ouch! If there is a SCOM component I really dislike, it’s the SCOM WEB Console. Why? It’s too slow, STILL has Silverlight dependencies (yikes!) and misses out on a lot of functionality. As such it’s quite dysfunctional and quite likely to become a BoS (Blob of Software) instead of a much-used SCOM component… Therefore, most of the time I simply don’t install it.

Still, a FUNCTIONAL SCOM Web Console would be great. And when done right, it could be used as a replacement for the SCOM GUI (SCOM Console). But what to use? And when there’s an alternative, for what price?

Stop searching! The SCOM Web Console (and even SCOM GUI) alternative is already there! And yes, it’s a commercial solution. But wait! It has a FREE version, titled Community Edition! It’s HTML5 driven and taps into BOTH SCOM SQL databases, enabling the user to consume both data sets in ONE screen. So you can look at current operational data and cross-reference it with data contained in the Data Warehouse!

And not just that, but it’s FAST as well! And I mean REALLY fast!

For many users this product has become a full replacement for BOTH SCOM Consoles. As a result the SCOM GUI is only used for SCOM maintenance by the SCOM admins. The consumption of SCOM data, state information and alerts however is mostly done by using the HTML5 Console.

Yes, I am talking about SquaredUp here. Go here to check it out. Click on pricing to see the available versions, ranging from FREE(!) to Enterprise Application Monitoring.

Oh, and while you’re at it, check out their new Visual Application Discovery & Analysis (VADA) proposition, enabling end users(!) to automatically map the application topologies they’re responsible for, all in the matter of minutes!

Advice: Download the CE version and be amazed at how FAST and good a SCOM Console can be!


02 – Automating SCOM maintenance & checks
I know. The name implies SCOM 2012. But guess what? SCOM 2016 is based on SCOM 2012 R2. As such the MP I am about to recommend works just fine in SCOM 2016 environments as well.

Whenever you’re running SCOM 2016 I strongly advise you to import AND tune the OpsMgr 2012 Self Maintenance MP. It helps you to automate many things AND is capable of preventing SCOM MS servers from being put into Maintenance Mode (MM). When that happens (and the MP is properly configured!), this MP will remove these SCOM MS servers from MM! It’s also capable of exporting ALL MPs on a regular basis and keeping an archive of these exports for as many days as you prefer.

Please know that ONLY importing this MP won’t do. It requires some tuning, otherwise nothing will happen. Gladly Tao Yang (the author of this MP) provided a well-written guide, explaining EVERYTHING! So RTFM is key here.

Advice: This MP is a MUST for any SCOM 2016 environment. Import and TUNE it.


03 – Prevent SCOM Health Service Restarts (on monitored Windows servers)
The name I am about to mention belongs to a person who has made SCOM a far better product than it ever was. Without his efforts, time and investments SCOM would be far more of a challenge to master.

Yeah, I am talking about Kevin Holman. For anyone working with SCOM he doesn’t need any introduction. One of his postings is all about unnecessary restarts of the SCOM Health Service, the very heart of every SCOM Agent installed on any monitored Windows based system.

The same posting refers to the TechNet Gallery containing an MP addressing the causes of this nagging issue. Please RTFM his posting FIRST before importing the MP. That way you’ll differentiate yourself from the monkey in the zoo pushing a button in order to get a banana without ever understanding the mechanisms behind it…

Advice: Import this MP in EVERY SCOM 2016 environment you own.


04 – Registry tweaks for SCOM MS servers
And yes, he also wrote a posting about recommended registry tweaks for SCOM 2016 Management Servers. And YES, he also provided the commands to roll out those tweaks.

Again: RTFM first before applying them. Alternative: press the button and be amazed when a banana appears out of thin air.

Advice: Make sure to run these registry tweaks on ALL your SCOM 2016 Management Servers.


05 – SQL RunAs Addendum MP
Like I already stated, we, the SCOM users, owe one man in particular a lot of thanks, even when he doesn’t want to hear about it. So it’s the same person we’re talking about here as well.

Until now I haven’t seen any SCOM environment NOT monitoring SQL instances. The SQL MP delivers tons of good information and actionable Alerts on top of it. As such, the SQL MP is imported and configured. The latter WAS quite a challenge, all about making sure SCOM has enough permissions to monitor the SQL instances.

Luckily this difficulty is addressed by the SQL RunAs Addendum MP. Again RTFM! But once read, import the MP and be amazed! Sure, this MP came to be through the effort of many people, so a BIG word of thanks to all involved.

Advice: IMPORT this MP and USE it! It makes your life much easier and saves you lots of time, to be used elsewhere.


06 – Agent Management Pack (MP)
Sure. When SCOM monitors something a Management Pack is required. Without it, NO monitoring. Period. But still, the SCOM Agent running on the monitored Windows Server is also crucial. So all available information on those very same SCOM Agents is welcome, combined with some smart tasks in order to triage or remedy common issues.

Therefore it’s too bad that SCOM, out of the box, lacks many of those things. Sure, the basics are covered, but that leaves a lot of ground uncovered.

Gladly, a community based MP solves this issue. Again RTFM first before importing this MP, to be found here.

Advice: RTFM, import this MP and you’ll soon find yourself wondering how you ever got along WITHOUT it.


07 – Enable proxy on SCOM Agents as default
Whenever SCOM wants to monitor workloads living outside the boundaries of a single server (like SQL, AD and so on) it has to look ‘outside’ that same Windows server. By default the SCOM Agent isn’t allowed to do that, for security reasons.

Sure, people can hack into anything. But to think that a hacker would impersonate a SCOM Health Service workload is something else altogether. Why? Well, the moment a hacker is already that deep into your network, chances are he/she will have found something far more lucrative AND easier to grasp.

Nonetheless, the SCOM Agent proxy is disabled by default. Sure, you can enable the Agent Proxy with a scheduled script. But when you’re already applying that workaround (because that’s what it is…), why not change the source instead and be done with it?

Go here and follow the advice and apply the scripts. From that moment on the SCOM Agent proxy is ENABLED by DEFAULT. Problem solved. Next!

Advice: Enable the SCOM Agent proxy by default and forget about it.


08 – SCOM 2016 System Center Core Monitoring Fix
The System Center Core MP from SCOM 2016 (up to UR#3!) contains some issues, as stated by Lerun on TechNet Gallery: ‘…temporary fix for rules and monitors in the System Center Core Monitoring MP shipped with SCOM 2016 (UR3). Issues arise when using WinRM to extract WMI information for some configurations. The issue is reported to Microsoft, though until they make a fix this is the only workaround except from disabling them…’

RTFM his description and import the MP from TechNet Gallery.

Advice: Import this MP and forget about this issue.


09 – SCOM Health Check Report V3
Okay. This MP was written when SCOM 2016 was only a dream. But it still works with SCOM 2016. Again, RTFM is required here. But again, the guide tells you all there is to know and to DO before importing this MP.

This MP gives you great insight into the health of your SCOM environment and is made by people I highly respect (Pete Zerger and Oskar Landman). Download the MP AND the guide from the TechNet Gallery, RTFM the guide, do as stated in the guide, import the MP and be amazed by the tons of worthwhile insights you get.

Advice: Is the MP already in place? If not, please do so now.


As you can see, for now there are 9(!) tweaks, recommendations, MPs and so on, all enabling you to have a better life with SCOM 2016. Feel free to share your experiences, best practices, tweaks and so on.

Once double-checked, I’ll update this posting accordingly, with your name attached of course!


 


Wednesday, May 24, 2017

System Center 2016 Update Rollup 3 Is Out

Yesterday Microsoft released Update Rollup 3 (UR#3) for System Center 2016. UR#3 contains a bunch of fixes for SCOM 2016 issues. KB4016126 contains the whole list of the fixes for SCOM 2016.

And YES, the earlier mentioned APM issue of the MMA crashing IIS Application Pools running under .NET Framework 2.0 is fixed with this UR!

Tuesday, May 23, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 02 – SCCM


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of it, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
01 – Kickoff
03 – SCOrch
04 – SCDPM
05 – SCSM


In the second posting of this series I’ll write about how SCCM relates to Microsoft’s Mobile First – Cloud First strategy. The reason why I start with SCCM is that this component is quite special compared to the other System Center stack components. For a long time already it has had its own space, even outside the regular SC stack. There is much to tell, so let’s start.

Big dollars
First of all, SCCM is still BIG business for Microsoft. We all know that Microsoft makes a lot of money, so when something is BIG business to them, think BIG as well. Many enterprise customers use SCCM, and not just some parts of it, but to its fullest extent. All this results in SCCM being one of Microsoft’s flagship products/services, thus getting proper funding and resource allocation, combined with a healthy and clear roadmap.

Even though SCCM still has System Center in its name, it’s being pushed outside the regular System Center stack more and more. And yes, I do see (and respect) the suspected reasons behind it all.

Current Branch (CB)
Some time ago SCCM introduced a new approach to software maintenance. As such SCCM no longer adheres to the well-known ‘Mainstream Support & Extended Support’ end date model which is still in place for the other components of the System Center stack.

Instead SCCM is updated on an almost quarterly basis, meaning SCCM gets about 4(!) updates per year! Which is quite impressive. However, this new approach requires new branding AND a new support model. Even for a company like Microsoft it’s infeasible to support a plethora of SCCM versions for many years.

So instead of using year-based branding like SCOM 2016, a new kind of boilerplate was invented, titled the Current Branch (CB) release. The CB releases of SCCM are named SCCM YYMM. Some examples: SCCM 1610, SCCM 1702 and so on. So SCCM 1702 is the CB release of the second month (02) of 2017 (17).
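
As an illustration of this naming scheme (my own sketch, not anything shipped by Microsoft), decoding a YYMM release name is straightforward:

```python
# Illustrative sketch: decode the Current Branch YYMM naming scheme
# described above (e.g. 1702 = February 2017).
import calendar

def decode_cb_release(name: str) -> str:
    """Turn a CB release name like 'SCCM 1702' into a readable date."""
    yymm = name.split()[-1]
    year = 2000 + int(yymm[:2])   # '17' -> 2017
    month = int(yymm[2:])         # '02' -> February
    return f"{calendar.month_name[month]} {year}"

print(decode_cb_release("SCCM 1610"))  # October 2016
print(decode_cb_release("SCCM 1702"))  # February 2017
```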

And not just that, there are even CB releases with a MONTHLY cycle. However, those CB releases are kept inside a small circle consisting of Microsoft itself, some special customers and SCCM MVPs. The details are unknown to me since Microsoft doesn’t talk much about it. Only CB releases which are deemed good and stable enough are pushed out to the public, which in general happens once every 4 months.

Sometimes some of these ‘in between’ CB releases are made available as a TP, Technical Preview. Not meant for production (nor supported!!!), but meant for testing. At this moment SCCM 1704 is the TP.

Why CB?
There are plenty of reasons for the CB approach, like supporting the latest version of Windows 10, which also adheres to a CB based release cycle. So whenever new functionality is introduced with the latest release of Windows 10, the most current CB release of SCCM supports it 100%.

Another reason is that customer feedback is incorporated much faster compared to the old approach, where an update was released once per 1.5 years, if you were lucky. Now, just a few months later, customer requests and feedback are incorporated directly into the latest CB release.

And yes, there is also another reason…

CB and the cloud: SCCM as SaaS!
Sure, with every new CB release of SCCM, you’ll notice that SCCM is tied more and more into the cloud. This goes beyond deeper integration with Windows Intune; it extends to Azure in general. So step by step SCCM is growing into a Software as a Service (SaaS) cloud delivery model.

And the proof is already there. Updating SCCM used to be quite a challenge. Microsoft has addressed this issue quite well, and with every CB release the upgrade process and experience is improved even further.

Since CB saw the light, SCCM can be upgraded quite easily, all powered by Azure. Sure, as an SCCM admin you still have some work to do, but the upgrade process has become quite solid and safe. Just follow the guidelines set out by SCCM itself, and you’ll be okay in most cases. No more Russian roulette here!

How about support for CB releases?
Good question. Like I already stated, CB releases adhere to a new support model as well. And those new support models don’t last years like we see for the rest of the System Center stack, but MONTHS! Which is quite understandable. Instead of Mainstream / Extended Support, SCCM CB adheres to two so-called Servicing Phases:

  1. Security & Critical Updates Servicing Phase;
  2. Security Updates Servicing Phase.

The names of the servicing phases are quite self-explanatory, so no need to repeat them here. The first servicing phase applies to the most current publicly available CB release, and the second servicing phase applies to the CB-1 release, being the CB release preceding the most current one.

How does it work? Let’s take a look at today’s situation. SCCM 1702 is the most current CB release. As such it adheres to the first servicing phase (Security & Critical Updates). Meaning, it’s fully supported by Microsoft: security and critical updates will be released for it.

SCCM 1610 is now the CB-1 release. So this CB release adheres to the second servicing phase (Security Updates). It doesn’t have Microsoft’s full support; instead it will only receive security updates, and that’s it.

Suppose a new SCCM CB release becomes publicly available, let’s say SCCM 1706. Everything will move one rung down the servicing phase ladder:

  • SCCM 1706 will adhere to the first servicing phase (Security & Critical Updates);
  • SCCM 1702 will adhere to the second servicing phase (Security Updates);
  • SCCM 1610 won’t be supported anymore.
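
The servicing-phase ladder described above can be sketched like this (an illustrative model of mine, not an official Microsoft tool):

```python
# Illustrative sketch: the newest CB release gets 'Security & Critical
# Updates', the CB-1 release gets 'Security Updates', and everything
# older drops out of support.

def servicing_phase(release: str, releases_newest_first: list) -> str:
    """Map a CB release to its servicing phase, given the release history."""
    try:
        idx = releases_newest_first.index(release)
    except ValueError:
        return "Unknown release"
    if idx == 0:
        return "Security & Critical Updates"
    if idx == 1:
        return "Security Updates"
    return "Unsupported"

ladder = ["1706", "1702", "1610"]  # after the hypothetical 1706 release
print(servicing_phase("1706", ladder))  # Security & Critical Updates
print(servicing_phase("1702", ladder))  # Security Updates
print(servicing_phase("1610", ladder))  # Unsupported
```

Every new public CB release simply pushes the older ones one rung down this ladder.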

Sure, it forces companies to follow the CB flow as much as possible. But with every new CB release life is made easier because SCCM is growing into SaaS, making the upgrade easier every time.

!!!Spoiler alert!!! CB isn’t just a boiler plate
Please keep in the back of your mind, at least for this series of blog postings, that CB is much more than just a new boilerplate!

As you can see with SCCM, CB encompasses not only a whole new support model (aka Servicing Phases) but also a totally different development cycle: the way customer feedback is processed and decisions are made on whether or not to incorporate it into a future CB release; the way SCCM is being tied more and more into the cloud, growing into a SaaS delivery model; the way SCCM is upgraded from one CB to another.

And so on. And yes, introducing, maintaining and growing the CB model costs money and resources, which are available for SCCM without any doubt. As you’ll see in future postings of this series however, this kind of funding and resourcing is quite different for the other components of the System Center stack.

Verdict for SCCM and its future
Without a doubt, the future of SCCM is okay. For sure, SCCM will be tied more and more into the cloud. But that’s not bad at all. Also, with every CB release SCCM grows further into a SaaS delivery model, enabling you, the administrator, to focus on the FUNCTIONALITY of SCCM instead of working hard just to keep it running…

SCCM adheres fully to Microsoft’s Mobile First – Cloud First strategy. And not just that, it also enables that strategy through the functionality it offers. So whenever you’re working with SCCM, rest assured.

Many changes are ahead for it, but SCCM is in it for the long run, stepping away more and more from the System Center stack as a whole and, as such, creating its own space within the Microsoft cloud portfolio and service offerings.

SCCM is safe and sound and will give you full ROI for many years to come. Simply keep up with the CB pace and you’ll be just fine.

Coming up next
In the third posting of this series I’ll write the epitaph for Orchestrator (SCOrch). I am sorry to bring the bad news, but why lie about it? See you all next time.

Friday, May 19, 2017

‘Mobile First–Cloud First’ Strategy – How About System Center – 01 – Kickoff


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of the topic, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
02 – SCCM
03 – SCOrch
04 – SCDPM
05 – SCSM


In this new series of blog postings I’ll write about the effect of Microsoft’s ‘Mobile First – Cloud First’ strategy on the System Center stack.

This posting is the first of this series.

‘Put your money where your mouth is’
This phrase is most certainly at play when looking at Microsoft’s ‘new’ Mobile First – Cloud First strategy. And not just that, Microsoft has given the phrase ‘Put your money where your mouth is’ a whole new dimension of depth and breadth. Simply because their investments in the cloud (Azure, Office 365, Windows Intune and so on) and everything related, are unprecedented.

Azure regions are added on an almost quarterly basis, while a single Azure region requires a multi-billion dollar investment. Azure itself is growing on a weekly basis: new services are added, while existing ones are modified or extended.

It’s quite safe to say that Microsoft’s Mobile First – Cloud First strategy isn’t marketing mumbo jumbo, but the real deal. Microsoft is changing from a software vendor into a service delivery provider with a global reach. On top of it all, Microsoft is also capable of delivering the cloud to governments, adhering to specific laws and regulations.

The speed of all these changes is enormous. It’s like an oil tanker turning into a speed boat while changing course and direction. As such, one could say that Microsoft is rebuilding itself from the ground up. Nothing is left untouched; even the foundations are rebuilt, or removed when deemed unnecessary.

As a direct result, many well-known Microsoft products are being revamped, especially products which originally had a strong on-premise focus, like Windows Server. These same products are now far easier to integrate with Azure based services. As such they are growing into a more hybrid model, enabling customers to reap the benefits of both worlds: on-premise and the (public) cloud.

How about System Center?
For sure, this massive reinvention of how Microsoft does business is affecting the System Center stack as well. Many components of the System Center stack date from the so called ‘pre-cloud era’, the days when the cloud was nothing but a buzz word. Most workloads and enterprise environments were located in on-premise datacenters. Not much if anything at all was running in any cloud, whether public or private.

Mind you, this is outside SCVMM of course.

The source code of many System Center stack components still reflects that outdated approach. So if Microsoft were to turn the System Center stack into a more hybrid solution, much of that source code would require serious rewrites. That can’t be done without huge investments.

Why this new series of blog postings
So this brings us to the main question on which this series of postings is based: Where does System Center fit into the new ‘Mobile First – Cloud First’ strategy? At this moment the System Center stack looks to be isolated compared to other Microsoft based solutions.

In this series of blog postings I’ll look at each System Center component and how it relates to the new Microsoft. I’ll also write about available Azure based alternatives (if any). The last posting of this series will cover the System Center stack as a whole and whether it still deserves a place in the brave new world, powered by Azure.

I can tell you, many things are happening with the System Center stack. Most of them in plain sight but some of them hidden from your direct line of sight. Just like an iceberg…

So stay tuned. In following articles of this series I’ll show you where and how to look in order to see the whole iceberg…

Tuesday, May 2, 2017

MP Authoring – Quick & Dirty – 05 – Q&A


Advice to the reader
This posting is part of a series of articles. In order to get a full grasp of the topic, I strongly advise you to start at the beginning of the series.

Other postings in the same series:
00 – Introduction
01 – Overview
02 – Authoring the Template MP XML Code
03 - Example Using The Template MP XML Code
04 - Testing The Example MP


In this last posting of the series I’ll do a Q&A in order to answer questions and respond to feedback I’ve received while working on this series. If your question or feedback is missing, don’t hesitate to reach out to me, whether directly or by commenting on this post.

Q01: Is the ‘Quick & Dirty’ approach only doable for ONE Class and a ‘single’ layered application/service?
A01: No, you can add as many Classes as you require. However, there are some things to reckon with:

  1. When monitoring a multi-layered application the ‘Quick & Dirty’ approach may be a way to address it.
  2. However, when there are more than 3 layers, it’s better to look for alternatives.
  3. When adding a new Class to the MP, don’t forget to add the reverse Discovery and Group as well. The Group is required as the target for enabling the Discovery (which is disabled by default, hence REVERSE Discovery).
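The reverse Discovery pattern from point 3 can be sketched in MP XML roughly like this. This is an illustrative fragment only, not code from the series: all IDs, the ‘Windows’ alias and the class/group names are hypothetical placeholders, and the registry expression inside the data source is omitted.

```xml
<!-- Registry-based Discovery, disabled by default: the "reverse" part. -->
<!-- Hypothetical IDs throughout; 'Windows' is assumed to alias Microsoft.Windows.Library. -->
<Discovery ID="MyApp.Server.Discovery" Enabled="false"
           Target="Windows!Microsoft.Windows.Computer"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="MyApp.Server" />
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
    <!-- Registry expression checking for the application's key would go here -->
  </DataSource>
</Discovery>

<!-- Override that enables the Discovery ONLY for members of the Group. -->
<DiscoveryPropertyOverride ID="MyApp.Server.Discovery.Enable"
                           Context="MyApp.Computers.Group"
                           Discovery="MyApp.Server.Discovery"
                           Property="Enabled" Enforced="false">
  <Value>true</Value>
</DiscoveryPropertyOverride>
```

The net effect: the Discovery runs nowhere by default, and an administrator turns monitoring on simply by adding computers to the Group, instead of the Discovery firing on every Windows Computer in the environment.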

 

Q02: I see what you’re trying to achieve. Nonetheless, I prefer to target my Discoveries at registry keys which are more specific to the application/service I author my MP for. Why use your method instead?
A02: For many IT shops, authoring MPs is quite a challenge, whether due to their current workload, available time, budget, resources or knowledge.

For environments like those, custom MP authoring isn’t fun at all. Nonetheless, sometimes they have to deliver custom MPs of their own.

In situations like these many challenges of MP authoring need to be addressed. In my line of work I notice that many times buggy MPs are delivered, resulting in a bad SCOM experience. Many times the bugs in the MPs are based on badly designed Discoveries and poorly defined Classes.

By introducing a template for their MP XML code, containing a predefined Class with a REVERSE Discovery, these two main challenges are properly addressed. On top of it all, it enables IT shops to quickly deliver a custom monitoring solution with proper Classes, Discoveries and monitoring. And it’s far easier to teach them this approach than to take the deep dive into the world of MP Authoring.

Sure, it’s always better to work with registry-based Discoveries targeted at registry keys unique to the workload to be monitored. IT shops capable of that are better off staying away from the ‘Quick & Dirty’ approach altogether.

 

Q03: Do I need to pay for Silect MP Studio in order to use your ‘Quick & Dirty’ approach?
A03: No you don’t. However, there is a small caveat to it. As long as your custom MP can cover the requirements with basic Monitors and Rules, the free version of MP Author will suffice. The FREE version of MP Author allows you to build these Monitors and Rules:

  1. Windows Database Monitor;
  2. Windows Event Monitor;
  3. Windows Performance Monitor;
  4. Windows Script Monitor;
  5. Windows Service Monitor;
  6. Windows Website Monitor;
  7. Windows Event/Alert Rule;
  8. Windows Performance Rule;
  9. Windows Script Performance Rule.

As you can see, an impressive list. The paid version (MP Author Professional) offers on top of the previous list these additional Monitors and Rules:

  1. Windows Process Monitor;
  2. SNMP Probe/Trap Monitor;
  3. Dependency Rollup Monitor;
  4. Aggregate Rollup Monitor;
  5. SNMP Probe Event Rule;
  6. SNMP Probe Performance Rule;
  7. SNMP Trap Event/Alert Rule.

So when requiring SNMP monitoring, you have to buy the Professional version.

 

Q04: I rather stick to the MP authoring tool released for SCOM 2007x. It’s still available and FREE as well. And it allows me to build any Monitor/Rule I need. Why change?
A04: With the introduction of SCOM 2012x, the MP Schema was changed as well, for multiple reasons, among them the extended monitoring capabilities of SCOM 2012x and, later, SCOM 2016.

In the MP authoring tools for SCOM 2007x, the new MP Schema isn’t supported. Nor are the new SCOM 2012x/2016 monitoring features. Sure, any SCOM 2007x MP using the old XML Schema will be converted to the new one. However, the SCOM 2007x MP Authoring tool can’t work with it.

As such, your MP development will suffer sooner or later when using this outdated tool. This tool also has a steep learning curve. In cases like this it’s better to master MP Author and move on to the paid version when required, or (when the proper licenses are in place) to move to VSAE.

 

Q05: I find it much of a coincidence that you post a whole series on MP Authoring using Silect MP Author and that a new version of it is launched soon after. And now you’re also presenting at MP University 2017!
A05: I wish I were part of such a scheme. It would make me earn loads more money (duh!). But let’s put the joke behind us and give a serious answer.

At the moment I started to write this series I had no connections whatsoever with Silect. None. So them bringing out a new version of MP Author is pure coincidence. And also a pain, because I had to redo many screenshots all over again…

Nonetheless, because of this series of postings I got on the radar of Silect. As such they asked me whether I wanted to present a session at their MP University event. That’s all there is to it. Nothing more, nothing less.

And no, I have nothing to do with chemtrails or other conspiracy theories. As much as I would love to, I simply don’t have the time for it.

 

Q06: Do you recommend VSAE over MP Author or vice versa?
A06: There is no one-size-fits-all when it comes down to MP Authoring. Sure, with MP Fragments VSAE enables you to author MPs very quickly. But your company requires Visual Studio licenses for it. When those aren’t in place, don’t use VSAE in a commercial setting, since you’d be in breach of the license agreement.

On top of it all, MP Author is very accessible tooling for non-developers. And with the latest update, MP Author Professional supports the usage of MP fragments as well!

Therefore, the choice is yours, based on your liking, background, requirements and available budget.