Monday, January 30, 2017

UR#12 SCOM 2012 R2 Is Available

Just checked the Microsoft Update Catalog to see whether UR#12 for SCOM 2012 R2 is available. And the answer is yes:

KB3209604 tells more on UR#12 for System Center 2012 R2 as a whole and refers to KB3209587 for more detailed information about UR#12 for SCOM 2012 R2 specifically.

Taken directly from this KB article, this is what UR#12 for SCOM 2012 R2 fixes:

  • When you try to upgrade System Center 2012 R2 Operations Manager Reporting Server to System Center 2016 Operations Manager reporting server, the upgrade fails for the following configuration:
    • Server A is configured as System Center 2012 R2 Operations Manager including Management Server.
    • Server B is configured as System Center 2012 R2 Operations Manager, including Operations Manager Database (OpsMgrDB), Operations Manager Data Warehouse (OpsMgrDW) and Operations Manager Reporting Server.

    ( X ) Management Server Upgraded Check

    The management server to which this component reports has not been upgraded.

  • Recovery tasks on "Computer Not Reachable" messages in the System Center Operations Manager Monitor generate failed logons for System Center Operations Manager Agents that are not part of the same domain as the Management Groups.
  • When a Management Server is removed from the All Management Servers Resource Pool, the monitoring host process does not update the Type Space Cache.
  • SHA1 is deprecated for the System Center 2012 R2 Operations Manager Agent and SHA2 is now supported.
  • Because of incorrect computations of configuration and overrides, some managed entities go into an unmonitored state. This behavior is accompanied by event 1215 errors that are logged in the Operations Manager log.
  • IntelliTrace Profiling workflows fail on certain Windows operating system versions. The workflow cannot resolve Shell32 interface issues correctly.
  • There is a limit of 50 characters on the custom fields in the notification subscription criteria. This update increases the limit to 255 characters.
  • You cannot add Windows Client computers for Operational Insights (OMS) monitoring. This update fixes the OMS Managed Computers wizard in the System Center Operations Manager Administration pane to let you search or add Windows Client computers.
  • When you use the Unix Process Monitoring Template wizard to add a new template to monitor processes on UNIX servers, the monitored data is not inserted into the database. This issue occurs until the Monitoring Host is restarted.

You can download UR#12 for SCOM 2012 R2 from here.

Please TEST this UR#12 FIRST before rolling it out in production environments! Test yourself before you wreck yourself!

Thursday, January 26, 2017

OMS Fun: Monitor Minecraft Server With OMS

Former MVP Anders Bengtsson has written an excellent posting about the power of OMS and how one can use it to monitor all kinds of workloads, even a Minecraft Server!

Want to know more? Go here.

All credits go to Anders Bengtsson of course.

Update Rollup #2 For Remaining SC 2016 Components To Be Released In February 2017…

As posted before, UR#2 for SCVMM 2016 and SCOrch Service Provider Foundation 2016 are already available. As it turns out, for the remaining System Center 2016 components UR#2 will be released in February 2017 (exact date yet unknown):

  • SCOM 2016;
  • SCDPM 2016;
  • SCSM 2016;
  • WAP Websites.

SC 2016 components for which NO UR#2 will be released are:

  • SCOrch;
  • WAP;
  • Service Management Automation.

Please know that sometimes specific SC components aren’t touched by a certain UR. However, the next UR might contain updates for those components.

Want to know more? Go to KB3209601.

System Center 2012 R2 Update Rollup #12 Partially Released

Update 30-01-2017: UR#12 for SCOM 2012 R2 is available now for download. Go here for more information.

Also noticed on the Microsoft Update Catalog that Microsoft is in the process of releasing Update Rollup #12 for System Center 2012 R2:

And again (like UR#2 for System Center 2016) only SCVMM and SCOrch are ‘touched’ but I am pretty sure UR#12 for other System Center 2012 R2 components will be available soon as well.

I’ll update this posting ASAP when there is more to share.

System Center 2016 Update Rollup #2 Partially Released

Update 26-01-2017: Most remaining SC 2016 components will receive UR#2 in February 2017.

Just noticed on the Microsoft Update Catalog that Microsoft is in the process of releasing Update Rollup #2 for System Center 2016:

Until now only SCVMM and SCOrch Service Provider Foundation are ‘touched’ but I am pretty sure UR#2 for other System Center 2016 components will be available soon as well.

I’ll update this posting ASAP when there is more to share.

Tuesday, January 24, 2017

Developer Resources For IT Pro’s

I bumped into an interesting article on the Microsoft Azure Developers blog. It contains links to many resources with information about Azure (of course).

However, what makes this article so interesting is that per item (like IoT, Functions, Bot Service, Container Service and so on) the resources are divided into two categories: Quick and More Time.

I’ve checked some of those resources and I find the Quick items highly accessible and easy to consume, while still providing one with a basic understanding of the topic at hand.

Since today’s world of the IT Pro is moving more into a DevOps world, it’s important to know at least the basics of the coolest new stuff available.

With this article you can cover the basics by using the Quick category per item, and when needed, you can take a deeper dive by using the More Time category.

Test it yourself and go here.

Wednesday, January 18, 2017

WMUG NL #1 Event: All About Windows Server 2016

Windows Management User Group Netherlands (WMUG NL) is going to host their first event of 2017. This event will be about Windows Server 2016. The line-up of speakers is still unknown. But with all the experts they know, I am sure it will be good!

This event is scheduled for the 15th of February. Location will be Pink Elephant in Naarden (Netherlands). This is the agenda:

16:00 – 16:30 Entrance
16:30 – 17:30 Session 1
17:30 – 18:30 Food
18:30 – 19:15 Sponsor session
19:15 – 19:30 Pause
19:30 – 20:30 Session 2
20:30 – 21:30 Drinks

As you can see there are in total two one-hour sessions. One could argue it isn’t much, but I myself prefer quality over quantity.

Want to sign up for this session? Go here.

SQL 2012 SP3 Officially Supported By SCOM 2016

SCOM 2016 and SQL Server 2012x. No…
When SCOM 2016 was in preview, SQL Server 2012x was supported. But when SCOM 2016 (with UR#1) became generally available, the support for SQL Server 2012x was dropped, making the upgrade path from SCOM 2012 R2 UR#11 to SCOM 2016 UR#1 much harder, since most SCOM 2012 R2 environments are running SQL Server 2012x as back end.

It works but no support?
Nonetheless, I successfully upgraded some SCOM 2012 R2 Management Groups to SCOM 2016 UR#1, even though the SQL back ends were running SQL Server 2012 SP3 (or even earlier, like SP2/SP1). And in every case, the upgrade went well and the SCOM 2016 UR#1 MGs are running without a glitch.

In 99% of the cases, this approach (running SCOM 2016 outside the officially supported configuration) is unwanted and strongly advised against. Because when something goes wrong and you contact Microsoft Support, you won’t be eligible for support. Period! No matter how ‘fat’ your support contract might be… Ouch!

SCOM 2016 loves SQL 2012 SP3 now!
Thankfully this has changed. Since Microsoft realised that SQL Server 2012x is the ‘engine’ of many SCOM 2012 R2x environments, they tested SCOM 2016 UR#1 with SQL Server 2012 SP3.

And (as expected), it turns out that SCOM 2016 UR#1 runs great with SQL Server 2012 SP3 as back end! Therefore Microsoft is in the process of updating the related documentation.

But as of yesterday Microsoft officially supports SQL Server 2012 SP3 for SCOM 2016 UR#1!!!

This allows you to upgrade to SCOM 2016 UR#1 in a much simpler way. SQL Server 2014 or later is no longer a hard requirement! Awesome!

I got this from Kevin Holman’s blog, article to be found here.

Friday, January 13, 2017

SQL Server Reporting Services 2016 RTM Bug ‘Could not load folder contents. Something went wrong. Please try again later.’. Fixed with CU#1 or later AND with SP1…

Update: As it turns out, this issue is solved in CU#1 for SQL Server 2016 and later CUs (#2 and #3), and in SQL Server 2016 SP1 as well. To be more specific, KB3172981 ‘FIX: Home page of SSRS web portal becomes empty after you enable My Reports feature in SQL Server 2016’ fixes this issue.

In one of my test labs I am rolling out System Center 2016 on Windows Server 2016, using core editions as much as possible in order to learn how to handle those systems. On one box though I had to install the GUI edition of Windows Server, since it hosts the SQL Server Reporting Services (SSRS) 2016 instance for SCOM.

Afterwards I successfully installed SCOM 2016 RTM Reporting. All went well, and soon the SCOM Console showed the Reporting ‘wunderbar’ and all the reports. Nice!

However, when I tried to open the web portal url of the SSRS 2016 instance, I got this error: Could not load folder contents. Something went wrong. Please try again later.

This really surprised me since the SCOM Reporting component works great! Time for some troubleshooting.

How to troubleshoot SSRS 2016
Thankfully SSRS 2016 logs quite a lot by default. The best approach is to clean up the ‘old’ logs and start fresh. This way you’re looking at the real issues at hand and not old issues.

Therefore I stopped the SSRS 2016 service on the problematic box (SQL Server Reporting Services ([SQL INSTANCE NAME])), SQL Server Reporting Services (MSSQLSERVER) in my case, and deleted all the logfiles present in the folder ~:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\LogFiles.

Then I started the SSRS 2016 service. Once started, two new logfiles are created:

  • Microsoft.ReportingServices.Portal.WebHost_MM_DD_YYYY_HH_MM_SS.log;
  • ReportServerService__MM_DD_YYYY_HH_MM_SS.log.

After starting the SSRS 2016 service, I started IE on the same box and opened the web portal url of SSRS 2016. And again I was thrown the previously mentioned error.

Time to check both logfiles. As it turned out, only one logfile contained an error message, Microsoft.ReportingServices.Portal.WebHost_MM_DD_YYYY_HH_MM_SS.log: OData exception occurred: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentOutOfRangeException: The UTC time represented when the offset is applied must be between year 0 and 10,000.
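Sifting through these logfiles by hand gets tedious quickly. As a side note, a small script can pull out the suspect lines for you; this is just a sketch of the approach I used, where the log folder is the default path mentioned above and the search terms are my own choice:

```python
import glob
import os

# Assumed default SSRS 2016 log folder (the instance name may differ on your box)
LOG_DIR = r"C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\LogFiles"
PATTERNS = ("exception", "error")  # case-insensitive search terms


def find_suspect_lines(log_dir=LOG_DIR, patterns=PATTERNS):
    """Return (file name, line number, line) tuples for log lines matching any pattern."""
    hits = []
    for path in sorted(glob.glob(os.path.join(log_dir, "*.log"))):
        with open(path, encoding="utf-8", errors="replace") as handle:
            for number, line in enumerate(handle, start=1):
                if any(pattern in line.lower() for pattern in patterns):
                    hits.append((os.path.basename(path), number, line.strip()))
    return hits


for name, number, line in find_suspect_lines():
    print(f"{name}:{number}: {line}")
```

Running this right after restarting the service and reproducing the error points you straight at the OData exception in the WebHost log, without opening each file manually.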

A BUG?!!!
Time to Google! And soon I found this SSRS 2016 bug on Microsoft Connect:

What?! The SSRS box having this issue is set to the time zone UTC+1:00:

Could it really be this?!
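It could. The error text suggests what is going wrong under the hood: somewhere a timestamp at the very edge of the representable range gets a non-zero UTC offset applied, which pushes it outside that range. The same class of overflow can be illustrated with a small Python analogy (this is not the actual SSRS code, just the idea):

```python
from datetime import datetime, timedelta, timezone

# The earliest representable timestamp, stamped with a UTC+1:00 offset.
earliest = datetime.min.replace(tzinfo=timezone(timedelta(hours=1)))

# Converting it to UTC means subtracting the one-hour offset, which would
# land before year 1 -- outside the representable range, so it blows up.
try:
    earliest.astimezone(timezone.utc)
except OverflowError as error:
    print("Overflow:", error)
```

With a zero offset (plain UTC) the same conversion succeeds, which matches the observed behaviour: no offset, no error.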

Time to test it
So I set the Time zone to UTC without ANY offset:

Afterwards I restarted the SSRS 2016 service, and reopened the web portal url of SSRS 2016. Kabam!!!

So this is a serious bug!!! Sad to see that the bug is closed, reason being External:


Basically meaning that SSRS 2016 out of the box doesn’t have this issue.

Just to be sure, I tested a new SSRS 2016 instance on a new VM (SSRSTestBug), with SSRS 2016 Reporting installed WITH a UTC offset (UTC+1:00). On this instance the home folder of the SSRS 2016 web portal is neatly loaded and shown:

CU#1 fixes this issue
A fellow MVP (Tao Yang) pointed out to me that he doesn’t experience these issues. But since he lives in Australia, his SSRS server also has an offset. And yet it works for him.

As it turned out, he runs SP1 for SQL Server 2016. So I investigated, and this issue is fixed in CU#1 for SQL Server 2016 and later CUs (#2 and #3). SP1 for SQL Server 2016 contains CU#3 (which includes CU#2 and CU#1) and other fixes as well.

To be more specific, KB3172981 describes this issue and is folded into CU#1 and later, and into SP1 as well. As a test I rolled out CU#1, which fixed this nagging issue. SP1 is scheduled to be applied later on.

The status of the bug report threw me off, since it didn’t show it was fixed, but ‘Closed as external’ instead. So apparently, this report hasn’t been updated accordingly.

However, I could at least have applied CU#3 for SQL Server 2016, which would have fixed it for me. Of course, I could have rolled out SP1 instead. But I am always careful with rolling out SPs for SQL since they might break compatibility with SC components.

Thursday, January 12, 2017

Hoax or Fact: ‘SCOM Is Dead’?

Update 01-13-2017: Wow! This posting got many good comments, also from Damian himself. I really appreciate all the comments and as promised, I’ve updated my blog with the comments, including the names of the persons who commented. All the comments are found at the end of this posting, in a table.

Why this posting?
Yesterday evening I attended the first WMUGNL webinar of 2017, titled Head to Head: Azure Pack and Azure Stack. In this webinar the presenter Damian Flynn talked about the differences between Windows Azure Pack (WAP) and Microsoft Azure Stack (MAS), and when to choose for WAP or MAS.

During the same presentation he stated multiple times that ‘SCOM is dead’. And that statement made me think. Is SCOM really dead right at this moment? Are WAP and MAS the nails in the coffin of SCOM? And why is SCOM dead? Has SCOM outlived its purpose? Or is there more to it, and does it need more perspective?

Time to investigate. Also because I highly respect Damian Flynn and know that he doesn’t just say something because it has a nice ring to it.

Some background information
In the days SCOM saw the light, the IT world was pretty well defined. All IT related hardware (compute, storage, networking) resided in one or more datacenters and was owned (or rented) by the ‘end user’, meaning the companies using it.

Cloud was still a buzzword and something not too many people really comprehended, nor suspected it would become what it is today (and it is still growing in size and capabilities).

Back to the datacenters. On top of all that IT ran all the other layers, with the applications at the top, processing the data for the end users and enabling them to run their business.

As such, monitoring was required and SCOM was one of those tools allowing IT departments to monitor all workloads, from the hardware up to the whole application chain.

The cloud is landing…
Maintaining a datacenter is quite a challenge, with life cycle management and so on. So a new type of datacenter was born, the Software Defined Datacenter (SDDC). In the meantime the cloud was ‘landing’ in the IT shops. Excuse my pun, but what I mean is that step by step, companies started to see the advantages of ‘IT on demand’ enabled by the cloud. And IT departments had to follow suit, in order not to lose their (internal) customers AND to battle shadow IT. When you can’t beat them, join them!

As such WAP was born, allowing customers to build their own on-premise (better: in their datacenters) IT-as-a-Service offering, driven by System Center technologies and connected to the public cloud, Azure. Not really a blessing, since every single bit of the System Center stack is quite a challenge to master AND to maintain. But the power it delivers made up for a lot of it. When combined it becomes even more of a challenge to maintain properly.

Close but no cigar…
And yes, the look & feel of WAP felt like Azure (classic, that is), but under the hood a whole different set of technologies was being used. Whoever thought that Microsoft uses System Center in their own datacenters, think again… One of the reasons being that not a single System Center component (except SCCM, but that’s a breed of its own and NOT part of WAP) was ever architected to manage that kind of scale (thousands and thousands of servers, multiplied by their workloads).

Meaning, Azure and WAP never were (and never will be) the same, making the connection between them a challenge to maintain AND to make full use of. As such, there was/is a serious disconnect between the two of them.

Say hello to Azure in your own datacenter
So it was time for another approach. Why not really extend Azure into your datacenter? Meaning Azure and its underlying technologies not only live in the public cloud, but also in your datacenter, allowing for a single consistent experience, usage AND management.

As such, not only the look & feel will be the same, but under the hood the same technologies will run. Of course, not all Azure based services will run in your datacenter, but the ones that do will be the same. And it is to be expected that the number of Azure based services running in your datacenter will only grow in the time to come. Already MAS TP2 offers more services compared to MAS TP1.

And now witness the birth of MAS. Still in the making, but progressing to GA as we ‘speak’. But please, do not forget the delivery model of MAS. Whereas you can install and configure WAP yourself, MAS comes pre-installed on server hardware from three (for now) hardware vendors to choose from: Dell, HP or Lenovo.

And out of the box, MAS monitors itself. Meaning not the hardware (it is to be expected that the vendors will deliver additional hardware/software for that), but the Azure based services making up MAS are monitored in the same way Azure is capable of monitoring itself.

In WAP SCOM is the monitoring tool, delivering insight from the hardware up to the application layer.

Back to todays world
So in the world of MAS, SCOM doesn’t play a role at all. Monitoring is delivered through other mechanisms, and with a whole different philosophy behind it. Only when something is really broken AND you’re expected to be capable of fixing it will you be alerted. In other cases, you won’t be alerted, but the vendor and/or Microsoft will be instead. Perhaps you’ll get a notification.

So in that respect, SCOM is dead. Or simply not present. However, one might debate whether the lack of presence equals death. But there is quite a catch here. More about that later on.

When looking at WAP, SCOM isn’t dead. SCOM is still part of it and will deliver its monitoring functionality. However, when WAP is based on Windows Server 2016, some functionality in the past delivered by SCVMM and SCOM is now ‘baked’ into Windows Server 2016 itself. The Failover Cluster has become a lot smarter and as such far more capable of using the available resources in a smarter way. So in that respect the role SCOM plays is diminished.

Shifting roles
However, what is true monitoring? Meaning, do you want to know that server X or database server Y just stopped working? Or do you want to know that application flow X is impacted BECAUSE database server Y just stopped working?

Many IT shops (and their customers) choose the latter. Which makes perfect sense. And this is where SCOM comes in and still plays a significant role, especially in today’s world where many workloads are hybrid, running partially on-premise and for the other part in the cloud.

Sure, MAS delivers monitoring, but in a more siloed approach. The whole chain making up the business critical applications isn’t covered. And this is where additional monitoring is required. Either delivered by SCOM or other tools, perhaps OMS. However, the latter has made itself rather unpopular with the new licensing model. Besides being unclear, the costs have grown significantly.

So when already having a SCOM environment in place, it’s easier to extend the monitoring footprint so it covers the hybrid business critical application(s) as well.

So time for the recap. First an overview.

SCOM is lacking presence (feel free to translate it to ‘dead’) when:

  • Running MAS and monitoring is only targeted at the silos (compute, storage, networking, platform, Web, SQL and so on);
  • Monitoring across all these silos, making up one or more business critical applications, isn’t required;
  • MAS and Azure are the ‘only’ workloads and there is no on-premise SCOM environment at hand. In such a case I would strongly advise against a SCOM deployment, but advise for OMS instead. What’s lacking today will be there in the time to come, since the drive behind OMS is huge;
  • Running MAS and hardware monitoring is required. That ‘gap’ will be dealt with by the vendor delivering MAS (Dell, HP or Lenovo), since the hosts running MAS won’t allow any ‘alien’ software, by running lockdown policies. All non-MAS software will simply be blacklisted.

SCOM is still alive and kicking when:

  • Running WAP combined with public cloud (Azure);
  • Running a hybrid environment (on-premise/private cloud/public cloud);
  • Monitoring across all the previously mentioned silos, making up one or more business critical applications, is required;
  • Other applications are running on-premise only and require monitoring as well;
  • The company you’re working for is moving towards the cloud, whether MAS or Azure based, and while in transition, monitoring is required.

BUT don’t be sentimental:

  • SCOM is ‘old school’ technology, based on SCOM 2007 which saw the light in (duh!) 2007;
  • In 2007 the cloud was branded as a hype to be, a buzz word. Not many expected it to become the next big thing;
  • SCOM isn’t made for the cloud, it can’t encompass the sheer size and numbers;
  • New IT delivery models demand/require a new way of monitoring.


  • System Center 2016 (and SCOM 2016 as such) has its mainstream support end date set to the 11th of January 2022;
  • So until that date SCOM still has the support of Microsoft and will grow and develop further (within the previously mentioned borders);
  • Meaning ongoing investments in SCOM are still valid and do make sense;
  • SCOM still delivers monitoring capabilities yet lacking in OMS;
  • When already running SCOM it makes sense to upgrade to 2016;
  • When not running SCOM, think TWICE whether it’s worth the investment, or perhaps wiser to move on to OMS;
  • Make a decision based on the (monitoring) requirements and not because you want to implement the newest sexy thing.

Putting it all together:
Basically, what I am trying to say is that Damian Flynn didn’t talk rubbish, but that his statement requires more perspective. I hope I’ve provided that much-needed perspective (mostly for myself).

Feel free to comment on this posting and share your thoughts. When you provide your name AND your feedback is well founded, I’ll update my posting with your feedback AND your name.

Comments this posting got so far
As stated earlier, feel free to comment. Here are the comments this posting got so far. And of course a BIG thank YOU for taking the time to comment. Much appreciated!

Just like Damian commented: ‘…This is why we have this amazing community, so that we can talk, share, learn and stir up Debates :)…’. Amen to that Damian!

Damian Flynn:

Thanks Marnix

I sure did not want to stir up a storm to suggest that you need to drop any investments in SCOM, or even stop investing in SCOM; I am aware of a number of organizations whom are currently active in creating bespoke management pack, and will be in Stuttgart next week providing input on a commercial MP being developed for SCOM. So honestly that was not my intention

My perspective, is aligned tightly with the brilliant perspective you have presented in this post; The plumbing on the new fabric for clouds build in Azure Stack or Open Stack is no longer dependent on foundations like System Centre; and its health, orchestration, and core functions have truly been reimagined. From that point SCOM is dead, but so to in that vein is VMM.

However, In the spirit of the presentation, WAP 2016, is still a very powerful proposition; and its foundation is VMM, SCOM and SPF; and all 3 of these components are mandatory for a full deployment; in this perspective it’s a Long Live SCOM chant.

The middle Ground however is that if you want, or need to maintain that single pain of glass view of your environment, and your chosen product to deliver this is powered by SCOM; then even if you are going to adopt MAS or OpenStack as your new Cloud Platform; all that will be required is a new Management Pack; which will communicate directly with the Health Services of these platforms.

And of course, we have ignored Application Insights, and OMS; neither of which are on any road map I am aware of to be moved on premise. Or, if you are working in an Air Gapped environment, where online; or Hybrid are not options to consider; then again we will be quite happy to have good support until mid-2022 with SCOM..

As I mentioned in the webinar, there are a LOT of moving parts now, and the choices for what cloud platform is no longer a simple A, B, C or D decision. As we highlighted with the Migration from VMM 2012 to VMM 2016; this challenge also stands true for SCOM also. Identify the correct sequence, understand what works and what will not, what is continued in the support matrix, and what is no longer considered supportable...

This is why we have this amazing community, so that we can talk, share, learn and stir up Debates :)

It’s all opinions.
Best Regards



When it comes to the future of SCOM, I think we also need to look at Microsoft's actions. From what I can see, SCOM2016 is a very incremental upgrade....more akin to an R2 release than a whole new version. Compare that to the rate of innovation and development of OMS in the same time frame and it becomes clear where Microsoft is devoting their engineering resources.

Microsoft also has a financial motive. If you notice, OMS was originally part of the System Center Suite but was taken out. Most companies are licensed for the entire System Center suite as part of Software Assurance licensing. But OMS requires a separate add-on license so it provides them with an additional revenue stream.

Also, look at what happened with the Bluestripe acquisition. Instead of integrating that topology-mapping technology into SCOM they instead put it into OMS. As far as I know there are no plans to incorporate the Bluestripe technology into SCOM. That to me is the biggest disappointment and tells me that any further improvements to SCOM will be of an incremental nature while products like OMS continue to see rapid development and innovation and will start overlapping more and more with SCOM's core strengths.

Thursday, January 5, 2017

WMUG NL Webinar #1 2017: ‘Head to Head: Azure Pack and Azure Stack’

Fellow MVP Damian Flynn will host the 1st webinar of 2017 for WMUG NL, titled Head to Head: Azure Pack and Azure Stack.

This webinar is scheduled for Wednesday the 11th of January 2017 at 20:00 hours, local time (GMT +2). In order to convert it to your own local time, go here.

Since this webinar is hosted by Damian Flynn it will be good AND in English. And because of being a webinar, your own location doesn’t matter. All you need is a working internet connection and you’re in!

Session abstract (taken directly from the WMUG NL website):
’…Azure Pack is built on the proven System Center stack, and will be supported by Microsoft until 2022; Azure Stack brings the public resource manager on premise, with fabric resource providers. Learn how these products work, what they share in common, and how they differ. Each solution has a place in our data-centers, learn which is the correct solution for your implementation, and why!…’

For anyone working with System Center and/or going to work with Azure Pack I highly recommend this session.

Go here and reserve yourself a place!

Monday, January 2, 2017

Free Ebook: ‘Understanding Azure - A Guide For Developers’

Recently Microsoft released a new FREE e-book, titled Understanding Azure—a guide for developers. It’s all about how to develop on Azure from day one using common app design scenarios.

This free e-book can be downloaded from here.

How To Drive Your IT Career To The (Microsoft) Cloud

Ever wondered about the latest cloud developments and how they will affect your own IT career? And when? And perhaps even wondered where it leaves you? And what new opportunities there are for you?

If so, good. If not, it’s about time to start asking yourself those questions, simply because cloud isn’t marketing mumbo jumbo. It’s here, it’s getting bigger and YES, it will affect your IT career, no matter what.

So better prepare for it. One way to do this is to visit the newly opened website by Microsoft, the Microsoft IT Pro Career Center.

What it is and provides? Taken from the same website: ‘…is a free online resource to help map your cloud career path. Learn what industry experts suggest for your cloud role and the skills to get you there. Follow a learning curriculum at your own pace to build the skills you need most to stay relevant…’

I subscribed myself and started the advised learning curriculum.

Oh and did I mention it already? It’s FREE!!!

8th MVP Award Received

Yesterday I received an e-mail from Microsoft confirming my 8th MVP Award:

Wh00t! This is awesome! Thank you Microsoft! A very good start of 2017!