Thursday, November 26, 2009

OpsMgr Agent push installation goes wrong. Error Code: 80070035

When running a Console-based installation of the OpsMgr Agent, this error might appear:

The Operations Manager Server cannot process the install/uninstall request for computer xxxxxxx due to failure of operating system version verification.

There is a good posting about how to troubleshoot Console-based OpsMgr Agent installations, to be found here. There is also a list of the ports and services needed for an Agent push to run successfully.

However, at a customer's site I bumped into the above-mentioned error. It happened on Windows Server 2008 systems where a tightened security policy is in place. This GPO had been created by running the Security Configuration Wizard, so some additional tweaking was needed in order to get the Agent push to run.

All the needed ports had been opened on the firewall and the needed services had been enabled as well, through a GPO. The GPO had been applied successfully. But still no luck: the earlier mentioned error kept appearing.

After some searching it turned out that the Security Configuration Wizard had also disabled the Server service. After enabling that service through the GPO and applying it, the Agent push went fine.

So when Error Code: 80070035 appears during a Console-based installation of the OpsMgr Agent, and the targeted systems are Windows Server 2008 systems hardened with the Security Configuration Wizard, check whether the Server service is running, besides checking the other needed services and opened ports as stated in Kevin Holman's posting.
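A quick way to check this remotely is a PowerShell one-liner; a minimal sketch, assuming PowerShell 2.0 on your workstation and that remote service enumeration is allowed ('SRV01' is just an example name):

  # 'Server' is the display name; the underlying service name is LanmanServer
  Get-Service -Name LanmanServer -ComputerName SRV01 | Select-Object Status, Name, DisplayName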

Wednesday, November 25, 2009

Windows Server 2008 and .NET Framework versions. A bit like the Matryoshka doll

You know Matryoshka? The Russian nested doll: one big doll contains a smaller one, and so on.

The same thing can be said about .NET Framework on Windows Server 2008, where version 3.0 is installed as a Feature:
[screenshot]

But this version ALSO contains version 2.0. Even though it is not shown in Programs and Features in the Control Panel, under the hood this version is present. Go check it out yourself in the folder 'C:\Windows\Microsoft.NET'. There are two folders present: one for the 32-bit architecture and one for the 64-bit architecture. Check the x64 folder and this is what you will see:
[screenshot]

Okay, you have made your point. So what?

Suppose you run into an issue with OpsMgr where high handle counts are found on the Management Servers (or on servers with an OpsMgr Agent). These servers are x64 based. Eventually your Management Server(s) and/or RMS will end up with a red flag: a critical Alert will be raised about a handle count which is way too high.
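To get a quick feel for which processes are consuming the handles, a one-liner like this helps; a minimal sketch, assuming PowerShell is present on the server:

  # top 5 processes by handle count; the OpsMgr processes (HealthService, MonitoringHost) are likely candidates here
  Get-Process | Sort-Object Handles -Descending | Select-Object -First 5 Name, Handles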

This can be caused by an issue with .NET Framework 2.0 SP2. For this Microsoft has released a hotfix, to be found here: http://support.microsoft.com/default.aspx/kb/968760. This hotfix also applies to Windows Server 2008 SP2!

This hotfix does NOT apply to Windows Server 2008 R2.

So far so good. Suppose the servers are Windows 2003: no problem whatsoever. But what if your servers are Windows 2008 based? As long as your .NET Framework is up to date, no worries. But suppose you have installed .NET Framework 3.0 as a Feature and haven't updated it since?

The installation of the hotfix will NOT run, since it needs SP2 for .NET Framework 2.0 to be in place. OK. So you try to install SP2 for .NET Framework 2.0 separately? Nope. That won't install either, since on Windows Server 2008 the .NET Framework 2.0/3.0 binaries are part of the OS and can't be serviced by the standalone installer. But still, SP2 needs to be in place, otherwise the hotfix will not install. This is the way to go:

You already installed .NET Framework 3.0 (which contains version 2.0 as well) as a Feature > install .NET Framework 3.5 SP1 (which contains SP2 for version 2.0 as well) > install the hotfix.
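Afterwards you can verify that .NET Framework 2.0 really is at SP2 level by reading the Service Pack value from the registry; a minimal sketch, assuming PowerShell:

  # returns 2 when SP2 is in place
  (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v2.0.50727').SP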

Advice:
Keep .NET Framework updated on all your servers, all the time. Newer versions are installed alongside previous versions, so software depending on a particular version is most likely not affected. Many times newer versions contain updates/SPs for previous versions as well. These updates/SPs are important.

Monday, November 23, 2009

OpsMgr. Where the technology ends and the organization starts. Part II: I think I saw an Alert...

--------------------------------------------------------------------------------- 
Postings in the same series:
Part   I:  Introduction.
Part III:  Beyond The Alert.
---------------------------------------------------------------------------------

Before monitoring starts
As stated in the previous blog posting in this series, at a certain stage in the monitoring process the technology ends and the organization kicks in. Actually, that isn't entirely correct. The organization has to be in place during the whole monitoring process as well, of course. Even better: before the monitoring starts.

Say what?
Let me explain in more detail. Suppose you have an IT environment in place. In that environment certain Line of Business applications (LOBs) are running. These LOBs must be monitored. But how can you properly monitor them without knowing exactly which elements make up these LOBs? Unless the organization has reached agreement on that, proper monitoring of these LOBs won't be possible at all.

So before monitoring starts, the organization must be in place AND also have a list of what to monitor, based on its core business, its LOBs and the elements that make up those LOBs. Only then is proper monitoring possible. Sometimes I see organizations where the IT department is told to monitor LOBs without having a clear definition of which elements are part of those same LOBs. Per department (development/testers/users/management for instance) there are totally different views on it. This results in monitoring too many, too few or the wrong objects, which has a negative impact on the monitoring process as a whole. People think they have it covered, but actually they haven't, and they are confronted with it at the worst possible moment…

Advice 01:
Before creating a Distributed Application or a whole set of other rules/monitors in order to monitor a LOB, make sure there is mutual agreement on which elements make up that particular LOB. Also check the severity levels of the Alerts and their priority settings, so they reflect the status of the monitored LOB and match the organization's priorities. (Never raise a Critical Alert when a Warning is sufficient.)

Monitoring is running
OK. The LOBs got defined just fine and are being monitored properly in OpsMgr. Everyone is happy. Let's switch off the Console and look the other way! Wait! This is not the way to go about it. But nor is putting someone in front of the OpsMgr Console, waiting until an Alert comes in. (Don't you dare take your eyes off the screen…)

Now what?
Divide and conquer! Simple as that. Let OpsMgr work for you and let the related departments work with OpsMgr. Create Diagram Views related to the departments' fields of work. Diagram Views are nice since they are a graphical display of the monitored LOBs/services/servers/objects. Use big-screen TVs for it so no one can ignore it. With a single glance one can see the status of, for instance, the Exchange/SQL/IIS environment or of the LOBs. When the Diagram View doesn't work for you, there are other solutions available, like Visio or Savision Live Maps.

On top of that, create specific Folders in the Console containing the Alert, Performance and State Views relevant to the departments. And scope these Folders so the departments only see what is relevant to their line of work. This will keep the Console neat and tidy, without too much information in it.

Also use different Alert Resolution States. Based on these, different Alert Views can be built which filter the Alerts on their Resolution State and a certain group of servers. Such an Alert View can be added to the folder relevant for a department, thus keeping the Alerts they see related to their line of work. With PowerShell the process of changing the Alert Resolution State can be automated, as shown below.
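A minimal sketch of such an automation, run from the OpsMgr Command Shell. It assumes a custom Resolution State with ID 15 ('Assigned to SQL team') has been defined beforehand in the Administration pane; the criteria string is just an example:

  # grab all new (ResolutionState = 0) Critical (Severity = 2) Alerts
  $alerts = Get-Alert -Criteria "ResolutionState = 0 AND Severity = 2"
  foreach ($alert in $alerts)
  {
      $alert.ResolutionState = 15                          # custom state, defined beforehand
      $alert.Update("Assigned to the SQL team by script")  # comment shows up in the Alert history
  }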

Every department has a manager. Use subscriptions here to get the manager of the department notified about Critical Alerts related to the responsibilities of that same department. This way the manager 'knows' about it and can manage the process of getting things fixed. Also, this way the manager gets into the loop earlier, and not when it is 'too late'.

Advice 02:
Visualize as much as possible. Use Distributed Applications and Diagram Views for this purpose. Scope these visualizations to the related departments. Use big-screen TVs per department so a whole department knows with a single glance how they are doing.
Visio or Savision Live Maps are good tools here as well.

Advice 03:
Use scoped Views in combination with Alert Resolution States. This will make OpsMgr as a whole more workable for the departments. They are not interested in becoming OpsMgr experts; they are interested in how their systems/services are doing. Provide that and you will see how happy they are with OpsMgr.

Advice 04:
Get the manager of the related department involved as well. Provide a read-only Console and reports which are automatically created and sent to him/her on a regular basis, and set up a notification model for the manager which, for instance, sends out only the Critical Alerts related to that department.

The next posting in this series will be about using Notifications and how to connect to ticketing systems (not a technical approach). How to close an Alert will also be described in this series.

Duh! Right-click and select Close. Nope. There is more to it, at least when you want to use OpsMgr to its fullest. :)

Friday, November 20, 2009

PowerGUI with PowerPack for OpsMgr

I came across this tool on the Microsoft TechNet Operations Manager Forums. It is a free GUI for PowerShell and it also has an extension for OpsMgr, aka a PowerPack.

When installed and correctly configured, one can get a quick look into the OpsMgr environment. Also, certain tasks can be executed. Some screen dumps:
[screenshot]

A quick run of a PS command:
[screenshot]
In the screen: command selection and the parameters to be defined. Click OK.

Results:
[screenshot]

There is also a script editor with a debugger. It seems to work great (I added an error just to see if it works):
[screenshot]

I haven't had time yet to take a deeper dive, but the first impressions look very promising! The PowerPack can be downloaded here. PowerGUI itself needs to be installed first before the PowerPack can be used; PowerGUI can be downloaded from here.
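To give an idea of the kind of one-liners involved (whether run through PowerGUI or the plain OpsMgr Command Shell), here is a small sketch, assuming the OpsMgr snap-in is loaded:

  # list all agent-managed computers and their health state
  Get-Agent | Sort-Object PrincipalName | Select-Object PrincipalName, HealthState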

SRS, UAC, IE ESC to name a few abbreviations and a puzzle…

At a customer's site I bumped into this puzzling issue. All servers are Windows Server 2008 SP2 based, with User Account Control (UAC) enabled. On one server I installed SQL 2008 SP1 with SQL Reporting Services (SRS). Also, Internet Explorer Enhanced Security Configuration (IE ESC) is enabled by default for all users, admins included.

After having applied SP1 for SQL 2008 I always check SRS: does it work? So I started IE and surfed to this address: http://localhost/reports. This is what I got:
[screenshot]

Hmm. Looks almost good. SRS is working. Wait, some buttons are missing!? Now what? So I set IE ESC to enabled for Users only. Still no luck. I set it back. I disabled Protected Mode. Still no luck. Set IE ESC for Users only again, so I had both adjustments in place. Still no luck.

So I checked the settings in IE:
[screenshot]

Seems OK to me. Then I added the site (http://localhost) to the list of sites for the Local Intranet zone:
[screenshot]

I closed IE, started it again and surfed to the SRS website. Still no luck! What about approaching this website from another server? I logged on to another server with the same credentials, started IE and surfed to the SRS website residing on the SQL server. Now I used the name of the server instead of localhost (duh!). This is what I got:
[screenshot]

That looks better. But it is strange, since IE is set to this:
[screenshot]

And as we all know, Trusted Sites is more protected (more content is blocked) than Local Intranet, the setting on the SQL server itself! So why does it work from any other server but not from the SQL Server itself?

So, time to use the internet. I found this posting by an SQL/SRS MVP, but still no luck…

A colleague of mine came up with an idea: let's run IE on the SQL server with elevated permissions. So I did. This is what I got:
[screenshot]
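For the record, starting IE elevated can also be scripted; a minimal sketch, assuming PowerShell 2.0 is present (it triggers the usual UAC prompt):

  # launch Internet Explorer with an elevated token
  Start-Process 'iexplore.exe' -ArgumentList 'http://localhost/reports' -Verb RunAs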

Okay, it is working. But I am still puzzled why, without elevated permissions, it works from any other server AND with higher security settings. Anyone out there with an explanation, please step up and tell me, since I am still puzzled here.

Good to know: Reporting installed fine, since on Windows Server 2008 I run every installation with elevated permissions by default.

Thursday, November 19, 2009

OpsMgr. Where the technology ends and the organization starts. Part I: Introduction.

--------------------------------------------------------------------------------- 
Postings in the same series:
Part  II: I Think I Saw An Alert.
Part III: Beyond The Alert.
---------------------------------------------------------------------------------

This series of postings won't be about technology at all. Many postings have covered that aspect and future postings will do so as well. This series is about the way an organization could connect/interact with OpsMgr, and the way the IT department works with it. Like any other tool, a hammer is only a hammer. It becomes something much more in the experienced hands of a craftsman who uses it to construct furniture or build a house. (You don't want to see me with a hammer. At least my wife doesn't. So that toolbox is out of my sight. And to be frank, I am happy with it.)

The same comparison can be made for OpsMgr. However, this is not a single tool, but a set of tools, like a toolbox. This toolbox certainly has added value for any organization. When used properly, it saves organizations much costly (down)time and many unavailable servers, services, processes and LOBs, and it enhances the overall quality of the whole IT environment to a great extent. Thus it makes the organization as a whole more robust and competitive.

But as with any other tool/toolbox, one MUST read the enclosed user guides and comprehend them. And this is just a part of it. When the guides are read AND understood, the other workflows and processes must be in place as well. Otherwise the added value isn't that big at all.

Huh? What am I talking about? Let me give an example. Suppose you have a hammer. You understand and know how to use it. But there are no nails. No wood. Simply because no one thought about ordering them and getting them where they are needed. Nor is there any agreement on whether to construct a table, a chair or a house. These latter items (nails, wood, the ordering process and the agreement on what to build) can be looked upon as the processes in any IT department. Let's bring it closer to home.

An Alert pops up in the OpsMgr Console. Okay. Monitoring is working. Great! But now what?

How to go about it? Are there any processes in place to get it fixed? Or do we look the other way, hoping that the Alert, and the underlying cause which triggered it, disappears by itself? How to keep track of the Alerts and the processes? How does one know that an Alert has already been noticed AND acted upon? When does an Alert get escalated? And how? When will an Alert be closed? And how?

As you can see, none of these questions is directly related to technology, but more to the way the IT departments are organized. The way ITIL/MOF is shaped/translated to everyday life.

I will certainly not pretend to have all the answers. Nor will this series cover MOF/ITIL in any detail. I mean, there are tons of books/online guides out there doing (or trying, one better than the other) that already, and doing a better job of it than I ever will. So I won't go there. However, in this series I will give the outlines of what needs to be in place in order to get the best out of OpsMgr/SCOM.

How many postings will this series cover? That I do not know. Just keep an eye on my blog and you will see.

Wednesday, November 18, 2009

MUI (Multilingual User Interface) issue AND wrong targeting when collecting performance data

At a customer's site, located in the Netherlands, a systems engineer had built a performance collection rule, based on this blog posting of mine. But no data whatsoever came in. Also, the Performance View stayed empty. Nothing to be seen. For another, almost similarly constructed rule, data came in, but a detailed Performance View on a per-server basis wasn't possible. Only the total result of all servers was being shown. (At least, that is what they thought…)

So when visiting them I checked certain items in order to find the cause(s) behind it.

Issue 1: No performance data is coming in at all
The first thing I did was check the OpsMgr event log of the Management Server. Many times people dive straight into the UI of OpsMgr, but believe me, the OpsMgr event log tells you much more.

Oops! Multiple EventIDs 33333 were to be found:
[screenshot]

This means that somehow data cannot be written to the Data Warehouse. This can have multiple reasons. Time to dive deeper. So I checked the Rule itself:
[screenshot]

Looking at the Object and Counter, the localized (Dutch) names are used here. The targeted servers are US English based Windows 2008 servers, using MUI (Multilingual User Interface). So when one connects to the OS, the Dutch language is used for the interface. When the systems engineer built the rule, he ran the SCOM UI from his own (also Dutch) system. So when selecting the Object and Counter, the Dutch names were shown.

However, a MUI covers the interface, not the underlying layers, which are still US English based. And here the culprit was found. After adjusting it (I ran the SCOM UI from a Management Server which runs the English Server OS), the Object and Counter with the correct names were selected:
[screenshot]
(Of course, the information can be entered manually as well, but that is more prone to errors/mistakes.)

So now the rule was set correctly. From then on the collection of data ran fine and data came in. However, in the Performance View the collected data was shown as a single line. There was no option to select one or more servers.

The EventIDs 33333 were gone now.

Time to move on to the second issue.

Issue 2: No detailed Performance View is possible. Only ALL (?) data is shown as a single line
The cause here turned out to be the way the rule was targeted. Ideally, a new class is created. But in smaller environments this can be a bit too much of an effort. So another approach was used here. The rule was targeted directly at a group of servers. The systems engineer thought the rule would collect the performance data of all the servers residing in that group.

That is not true, however. When a rule/monitor is targeted at a group, the group will NOT be enumerated; instead, the rule/monitor runs against the object hosting that group, which is the RMS. So only the collected data of the RMS was being shown in the Performance View.

So this is not the way to go. When one doesn't want to make a new class, that is OK, but then another approach must be used. First, target the rule (or monitor) at a proper existing class. For this rule I chose the 'Windows Server 2008 Operating System' class. When building this rule, I disabled it by default, so the rule won't run at all.

Afterwards I enabled the rule by using an override targeted at a group which is dynamically populated based on DNS Name. Now this rule will only run on the servers residing in the group the override is targeted at. Now all is well:
[screenshot]

Lessons learned:

  1. When using MUI and creating performance rules, make sure to use the correct language for the Object and Counter names. Otherwise no data will be collected. (See the snippet after this list.)

  2. When targeting a rule/monitor without creating a new class, DO NOT target it at a group. Target it at a proper existing class, disable the rule/monitor by default and enable it using an override targeted at a group.
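Regarding lesson 1: a handy way to look up the US English counter names on a MUI system is the 009 (English) string table in the registry, which is present regardless of the installed UI language; a minimal sketch, assuming PowerShell ('Memory' is just an example filter):

  # the 009 key holds the English counter names as alternating index/name pairs
  $perflib = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\009'
  (Get-ItemProperty -Path $perflib).Counter | Select-String 'Memory'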

Tuesday, November 17, 2009

SCOM WebConsole doesn't show Informational Alerts

17-11-2009 Update:
This is a repost of a blog posting which was originally published on the 14th of February 2009. It contained some minor issues which have been corrected. I have tested it and it works with OpsMgr SP1 and R2 alike; the architecture (x86/x64) makes no difference either.

The Web Console shows only Critical and Warning Alerts. This is default behavior; Informational Alerts aren't shown.

When one wants to see the Informational Alerts as well, one has to add an entry to the web.config file (location: ~:\Program Files\System Center Operations Manager 2007\Web Console).

Add this line to the <configuration> <appSettings> node:

<add key="AlertSeverity" value="2" />
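For context, the relevant part of the web.config will then look something like this (a sketch; any other existing keys in the file stay as they are):

  <configuration>
    <appSettings>
      <!-- existing keys remain untouched -->
      <add key="AlertSeverity" value="2" />
    </appSettings>
  </configuration>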
See screen dump:
[screenshot]

From now on the Web Console will show Informational Alerts as well:
[screenshot]

Tech-Ed Berlin 2009: Total Recap

As promised, here is the article about the road ahead, grouped per product.

However, it is based on my personal interpretation. Since I am only human, I am prone to error. I do not pretend to know it all, nor to possess the ability to foresee the future. Nor does this posting present Microsoft's view in any way. It is just me thinking out loud.

Having said all that, here it comes.

Overall impression
Microsoft is seriously taking care of its business. The speed in the development of new products and updates to its software portfolio is most impressive. And yet speed is not the only keyword here; it happens in conjunction with quality, customer feedback and trends. So Microsoft isn't just developing software, but developing it based on a vision, input from its customers and trend analysis. Also, to put it more bluntly, they 'put their money where their mouth is'. Microsoft doesn't only talk about Clouds but invests heavily in them. How else does one describe Microsoft deploying 10,000 servers per month? So the Cloud is coming. It is going to happen. The dynamics/vibes at Tech-Ed were evident. Much is going on, happening or about to happen. So no sleepy company, but one that is awake, alive and kicking! (In a decent manner, that is.)

Roadmap SC Products
Since one picture says more than a thousand words, just take a look at this:
[screenshot]
(All the people working on the SC product line have their work cut out for the months/years to come….)

More detailed information per product:

SCOM/OpsMgr
With the recent release of R2, SCOM/OpsMgr has grown to a level where it can compete with IBM TEC and HP OVO. On top of that, Microsoft has stepped up the development and publishing of MPs. For instance, the MP for Exchange 2010 became available almost in the same week that Exchange 2010 itself went RTM. The MPs for Windows 7 and Windows Server 2008 R2 also came out on a fast schedule. Besides that, the overall quality of the MPs has improved as well. The guides – a very important part of every MP – have also grown up. The days when an MP guide covered a mere 10 pages are gone.

Personally I think that in 2010 no really big changes are going to be seen for SCOM. I do not expect an R3 release. Perhaps an SP or a Roll Up Package, but no major release. Everything which could be put into an R3 release will be found in vNext instead. The reason is that SCOM has made very serious progress compared to other SC products like SCCM. In 2011, however, the newest version of SCOM will be released: vNext. This newest version will be at least as revolutionary as the move from MOM 2005 to SCOM 2007 RTM.

Why?

UI
First of all, the UI. Even in SCOM R2 it is a bit locked down. There is no way to add an additional Wunder Bar (like Administration/Authoring/Reporting) and the like. Looking at the demos given of SCSM and SCCM vNext, these SC products use the new UI, which is much more extensible by default. Here third parties can create their own Wunder Bar and add more functionality. So SCOM vNext will become much more a framework where all kinds of extensions are possible. Travis Wright, Lead Program Manager for SCSM, has provided me with a link about how to create your own Wunder Bars, to be found here. Thanks Travis!
[screenshot]
(Wunder Bars of SCSM Pre Beta, with an additional Wunder Bar 'Compliance and Risk Items', added by importing an MP developed by Microsoft.)

With MPs for that version of SCOM, a vendor has much more ability to extend SCOM, thus making SCOM an even more flexible product than it already is.

Monitoring Network Devices
From what I have heard during the sessions, SCOM vNext will have much more out-of-the-box ability to monitor network devices.

Cross Platform Functionality
The way SCOM monitors non-Microsoft platforms is based on two pillars: the *NIX Agent and the related MP. This way SCOM doesn't have to be partially rewritten to support additional *NIX distributions.

Data Warehouse
With SCSM vNext there is a big chance that this product will contain the Data Warehouse for the other SC products as well. So the Data Warehouse of SCOM is likely to move to SCSM. Whether this means the SCOM vNext related reports are only to be found in SCSM or in SCOM as well, I do not know.

Umbrella SCSM and SLAs
From all I have seen and heard during the sessions, SCSM will become the umbrella for the other SC products. So SCOM vNext will be integrated into SCSM to a great extent, thus making the whole workflow of IT management processes more accessible and manageable as well. Monitoring of SLAs will be enhanced greatly, since SCSM is going to be THE tool of all SC products for workflow management/automation. So SCSM will contain much more (background) information about the SLAs than SCOM ever will.

SCCM
With SP2 for SCCM out for just a couple of weeks, this product is growing up fast. Why?

R3
Yes. That's right. R3 will come out in the first half of 2010. Besides many improvements to features already present, it will contain a new client as well: the Power Management Client.
This will allow for centralized power management of the clients. Microsoft is going green! With this functionality, reports will be added as well:
[screenshot]

Support for Nokia E-series mobiles will also be added in this release. Customers with an SA are entitled to R3.

vNext
In 2011 SCCM vNext will be released. Here the changes are really huge. For instance, the hierarchy will be flattened out. Check it out yourself:
[screenshot]
(SCCM today)

[screenshot]
(SCCM vNext)

As I mentioned before, the UI will be the new one; the MMC is dropped. This UI will be extensible as well. Besides tons of changes, it will also contain Mobile Device Manager (MDM) 2008 SP1. From 2010 onwards MDM won't be available anymore as a separate product. Some other changes are:

  • SQL Reporting Services from SQL 2008 SP1 is being used
  • Self-healing capabilities of the clients
  • Peer-to-peer distribution
  • Web-based portal for the users
  • In-Console Alerts
  • New security model: Security Role AND Security Scope based
  • Support for anti-virus signatures: fully automated distribution of AV updates is supported!
  • Desired Configuration Management (DCM) WITH auto-remediation for non-compliant registry-, WMI- and script-based settings

Umbrella SCSM
With SCCM there will be tight integration with SCSM as well. For instance, when DCM finds one or multiple clients drifting from their desired configuration, this will show up in SCSM as well. That process can be automated even further, with auto-ticket creation AND assignment as well. This is just a small example of all the possibilities.

Recap SCCM vNext
In short, SCCM vNext will be all about user-centric management (targeted at the user), whereas today's versions are focused on system-centric management (targeted at the device).

System Center Service Manager (SCSM)
In the first half of 2010 this new product will go RTM. SCSM will be more than 'JATS' (Just Another Ticketing System). As Microsoft described it during one of the sessions I attended about SCSM, it will be a 'workflow automation platform'. So all IT processes as defined in ITIL/MOF can be defined here. Also, many connections will be made to SCCM and SCOM, where SCSM will become the umbrella of all SC products. SCSM will contain these knowledge areas (and the related workflows and forms) by default:

  • Incident Management
  • Change Management
  • Knowledge Management

Also, SCSM will be extensible to a huge extent, so companies can add their own Knowledge, Workflows and Forms. So SCSM will become a framework which can be modified to the organizational needs. For this, MPs are to be found here as well, using Classes, Rules and Workflows. Haven't we seen that somewhere else as well? :)

SLA monitoring
Here, with the data flows coming in from SCCM and SCOM, very detailed and enhanced SLA monitoring, forecasting and trending become a reality. Finally the IT management really knows what is going on.

Umbrella SCSM
As stated earlier, SCSM will become the umbrella where all other SC products come together. In the upcoming SCSM release this integration will already be strong. In the vNext edition it will go even further.

vNext
In 2011 SCSM vNext will be released. From what I heard, the integration with the other SC products will go even further. For instance, the Data Warehouse functionality for SCOM is likely to be hosted by SCSM.

Recap SCSM (2010 and vNext)
With SCSM Microsoft will release the umbrella for many SC products. This will become the starting point for IT management in order to see how their IT environment is doing on multiple levels. With the reporting capabilities and workflows, in combination with the incoming data flows from SCCM and SCOM, real SLA measuring becomes a possibility. So SCSM is not JATS but much more. With the initial release in 2010 and the vNext version in 2011, where the connectivity with the other SC products becomes even stronger, a real powerhouse is about to be born. With the drive, energy and resources Microsoft puts into it, it will certainly become a success.

DPM
I am not that specialized in DPM. All I can tell is that in the first half of 2010 the newest version of DPM will be released: DPM 2010. As far as I know, no updated version will be released in 2011. What does DPM 2010 add compared to the current edition? To be frank, I do not know exactly.

A respected colleague of mine, Matthijs Vreeken, is however an expert on that topic. So I will refrain from saying anything about DPM (2010), since it would be out of my league. Just keep a close eye on his blog, to be found here, since he can tell you much more about it.

SCE
This product will get a huge overhaul. In 2010 the newest version will be released: SCE 2010. (The names aren't that fancy. No names like SCE Fireball or SCE Dragon… :) Something I am glad about!) In 2011 there will be no vNext release. This is to be expected, since SCE is a product on its own, targeted at a certain market, without any real connections to the other SC products.

SCE 2010 will have these SC capabilities/functionalities:

  • Hyper-V Management
  • SCOM functionality
  • SCCM functionality

Since I have only seen some small demos of it, I can't tell anything more.

New KB article: Communication between an OpsMgr Gateway Server and Management Server fails when the Agent is installed on the Gateway Server

An OpsMgr Gateway Server has problems communicating with the Management Server. This might occur when an OpsMgr Agent has been installed on that server. Removing the OpsMgr Agent using the Add/Remove Programs feature will not work here.

This KB article describes this issue and how to solve it: KB2004886.

New KB article: Upgrade to Ops Mgr R2 fails with "An older version of Operations Manager 2007 that Setup cannot upgrade has been found on this computer"

When one tries to upgrade OpsMgr SP1 to OpsMgr R2, this error message is to be found in the MOMx.log: ‘An older version of Operations Manager 2007 that Setup cannot upgrade has been found on this computer. Uninstall the older version and run Setup again.’

This KB article describes this issue and how to solve it: KB2004885.

Saturday, November 14, 2009

First anniversary!

Exactly one year ago I posted the first article on my blog. It was about R2, which was announced at Tech-Ed 2008 in Barcelona. During this year I have written 279 blog postings covering all kinds of topics on OpsMgr. And with the way Microsoft is investing in OpsMgr and the other SC products as well (SCCM V3/vNext, DPM 2010, SCE 2010, SCOM vNext, SCSM, SCSM vNext to name a few…), there won't be a lack of information or experiences from the field to blog about.

Many times I get comments from visitors of this blog, thanking me for handing them a solution for an issue they bumped into. This is very nice and respectful. However, it is my turn now to say thanks to the community, since I see my blog as a two-way street: I learn just as much from it as well.

Without the input from much-respected community members like (in random order) Pete Zerger, Maarten Goet, Alexandre Verkinderen, Anders Bengtsson, Graham Davies, Tim McFadden, Kris Bash and Cameron Fuller, many of whom run very good blogs, it wouldn't have been what it is today. Thank you guys!

And besides the community, there are some people working at Microsoft whom I would like to mention here as well. They have helped me on many occasions and provided me with good information. In random order: Stefan Stranger, Kevin Holman (I almost spammed that guy :) ), Marius Sutara and Eugene Bykov. Thank you as well guys! And keep up the good work, since SCOM and all the other SC products are really great and growing up fast!

I might have forgotten to mention some other names. To them as well: Thank You!

Past, Present and the Future
Before I started my blog I gave it some serious thought. One of the main reasons was that I wanted to start a blog which is alive: at least eight informative postings on a monthly basis. So I realized I had to invest in it and keep on blogging. Not for a month or half a year, but for as long as it takes.

Also I have set myself certain goals. Among others these are:

  1. Since there were already good OpsMgr blogs out there, mine had to be up to specs as well.
  2. Because I don’t like copycats, the blog had to add something new to the community.
  3. The blog had to be dedicated to OpsMgr and not to myself.
  4. The blog must contain articles covering OpsMgr and not just one- or two-liners.
  5. When an article is about troubleshooting a certain issue, the article has to be complete, WITH the solution as well.
  6. It had to grow to a certain level of attention from the OpsMgr Community.
  7. The blog must have a positive character. No negative news or bashing/flaming. When something is not OK, I simply do not blog about it.

When I look at the blog today, I can say that it has reached many of the goals I set out a year ago. I get a lot of positive feedback from the OpsMgr Community and from Microsoft as well. The blog gets many regular visitors from all over the world: on an average weekday it gets 350+ visitors, and this number is still growing. The blog postings are a mix of call-outs, like new/updated MPs/hotfixes being available, postings about how to address certain issues, informative ones about OpsMgr processes, series about certain aspects of OpsMgr and the like. And still there is much I want to blog about. So many ideas, but simply not enough time.

However, with the months going by and Microsoft getting up to steam with all the other System Center products, I foresee a future where my blog will not only cover OpsMgr but also other SC products. The years 2010 and 2011 are going to be very exciting for all SC products. Many positive (BIG) changes to the SC products are going to happen, and with the months going by this blog will reflect many of those changes. How this new format is going to look? To be frank, I do not know. The future, and feedback from the community, will tell.

There is much to do and to learn, thus much to blog about. THANK YOU ALL FOR TAKING THE TIME TO VISIT THIS BLOG AND FOR YOUR COMMENTS. We'll meet again in all the future postings to come. Since I must say, blogging is addictive. Very addictive, I must add. And, last but not least: keep me sharp!

Friday, November 13, 2009

Tech-Ed Berlin 2009: Day 5, Friday 13th of November

Phew. I was invited by a company for dinner. Many OpsMgr MVPs were there too, and it got a bit late. Afterwards we went to Club Felix in Berlin, which made it even later. So I skipped the first session since my 'system' wasn't up to it. :)

Fingerprint or poor password?
This session was all about the new/enhanced security features in Windows Server 2008 R2 and Windows 7. The speaker was Rafal Lukawiecki, and he knows it all inside out. On top of that, he has a very good way of presenting, so it was fun listening to him.

His session was divided into three parts. The first part was 'generic', about the security framework and model. The second part was all about the security related to Windows Server 2008 R2, and the last part was about Windows 7. A good session it was, and very interesting.

Troubleshooting, explained by The King Himself
It was hard to choose between seeing Wally Mead in action about upgrading from SMS 2003 to SCCM (this man knows so much about SCCM; back in Barcelona I attended an interactive session of his, where a guy tried to outsmart him (duh!) and talked about the SDK. Wally said: 'Don't teach me on the SDK. I wrote it myself!'. The guy became very, very silent…) or Mark Russinovich, the man behind SysInternals.

Finally I ended up in Mark's session, which was good as well. I have used Process Monitor, Process Explorer and other SysInternals tools many times in my days as a Systems Engineer. Now I finally saw the man behind it all. The way he uses that tooling is awesome. He demonstrated multiple cases from the real world and the way they went about solving them. Very inspiring it was.

All good things come to an end
After that session it was finished. No more sessions left. Tech-Ed 2009 had come to an end. :( So it was time to take the shuttle to the airport and go back to Holland. As you can see, I wasn't the only one who had brought along his luggage:
[screenshot]

It's really over…
[screenshot]

Ready to go…
[screenshot]

The five days at Tech-Ed were awesome. Attending Tech-Ed conferences is really something I can recommend to everyone working in the IT business. How else do you get the chance to meet the people behind all the software you/your company use on a daily basis? It is also THE chance to meet other IT Pros. Of course one can read a lot and check out the internet, but never will one get that close to the source itself. Also, the advice, insights and tips & tricks one gets here are worth the effort and time. And of course there are tons of other reasons as well to attend Tech-Ed.

Next week I will post an article summarizing this whole awesome week. It won't be about the sessions but about the road ahead, grouped per product.

Thursday, November 12, 2009

Tech-Ed Berlin 2009: Day 4, Thursday 12th of November

I don't know how we did it, but we arrived in time. So no need to hurry. The S-Bahn to Tech-Ed was really packed to the brim. Somehow these trains should have a sign like this: 'This train wagon can hold 200 people OR 600 IT staff attending Tech-Ed'. Seriously, when the train entered the station for Tech-Ed, it was flooded with IT staff from all over the world. Never has the density of laptops and smartphones per person on that station been that high!

Upgrading to SQL Server 2008 Done Right
The first session was about how to do an upgrade to SQL Server 2008 right. The speaker was Dandy Weyn, Technical Development Manager. When the upgrade to SQL Server 2008 isn't prepared properly, one is likely to end up like this:
[screenshot]

Step by step the available upgrade scenarios and their caveats were discussed. Also, some known compatibility issues were brought to our attention. When upgrading, one must not forget to look outside the SQL server itself as well. For instance, cross-database dependencies, linked servers and extended stored procedures might make the upgrade more complex.

Fortunately, Microsoft has delivered good tooling AND documentation to aid in the upgrade process. One of these tools is the SQL 2008 Upgrade Advisor. It contains another tool, the 'Upgrade Advisor Analysis Wizard'. With it one can select the DBs to be upgraded and check whether there are certain issues to be fixed prior to the upgrade. In conjunction with this tool, the SQL 2008 Profiler is run as well. This way much useful information is collected. When the 'Upgrade Advisor Analysis Wizard' is finished, the results are shown and can be examined in every detail:
[screenshot]

In one demo an upgrade from a SQL 2000 DB to SQL 2008 was shown. This upgrade went wrong. With the above-mentioned tooling the causes of this failure were found and resolved. After that the upgrade went fine, and the Server Objects (e.g. linked servers) and Agent Jobs were taken care of as well.

Another demo was about upgrading from SQL 2005 to SQL 2008. The same approach was used here as well. One very important thing to do is to update the statistics of the upgraded DB. Otherwise performance won't be optimal.
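Updating the statistics can be done with sp_updatestats; a minimal sketch, run from PowerShell with the SQL Server 2008 snap-ins loaded (server and database names are just examples):

  # update statistics on the freshly upgraded database
  Invoke-Sqlcmd -ServerInstance 'SQLSRV01' -Database 'UpgradedDB' -Query 'EXEC sp_updatestats'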

Shall I LTI or ZTI?
The second session I attended was all about Windows 7 deployment with MDOP and SCCM. Good demos were given. The session covered many topics, some of them being:

Resolving application compatibility issues
The Application Compatibility Toolkit was discussed and briefly demonstrated. Certainly a tool worthwhile for organizations migrating to Windows 7. A nice touch is the UI of this tool: it has the look & feel of the SC products like SCOM! Other tooling was demonstrated as well, like the Standard User Analyzer and the Microsoft Application Compatibility Database. These tools do not only show what the issues are but also deliver the solutions to them. This is great, since it helps companies in their quest to resolve all compatibility issues with the applications they are using.

Upgrading to Windows 7 and how to go about it?
Besides SCCM there are other tools available as well, like the Microsoft Deployment Toolkit (MDT), aka Light Touch Installation (LTI), since it still needs input from the administrator per system to be upgraded, whereas SCCM offers Zero Touch Installation (ZTI). Here all the needed operations are centralized: not only the upgrade process itself, but also the monitoring, the reporting and the repairs (when needed). No input on a per-client basis is needed anymore.

The earlier mentioned Application Compatibility Toolkit integrates with SCCM by using the 'Application Compatibility Toolkit Connector', so its features are leveraged within SCCM. Also, the code used for MDT is used for SCCM and vice versa. In Reporting a cool report is to be found, the Windows 7 Upgrade Assessment report, which can be run against a collection within SCCM.

When an organization runs SCCM and wants to upgrade to Windows 7, SCCM will be a very good aid in running the upgrade as smoothly as possible.

All you ever wanted to ask about SCOM
In this 'Birds-of-a-Feather' session, Pete Zerger and Rory McCaw answered questions from the audience about all kinds of SCOM issues. Simon Skinner was also present, and between the three of them all questions raised by the audience were answered very well. Again, the knowledge AND experience these people have is awesome!

X-plat and Agents that won’t work
An interactive session where speaker Barry Shilmover answered questions from the audience. He also demonstrated some issues which might occur when discovering and installing the Agent on a UNIX/Linux system.

During this session Barry told us that the Discovery process of UNIX/Linux systems in SCOM R2 is the MOST complex operation, mainly because it is purely data-driven. The MP delivers all the knowledge here; the Discovery process itself has no knowledge at all about the systems it is discovering. So when support for another UNIX/Linux system is added, Microsoft ships an MP and an Agent. There is no need to alter SCOM R2 itself. Also good to know is that by default the Discovery process will scan for 25(!) different systems.

DNS is essential in this process, since a certificate is created on the fly as well, for which the correct FQDN is required. This is the FQDN as the UNIX/Linux system knows it. The number one cause of troublesome Agent installations: the name resolved by the RMS doesn't match the name as the UNIX/Linux system sees it.

Whoever thought to see this on a UNIX/Linux system?
[screenshot]

Since UNIX/Linux systems often use different passwords per system, Microsoft has also revised the security model within SCOM R2, aka Run As Accounts and Run As Profiles. The distribution method (secure/less secure) comes into play here as well. For Windows systems this is not very important, since the accounts (and passwords) reside in AD. The Windows servers being monitored are only sent the related SID of the account, not the account nor the password itself.

For UNIX/Linux systems this is different. Also, the way the Run As Account is targeted in the Run As Profile is very important. Here one can make a really granular selection of which UNIX/Linux computers the credentials are targeted at, thus enabling SCOM R2 to use multiple passwords for different UNIX/Linux systems.

HOT NEWS !!!
Audit Collection Services (ACS) for UNIX/Linux is coming up. Microsoft is still testing it, but it will come out soon, expected this year! In order for this to work, new MPs must be imported, as well as specific ACS Forwarders. Microsoft is also talking with companies like Bridgeways and Novell in order to extend it.

This was a very good session and I learned very much from it.

Even though I will attend more sessions today, I won't find the time to put them into this blog posting. Therefore I am putting this one online already.

Again I must say it has been a very interesting day. Many new things learned and seen. It is amazing to see how much effort Microsoft puts into the development of all its products. Whether it is Windows 7, SQL, SCCM, SCSM, SCOM, X-plat monitoring AND ACS for it as well (!!!!), the drive and the roadmaps are very impressive. Good to be part of it. :)