SCCM – Windows 10 no longer ticked as deployment target OS for packages and TS’s after upgrade to current branch

This is a follow up to this post : https://blog.adexis.com.au/2019/06/14/sccm-powershell-errors-after-moving-primary-site-server-via-site-failover/ 

 

Another issue I ran into during this upgrade – and a far more interesting one – was that every package and task sequence that had Windows 10 ticked under its “supported platforms” lost that tick after the upgrade.

 

Now, before we go any further, I should state that I am dead against using this functionality in programs and task sequences – I see it create far more hassle than help wherever clients use it. If you want to limit targeting, use a limiting collection or exclude collections on your targets. Additionally, if you are still using packages – get with the times, move to applications!

That being said, this client had multiple outsourcers in their environment, lots of politics and bafflingly bad decisions (how bad? they have DPs on DCs… yes… absolute insanity) – so while I would prefer that setting wasn’t used, the guys that use SCCM just wanted back what they had, and fair enough too.

Now, back to the issue.

In order to troubleshoot, I ticked the Windows 10 box on a dummy package/program and compared it to one of the broken ones, using:

Get-CMPackage -Id <package ID> -Fast | Get-CMProgram | Select-Object -ExpandProperty SupportedOperatingSystems

The output you’ll get will be something like this:

SmsProviderObjectPath : SMS_OS_Details
MaxVersion            : 10.00.9999.9999
MinVersion            : 10.00.0000.0
Name                  : Win NT
Platform              : I386

SmsProviderObjectPath : SMS_OS_Details
MaxVersion            : 10.00.9999.9999
MinVersion            : 10.00.0000.0
Name                  : Win NT
Platform              : x64

SmsProviderObjectPath : SMS_OS_Details
MaxVersion            : 6.10.9999.9999
MinVersion            : 6.10.0000.0
Name                  : Win NT
Platform              : x64

SmsProviderObjectPath : SMS_OS_Details
MaxVersion            : 6.20.9999.9999
MinVersion            : 6.20.0000.0
Name                  : Win NT
Platform              : x64

On the one that was manually ticked, the output was this (cut down to the Windows 10 entry for the sake of post length):

SmsProviderObjectPath : SMS_OS_Details
MaxVersion            : 10.00.99999.9999
MinVersion            : 10.00.0000.0
Name                  : Win NT
Platform              : x64

The difference is subtle, but it’s there. The broken entry has four 9s in the middle; the “correct” one has five. At some stage during the current branch releases, the MaxVersion for Windows 10 obviously changed, and for reasons I’m still not sure about, the upgrade process did not correctly update the existing entries.
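To make the comparison concrete, here is a rough sketch (in Python, purely illustrative – the client-side evaluator is obviously not Python) of a field-by-field numeric version check. Every Windows 10 release carries a build number well above 9999 (10240 and up), which is presumably why the MaxVersion ceiling gained a digit; the client OS version string below is a hypothetical 1903 build.

```python
# Illustrative only: field-by-field numeric comparison of SCCM-style
# version strings, showing why the 4-nines ceiling misses Windows 10 builds.
def in_range(version, min_v, max_v):
    as_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return as_tuple(min_v) <= as_tuple(version) <= as_tuple(max_v)

client_os = "10.00.18362.0"  # hypothetical Windows 10 1903 client

print(in_range(client_os, "10.00.0000.0", "10.00.9999.9999"))   # False - broken entry
print(in_range(client_os, "10.00.0000.0", "10.00.99999.9999"))  # True  - corrected entry
```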

This can also be viewed directly in the SQL table dbo.PkgProgramOS. Program names of “*” belong to task sequences.

There were a couple of ways to address this.

Initially, I looked at using PowerShell and doing everything through the SMS provider, so everything would be supported. The downside of this is that there was no way to delete or change the stuffed entry – only to add the correct entry.

Working the script out really sucked: when using Get-CMSupportedPlatform on 1902, there are multiple Windows 10 entries, including “All Windows 10 (64-bit)” – which doesn’t work when piped into a script. Instead, you have to use “All Windows 10 (64-bit) Client”. Sure, it seems logical now… but I, and one of my team, spent hours on that!

Once that was sorted, the plan was to use a script along these lines. (The below was never completed, tested or implemented, so treat it as a conceptual reference only!)

$programs = Get-CMPackage -Fast | Get-CMProgram | Select-Object -Property PackageID, PackageName, PackageVersion, ProgramName -ExpandProperty SupportedOperatingSystems

$win10x64 = Get-CMSupportedPlatform -Fast -Name "All Windows 10 (64-bit) client"

Write-Host "Programs: " $programs.Count -ForegroundColor Green

foreach ($program in $programs) {
    if ($program.MaxVersion -eq "10.00.9999.9999") {
        Write-Host $program.PackageID $program.PackageName $program.PackageVersion $program.ProgramName $program.MaxVersion $program.Platform
        if ($program.Platform -eq "x64") {
            Set-CMProgram -PackageName $program.PackageName -StandardProgram -ProgramName $program.ProgramName -AddSupportedOperatingSystemPlatform $win10x64
        }
    }
}

 

Fortunately, this client has an awesome database guy (thanks Alan) who was able to help in a much more sensible way – updating the SQL table directly.

While this is unsupported by MS (use at your own risk, etc.) – since MS support has basically become completely useless, their products are effectively unsupported anyway. It takes 2-3 weeks to get through the first lines of support to someone with a brain – and don’t get me wrong, when you get to that person, they can be fantastic… but the time frame is just unrealistic.

With that in mind, we made a copy of the dbo.PkgProgramOS table and then ran the following SQL (comments included):

--First step is to back up all the records that could potentially be changed by the script.
--This table, PkgProgramOS_49s_20190603, can then be removed in a few days/weeks depending on the thoroughness of user acceptance testing
--NB - all the commented-out ".PkgID = " lines were used to test the logic prior to updating the records
--Count of records found (2 different runs)
--6836, 6777
SELECT *
INTO PkgProgramOS_49s_20190603
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = '00100050' --'C01008EE'
and OSMaxVersion = '10.00.9999.9999'

--Confirmation that the expected number of records are in the backup table. Measure Thrice, Cut Once!
select * from PkgProgramOS_49s_20190603; --6777

--This is the select version of the update, to bring back and confirm all records that will be updated.
--Requirement was to update all 10.00.9999.9999 records to be 10.00.99999.9999 where there was not already a 10.00.99999.9999.
--i.e. update all Package/OS/Program records with the 4 nines to 5 nines, except where a 5-nines record already exists, to prevent creating a duplicate.
--Join the set of 4-nines records with those with 5 nines and return those without the 5-nines record.
--Count of records found
--6835, 6776
SELECT x.[PkgProgramOS_PK]
,x.[PkgID]
,x.[ProgramName]
,x.[OSName]
,x.[OSPlatform]
,x.[OSMinVersion]
,x.[OSMaxVersion]
,n.PkgProgramOS_PK, n.OSMaxVersion, n.OSPlatform, n.PkgID, n.ProgramName
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) x
LEFT OUTER JOIN ( --Records with 5 nines
SELECT y.[PkgProgramOS_PK]
,y.[PkgID]
,y.[ProgramName]
,y.[OSName]
,y.[OSPlatform]
,y.[OSMinVersion]
,y.[OSMaxVersion]
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = 'C01008EE'
and y.OSMaxVersion = '10.00.99999.9999'
) n on x.PkgID = n.PkgID and x.OSName = n.OSName and x.OSPlatform = n.OSPlatform and x.ProgramName = n.ProgramName
where 1=1
--and x.PkgID = 'C01008EE' --00100050
and x.OSMaxVersion = '10.00.9999.9999' and isnull(n.OSMaxVersion,'') <> '10.00.99999.9999' --Include 4 nines and exclude where a 5-nines record is found.

--Update version of the above select.
--Always create the select first, then copy, paste and convert it to an update; that way the condition is always in place.
--(This prevents the tragic error of writing UPDATE table SET value =, then a finger slipping and hitting F5 before the where clause is added!)
UPDATE x set [OSMaxVersion] = '10.00.99999.9999'
FROM [CM_C01].[dbo].[PkgProgramOS] x
LEFT OUTER JOIN (
SELECT y.[PkgProgramOS_PK]
,y.[PkgID]
,y.[ProgramName]
,y.[OSName]
,y.[OSPlatform]
,y.[OSMinVersion]
,y.[OSMaxVersion]
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = 'C01008EE'
and y.OSMaxVersion = '10.00.99999.9999'
) n on x.PkgID = n.PkgID and x.OSName = n.OSName and x.OSPlatform = n.OSPlatform and x.ProgramName = n.ProgramName
where 1=1
--and x.PkgID = 'C01008EE' --00100050
and x.OSMaxVersion = '10.00.9999.9999' and isnull(n.OSMaxVersion,'') <> '10.00.99999.9999'
--6776 records updated. Ooh look, it matches our select!

--Run the select to find whether any 4-nines records are left.
SELECT *
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = '00100050' --'C01008EE'
and OSMaxVersion = '10.00.9999.9999'
--Pass the system back to the users for testing
–Pass the system back to the users for testing

 

After this, the programs and task sequences that previously had “All Windows 10 (32-bit)” (and 64-bit) ticked will have it ticked again.

Clients that have already evaluated these programs will show the status “Program rejected (wrong platform)”.

 

To get around this, simply update any property of the program, e.g. the estimated run time. This forces clients to re-evaluate the package, and with the fixed SupportedOperatingSystems entries, it will now run correctly.

SCCM PowerShell errors after moving primary site server via site failover

Recently I completed a somewhat challenging SCCM upgrade from:

SCCM 2012 R2

Server 2008

SQL 2008 R2

2 SMS providers, 2 MPs, 130-odd DPs, SMPs

33,000-odd clients

 

to:

SCCM CB (1902 at time of upgrade)

Server 2016 (latest the outsourcer would support)

SQL 2016 (latest the outsourcer would support)

Site server failover

 

I know it’s not big by worldwide standards – but for a small town like Adelaide, it’s pretty reasonable.

 

All in all it was a good project… too much politics for my liking due to multiple outsourcing agreements – but it went quite smoothly.

 

There were a few somewhat unique issues I ran into which I think are worthy of blogging. They’re a bit obscure, and I’m not sure many other people will hit them, but hey – if this finds and helps a few people, all the better.

So – after completing the move to the new servers, there were still SMS providers and management points on two “old” servers. When decommissioning these, some minor errors started to occur.

When opening powershell from the SCCM console after the SMS providers were uninstalled, it still worked, but one of the following errors would appear:

Import-Module : The SMS provider reported an error

or

Import-Module : A drive with the name <sitecode> already exists

I checked a number of things, initially thinking that maybe the SMS providers had not uninstalled correctly (even though everything reported success in the logs).

First – the table in the SQL database which holds this data is “smsdata” – viewing this confirmed the provider entries were correct.

Next – WMI. Using wbemtest, point it at the root\SMS namespace on the site server and failover site server, then enumerate the class “SMSProvider” – the correct SMS providers showed up here too.

Next – I had a look at the registry on a machine with the console installed and noticed HKEY_CURRENT_USER\SOFTWARE\Microsoft\ConfigMgr10\AdminUI\MRU still had the “old” server listed. Once the entry for the “old” server was removed, the errors stopped occurring.

Following up on this, there were still complaints about new users connecting to the “old” site server by default. This can be updated in HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\ConfigMgr10\AdminUI\Connection\Server (on x64 machines… and I would be surprised if there are admins out there not running x64).

These changes could be performed manually or via SCCM, but my preference is group policy preferences: check for the existence of the console; if it exists, delete the HKCU entry and update the HKLM value.

This one was not a big deal – PowerShell still worked after the initial error and the console exhibited no issues – but removing warnings/errors wherever possible is always good.

 

Windows and NTP

It’s important that Windows time is set correctly – but how Windows time works seems to be a poorly understood area.

In this article, I’ll try to clear up the concepts and explain what is, in my opinion, the best way to implement time services throughout your domain(s).

Background

  • Windows, by default, will automatically set its time from the domain controller which holds the FSMO role “PDC emulator” (PDCe)
  • In a multi-domain environment, the PDCe in the forest root domain is the overall master
  • UDP port 123 (NTP) is used for all time-sync communications
  • All other DCs will, by default, follow the domain hierarchy up to the PDCe as their time source. There is no need to set anything here unless something has gone wrong.
  • All domain-joined workstations will, by default, follow the domain hierarchy (normally a DC in their domain) for time. There is no need to set anything here unless something has gone wrong.
  • The Windows Server 2016 time service offers (optionally) more accurate time services than previous versions – https://docs.microsoft.com/en-us/windows-server/networking/windows-time-service/accurate-time

Setting up NTP on the PDCe

I strongly recommend utilising group policy to set up NTP on your PDC emulator, not the command line. Using a group policy makes the settings (a) obvious and (b) easily transportable to new DCs as you migrate/upgrade in the future.

  • Create a new GPO, I name mine “Domain Controller – Set NTP on PDCe”
    • Narrow it down to your PDCe by either
      • Removing “authenticated users” and adding your current PDCe (This will need to be manually updated if/when the PDCe role moves)
      • Utilising the WMI query “Select * from Win32_ComputerSystem where DomainRole = 5” (This will auto-update when the PDCe moves)
    • Set the following within the group policy
      • Computer Configuration > Administrative Templates > System > Windows Time Service > Time Providers
      • Enable Windows NTP Client: Enabled
      • Enable Windows NTP Server: Enabled
      • Configure Windows NTP Client: Enabled
        • NtpServer: <YourExternalNTPServer1>,0x1 <YourExternalNTPServer2>,0x1 (for Adelaide-based clients, I used ntp.internode.on.net and ntp.adelaide.edu.au – a local ISP and a local university – but these could be any publicly available NTP server)
        • Type: NTP
        • CrossSiteSyncFlags: 2
        • ResolvePeerBackoffMinutes: 15
        • ResolvePeerBackoffMaxTimes: 7
        • SpecialPollInterval: 3600
        • EventLogFlags: 0
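For reference, the “,0x1” suffix on each NtpServer entry is a flags field: 0x1 (SpecialInterval) tells w32time to poll at the SpecialPollInterval value set above, rather than the dynamic interval. A quick illustrative parse of the value’s format (the parsing code is my own sketch, not anything Windows ships):

```python
# Parse an NtpServer value of the form "<host>,<hexflags> <host>,<hexflags>".
# Flag 0x1 = SpecialInterval (poll at SpecialPollInterval seconds).
ntp_server = "ntp.internode.on.net,0x1 ntp.adelaide.edu.au,0x1"

peers = []
for entry in ntp_server.split():
    host, _, flags = entry.partition(",")
    peers.append((host, int(flags, 16)))

print(peers)  # [('ntp.internode.on.net', 1), ('ntp.adelaide.edu.au', 1)]
```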

Commands to check status and troubleshoot

  • w32tm /monitor – this exceedingly useful command will show you the status of all DCs in the domain, where they are configured to get their time from, and their offset from the authoritative time source
  • If a domain controller is having issues, point it back at the domain hierarchy and restart the time service:
    • w32tm /config /syncfromflags:domhier /update
    • net stop w32time
    • net start w32time
  • w32tm /query /status

Using policy to set clients to look at AD for time

This is the default behaviour of Windows, and you should not need to set it; however, in some places I’ve found we have had to.

  • Computer Configuration -> Administrative Templates -> System -> Windows Time Service -> Time Providers
    • Configure Windows NTP Client: Enabled
      • NtpServer: <YourDC1>,0x1 <YourDC2>,0x1
      • Type: NT5DS
      • CrossSiteSyncFlags: 2
      • ResolvePeerBackoffMinutes: 15
      • ResolvePeerBackoffMaxTimes: 7
      • SpecialPollInterval: 3600
      • EventLogFlags: 0

References

https://docs.microsoft.com/en-us/windows-server/networking/windows-time-service/how-the-windows-time-service-works

https://docs.microsoft.com/en-us/windows-server/networking/windows-time-service/accurate-time

https://blogs.technet.microsoft.com/nepapfe/2013/03/01/its-simple-time-configuration-in-active-directory/

Configure NTP Time Sync using Group Policy

 

Active Directory 2019 and Exchange 2019 – what’s new

Cross-post with http://www.hayesjupe.com/active-directory-2019-and-exchange-2019-whats-new/

 

The short answer is – not much.

Exchange 2019 was released a few weeks back, but was effectively unusable, as Exchange 2019 requires Windows Server 2019… and Windows Server 2019 got pulled from release (like Windows 10 1809) due to some issues.

Windows Server 2019 was re-released a few days ago, which allowed nerds everywhere (including me) to put Server 2019 and Exchange 2019 into a test environment.

The most striking thing, immediately noticeable, is that everything looks the same… the install process, the GUI, the management all look the same as they did in 2016. To me, this is a good thing – while Microsoft of the past seemed to believe that moving functions between areas was good, some consistency is nice to have too.

 

Active Directory

First appearances indicate there is nothing new in AD 2019, the installation process and management is exactly the same as 2016.

While installing, there is not even an option to set the forest and domain functional level to “2019” – only 2016.

A quick look at the schema version indicates it has increased, and a quick google finds this article:

https://blogs.technet.microsoft.com/389thoughts/2018/08/21/whats-new-in-active-directory-2019-nothing/

So, while there is something new in the schema, it’s an incredibly small update… and there are no new features or functionality of any type to focus on.

 

Exchange 2019

Exchange 2019 is much the same as AD: everything appears to be the same as Exchange 2016, from the install process to the management interface.

A google comes up with this

Should you upgrade to Exchange Server 2019?

So there are some changes and feature updates – but these updates may not have an impact on, or matter to, your organization.

 

I found these two releases interesting overall as

  • AD is the core of many enterprise networks
  • Exchange is a core business application

To see a new release of both of these products with very minimal improvements demonstrates, I think, where all Microsoft’s development effort is going (which, to be fair, we already knew).

Deploy Win32 applications with Intune

from http://www.scconfigmgr.com/2018/09/24/deploy-win32-applications-with-microsoft-intune/

WIN32 APPLICATION DEPLOYMENTS

The ability to “package” applications for deployment in Microsoft Intune is something that has been highly requested by many organisations making the move to managing devices through Intune. Although there is a fundamental difference between deploying applications through Configuration Manager and through Intune, Microsoft is developing tools to provide similar functionality across the management stack. Until now, while it has been possible to deploy applications through Intune, this relied on a single MSI installation file with no external dependencies. In some cases this meant that repackaging applications was the only method of deploying those business applications – a time-consuming process.

Today it is possible to deploy applications through Intune without those restrictions; the process creates a packaged container of the setup files along with command-line install and uninstall commands.

 

This is a significant feature towards bringing Intune from the realms of “good for mobile device management only” to “also good for desktop management”.

SCCM currently does (and probably will for quite a while) have additional functionality which larger enterprises require – however, this is a good step in allowing smaller organisations more flexibility in their deployment options.

 

Note: as of 25/9, this feature is available for Intune tenants running a preview of the GA release.

Creating large stand-alone media from SCCM – issues when the HP SPP 2018.03 is a package in the TS

We often create one task sequence for all workstation builds and another for all server builds, utilising task sequence variables to perform the decision making within the task sequence.

One of the downsides of this is that the task sequence binaries can get quite large, especially for server builds where we have current and legacy versions of the HP SPP, the Dell SUU and (at a minimum) server 2012 R2 and server 2016.

This isn’t an issue for network-based builds, as non-required content is simply skipped; however, for media builds it can lead to 40GB+ requirements, which the SCCM console doesn’t handle well.

This is where Rufus comes in.

Rufus can help out by allowing the use of larger hard drives (just tick the “list USB hard drives” option) and can apply bootable ISOs generated by SCCM (utilising the “unlimited size” option) to a USB hard drive.

This has been incredibly useful for us in the past, utilising large hard drives as USMT stores at slow link sites for client deployment.

In this instance, I’ve been using Rufus to apply a 50GB server build ISO to a hard drive, but kept getting presented with a warning:

“This ISO image seems to use an obsolete version of ‘vesamenu.c32’. Boot menus may not display properly because of this.”

Regardless of how you proceed (whether or not you allow Rufus to download an update), the drive is not bootable.

Upon investigating the Rufus logs and the resultant media, I found that syslinux.cfg was actually pointing at my HP SPP package.

This forum post then confirmed that Rufus was finding a syslinux.cfg, assuming the ISO was “real” bootable media – hence the ‘vesamenu.c32’ prompt.

After a few hours of troubleshooting and trying to get around it, I simply removed the “usb” and “system” folders from my HP SPP packages (as we won’t ever be booting to it – it’s only for use in SCCM), re-created my stand-alone media ISO, then used Rufus to write the bootable ISO to the USB HDD, this time with no issues.

I realise this is a fairly obscure issue, but hopefully it helps someone.

 

 

Speed up offline servicing

Currently I am creating some server builds for a client which will be deploying large numbers of servers over the coming months.

One of the things that is/was taking up a great deal of time was offline servicing of the base OS, primarily because the SCCM server currently sits on a virtual environment with disk that is struggling. With 2016 this isn’t so bad: thanks to cumulative updates, there are only a few updates to install. With 2012 R2, however, there are a large number of updates – and the process continually fails due to the poor performance of the server.

One of the things you can do to speed this process up is to remove unused images from your WIM.

Both Server 2012 R2 and 2016 come with four images (indexes 1 to 4) within install.wim. These generally correlate with:

  • Index1 – Server 2012R2/2016 standard core
  • Index2 – Server 2012R2/2016 standard desktop experience
  • Index3 – Server 2012R2/2016 datacentre core
  • Index4 – Server 2012R2/2016 datacentre desktop experience

If you view Logs\OfflineServicingMgr.log during an offline servicing operation, you will notice lines such as:

Applying update with ID xxxxxx on image at index 1

Then the same update will apply to images 2, 3 and 4. In this environment, we are not deploying Server Core, so we only need indexes 2 and 4 (Standard and Datacentre with the desktop experience).

We can view the indexes available within the wim by typing:

dism /get-imageinfo /imagefile:E:\<path to wim>\Install.wim

Then, if you don’t need indexes 1 and 3 (as we don’t in this scenario):

dism /delete-image /imagefile:E:\<path to wim>\Install.wim /index:1
dism /delete-image /imagefile:E:\<path to wim>\Install.wim /index:3

Now when you use offline servicing, each update will only be compared against 2 images, instead of 4, significantly reducing the processing time/disk usage, especially for 2012 R2 (where there are a large number of updates to apply)

This can also be used for client OS’s, such as Windows 10.

One important note – this will not reduce the size of the WIM. It will simply remove the index and save you time for offline servicing.

If your image is already in SCCM, then you must:

  1. Go to Software Library | Operating systems | Operating system images
  2. Right click on the appropriate image | properties | Images tab
  3. Click on “Reload”, notice the dropdown has been reduced from four indexes, then hit “OK” to exit.
  4. Go into your task sequence
  5. Update the image index as required.

Importing updates into WSUS on Server 2016 fails

I ran into a situation recently where I needed to import a specific update from the Microsoft Update Catalog into WSUS (and in turn into SCCM).

I opened WSUS, clicked on “Import updates”, selected my update and was presented with:

“This update cannot be imported into Windows Server Update Services, because it is not compatible with your version of WSUS”

Strange… WSUS on 2016 is extremely similar to WSUS on 2012 R2… so what’s going on here?

Long story short, there seems to be an issue with the URL the WSUS console passes to the browser when you click “Import updates”.

When you first click on “Import updates”, IE will open (or you will use IE, because it makes importing updates into WSUS easier) to:

http://catalog.update.microsoft.com/v7/site/Home.aspx?SKU=WSUS&Version=10.0.14393.2248&ServerName=<servername>&PortNumber=8530&Ssl=False&Protocol=1.20

Simply change the last part, “1.20”, to “1.80” – and importing updates will now work.

i.e

http://catalog.update.microsoft.com/v7/site/Home.aspx?SKU=WSUS&Version=10.0.14393.2248&ServerName=<servername>&PortNumber=8530&Ssl=False&Protocol=1.80
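If you want to script the fix (say, generating corrected URLs for several servers), it boils down to rewriting one query parameter. A small sketch – the server name “wsus01” is a hypothetical placeholder, and `fix_import_url` is my own helper, not anything WSUS provides:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

# Hypothetical helper: bump the Protocol query parameter that the WSUS
# console passes to the Update Catalog site from 1.20 to 1.80.
def fix_import_url(url: str) -> str:
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["Protocol"] = "1.80"
    return urlunsplit(parts._replace(query=urlencode(query)))

broken = ("http://catalog.update.microsoft.com/v7/site/Home.aspx"
          "?SKU=WSUS&Version=10.0.14393.2248&ServerName=wsus01"
          "&PortNumber=8530&Ssl=False&Protocol=1.20")

print(fix_import_url(broken))
```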

The importance of cleaning up WSUS – even if you are only using it for SCCM

In the distant past, we would generally say to clients “Leave WSUS admin alone – the SCCM settings will take precedence, so there is no point in using it”

As the years have passed, and the number of available updates has grown considerably, this is no longer the case. The SCCM settings still take precedence, but the sheer number of updates has gotten so large that it can cause performance issues for the SCCM server – and even cause the IIS timeout to expire while SCCM is syncing updates. This generally results in an endless loop and 100% CPU for w3wp.exe.

Unfortunately, trying to list the updates in the WSUS console will often lead to the console crashing and the dreaded prompt to “reset server node”.

The best way to address this isn’t in one of the many articles you will find by googling “sccm high CPU w3wp.exe” (or similar). Generally these suggest modifying a number of entries in your IIS config to increase timeouts, etc. These can assist, but they don’t address the root cause of the issue – which is simply the huge number of updates.

The best way to resolve this is to simply reduce the number of updates shown in WSUS. This will reduce your memory usage, reduce the number of updates that SCCM has to scan each time, and generally put less load on your server.

There are two ways you can go about this:

 

The manual method

If you have the resources available, I’ve found that temporarily increasing the RAM and CPU count on the SCCM server can help alleviate the “reset server node” issue.

Once you get in (it may take a few attempts), go to Updates > All Updates, set the search criteria to “Any except declined” and “Any” and hit Refresh. Once loaded, add the “Supersedence” column to the view and sort by it.

Decline all updates that are superseded. If you don’t clean up regularly, this number could be very high.

After this, you can create views to decline updates for products you no longer use (e.g. Windows 7 and Office 2010), or search for titles including “beta”, “preview” and “itanium” and decline those updates as well.
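The decline logic above boils down to “superseded, or title contains a noise word”. As a sketch of that filter (the update list here is made-up sample data, not a real WSUS API):

```python
# Hypothetical sample data illustrating the decline criteria described above.
updates = [
    {"title": "2019-05 Cumulative Update for Windows 10", "superseded": False, "declined": False},
    {"title": "2019-04 Cumulative Update for Windows 10", "superseded": True,  "declined": False},
    {"title": "Update for Windows Server 2008 Itanium",   "superseded": False, "declined": False},
    {"title": "Windows 10 Insider Preview",               "superseded": False, "declined": False},
]

noise_words = ("beta", "preview", "itanium")

for update in updates:
    # Decline anything superseded, or whose title matches a noise word.
    if update["superseded"] or any(w in update["title"].lower() for w in noise_words):
        update["declined"] = True

print([u["title"] for u in updates if u["declined"]])
```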

After all that is done, run the server cleanup wizard. You will likely need to run it a number of times: if your server is struggling already, the wizard will also struggle to complete (and it seems quite poorly coded to handle large numbers of updates on low-end servers).

 

The scripted method

A guy called “AdamJ” has written a very useful script, which you can get at https://community.spiceworks.com/scripts/show/2998-wsus-automated-maintenance-formerly-adamj-clean-wsus . I know, I can see some of you recoiling at the suggestion of using a user-submitted Spiceworks script… they do have a whole bunch of people (just like the MS forums) suggesting “sfc /scannow” for anything and everything – which is the sign of a non-enterprise tech that has NFI… however, this script really is very good, and something I’ve been using for approximately two years, with nothing but good things to say about it.

You can run it with the “-firstrun” parameter and it will, by default, clean out superseded updates – the main cause of huge update numbers – but it will also grab the ever-annoying Itanium, preview and expired updates. At approximately line 629 of the script, you can also configure it to remove IE 7/8/9/10 updates, beta updates etc. (or, if you are one of the few people in the world with Itanium, keep the Itanium updates!).

This script, unlike the console, will keep plugging away… and if it should happen to get stopped for whatever reason, will resume where it left off.

When removing obsolete updates, I have seen some clients (with lower spec servers) where this process can take a long time, so long that you may have to leave it overnight (or over the weekend), and sometimes restart the process.

This process will get you a fair chunk of the way, and allow you to then open the WSUS console and decline further updates, such as products you no longer use (Windows 7 and Office 2010 are reasonably common ones these days), x86 updates if you have an all-x64 environment, and, in the case of Win 10, updates that no longer apply to you (e.g. if your entire fleet is on Win 10 1607 and 1703, you don’t need the 1511 updates).

After all this is complete, you do need to run the server cleanup wizard again – which does frequently crash with the “reset server node” error. So you can re-run the WSUS cleanup script, or simply run the server cleanup wizard multiple times.

 

My experiences using these methods

I’ve found that environments previously at 100% CPU start working again, and massively over-specced environments that didn’t have the high-CPU issue went from using 6GB of RAM for the w3wp.exe process down to 500MB. This will obviously vary from environment to environment.

After this process is completed, you should be able to get into the WSUS console and run the server cleanup wizard without crashes.

If you’re interested, you can also sync SCCM software updates and look at the wsyncmgr.log and see the far smaller list of updates it will sync against now.

Longer term, the AdamJ script does have monthly options that you can schedule in; or, for clients uncomfortable with that, simply get in and clean up once every three months or so, so your list of updates doesn’t get out of hand.

The first cleanup is the biggest. After that, performing the same operations once every three months is plenty, and if you forget and it ends up being every six months instead, you’ll still be fine.

 

Taking it a step further – shrinking the SUSDB

One of the things the AdamJ cleanup script does is truncate the SQL table tbEventInstance, which uses up the majority of space in most WSUS databases that have been in use for a while.

If you are not comfortable with a script doing this, you can connect to the instance and execute the following query against the SUSDB database: truncate table tbEventInstance

If the DB is on a full version of SQL (and if you’re running SCCM, I would argue the SUSDB should be on the same SQL instance rather than installing an additional Windows Internal Database), you can then create a maintenance plan to reindex, shrink, etc. the database.

If you are using Windows Internal Database, you can still install SQL Management Studio and connect to “\\.\pipe\MICROSOFT##WID\tsql\query”; from there you can execute the truncate, shrink the database, etc. Keep in mind that you cannot use maintenance plans with Windows Internal Database.

 

What about large environments where you do require a wide range of updates?

In large environments, you may not be able to decline entire product sets for extended periods (e.g. it’s relatively easy to move everyone onto Windows 10 (and get rid of all Win 7) for 2,000 PCs, but not so easy for 50,000 PCs); however, many of the points in this article still hold true.

  • The largest reduction in updates will still come from superseded updates
  • Language packs are another area with lots of opportunity for reduction (e.g. if you require English and French, there are many other languages that can be declined)
  • Ensure your SUSDB is on your full SQL instance… that way you are running one less database instance (and therefore using fewer resources) and also have maintenance plans at your disposal
  • Use a maintenance plan to keep your SUSDB database optimal

 

SCCM Update Cleanup

It’s also worth noting that once the SUSDB has been cleaned up, SCCM will execute its own cleanup after the next sync. This cleanup removes the obsolete update CIs (Configuration Items) that corresponded to the items removed from the SUSDB. In most environments this isn’t usually noticeable; however, on severely under-resourced SCCM servers it can cause its own set of problems (though there’s not a huge amount you can do about it other than wait). This will generally present as the SCCM console locking up while back-end SQL processes run – and if you look at the SQL threads, you’ll see a WSUS-related one blocking all other threads. Realistically, your best option to resolve this is to increase the resources available to the server – and if that isn’t a possibility, settle in for a long wait!