SCCM – Windows 10 no longer ticked as deployment target OS for packages and TS’s after upgrade to current branch

This is a follow up to this post : https://blog.adexis.com.au/2019/06/14/sccm-powershell-errors-after-moving-primary-site-server-via-site-failover/ 

 

Another issue I ran into during this upgrade, which was far more interesting, was that for all packages and task sequences which had Windows 10 ticked within their supported platforms, the tick was gone after the upgrade.

 

Now, before we go any further, I should state that I am dead against using this functionality in programs and task sequences. I see it create far more hassle than help wherever clients use it; if you want to limit targeting, use a limiting collection or exclude collections on your targets. Additionally, if you are still using packages: get with the times, move to applications!

That being said, this client did have multiple outsourcers in their environment, lots of politics and bafflingly bad decisions (how bad? They have DP's on DC's… yes, absolute insanity). So while I would prefer that setting wasn't used, the guys that utilise SCCM just wanted back what they had, and fair enough too.

Now, back to the issue.

In order to troubleshoot, I ticked the Windows 10 box within a dummy package/program and compared it to one of the broken ones. This was done using:

Get-CMPackage -Id <package ID> -Fast | Get-CMProgram | Select-Object -ExpandProperty SupportedOperatingSystems

The output you get will be something like this:

SmsProviderObjectPath : SMS_OS_Details

MaxVersion            : 10.00.9999.9999

MinVersion            : 10.00.0000.0

Name                  : Win NT

Platform              : I386

 

SmsProviderObjectPath : SMS_OS_Details

MaxVersion            : 10.00.9999.9999

MinVersion            : 10.00.0000.0

Name                  : Win NT

Platform              : x64

 

SmsProviderObjectPath : SMS_OS_Details

MaxVersion            : 6.10.9999.9999

MinVersion            : 6.10.0000.0

Name                  : Win NT

Platform              : x64

 

SmsProviderObjectPath : SMS_OS_Details

MaxVersion            : 6.20.9999.9999

MinVersion            : 6.20.0000.0

Name                  : Win NT

Platform              : x64

 

On the one that was manually ticked, the output was this (cut down to the Windows 10 entry for the sake of post length):

SmsProviderObjectPath : SMS_OS_Details

MaxVersion            : 10.00.99999.9999

MinVersion            : 10.00.0000.0

Name                  : Win NT

Platform              : x64

 

The difference is subtle, but it's there. The broken entry has 4 nines in the middle (10.00.9999.9999); the correct one has 5 (10.00.99999.9999). At some stage during the current branch releases, the MaxVersion for Windows 10 obviously changed, and for reasons I'm still not sure about, the upgrade process did not correctly update the existing entries.

This can also be seen by viewing the SQL table dbo.PkgProgramOS directly. Program names with "*" are for task sequences.
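For example, a quick look at the affected rows (a sketch only; CM_C01 is the database name used in the SQL later in this post, so substitute your own):

SELECT PkgID, ProgramName, OSName, OSPlatform, OSMinVersion, OSMaxVersion
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock)
WHERE OSMaxVersion LIKE '10.00.9999%' --matches both the 4-nines and 5-nines variants
ORDER BY PkgID, ProgramName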

There were a couple of ways to address this.

Initially, I looked at using PowerShell and doing everything through the SMS provider, so that everything would be supported. The downside of this is that there was no way to delete or change the stuffed entry, only to add the correct entry.

Working the script out really sucked: when using Get-CMSupportedPlatform on 1902, there are multiple Windows 10 entries, including "All Windows 10 (64-bit)", which doesn't work when piped into a script. Instead, you have to use "All Windows 10 (64-bit) Client". Sure, it seems logical now, but I and one of my team spent hours on that!
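To list the exact platform names available in your own environment (and avoid the same hours of head-scratching), something along these lines should work from a ConfigMgr PowerShell session. The property names come from the underlying SMS_SupportedPlatforms class, so treat this as a sketch:

# List all Windows 10 platform entries with their exact display names
Get-CMSupportedPlatform -Fast | Where-Object { $_.DisplayText -like '*Windows 10*' } | Select-Object DisplayText, OSName, OSMinVersion, OSMaxVersion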

Once that was sorted, the plan was to use a script such as the below. (This was never completed, tested or implemented, so it's here as a conceptual reference only. It does not work!)

$programs = Get-CMPackage -Fast | Get-CMProgram | Select-Object PackageID, PackageName, PackageVersion, ProgramName -ExpandProperty SupportedOperatingSystems

$win10x64 = Get-CMSupportedPlatform -Fast -Name "All Windows 10 (64-bit) Client"

Write-Host "Programs: " $programs.Count -ForegroundColor Green

foreach ($program in $programs) {
    if ($program.MaxVersion -eq "10.00.9999.9999") {
        Write-Host $program.PackageID $program.PackageName $program.PackageVersion $program.ProgramName $program.MaxVersion $program.Platform
        if ($program.Platform -eq "x64") {
            # Add the corrected Windows 10 x64 platform entry to the program
            Set-CMProgram -PackageName $program.PackageName -StandardProgram -ProgramName $program.ProgramName -AddSupportedOperatingSystemPlatform $win10x64
        }
    }
}

 

Fortunately, this client has an awesome database guy (thanks Alan) who was able to help in a much more sensible way: updating the SQL table directly.

While this is unsupported by MS (use at your own risk, etc.), MS support has basically become useless anyway; it takes 2-3 weeks to get through the first lines of support to someone with a brain. And don't get me wrong, when you get to that person they can be fantastic, but the time frame is just unrealistic.

With that in mind, we made a copy of the dbo.PkgProgramOS table and then ran the following SQL (comments included):

--First step is to back up all the records that could potentially be changed by the script.
--This table, PkgProgramOS_49s_20190603, can then be removed in a few days/weeks depending on the thoroughness of user acceptance testing.
--NB: all the commented-out ".PkgID =" lines were used to test the logic prior to updating the records.
--Count of records found (2 different runs)
--6836, 6777
SELECT *
INTO PkgProgramOS_49s_20190603
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = '00100050' --'C01008EE'
and OSMaxVersion = '10.00.9999.9999'

--Confirmation that the expected number of records are in the backup table. Measure thrice, cut once!
select * from PkgProgramOS_49s_20190603; --6777

--This is the select version of the update, to bring back and confirm all records that will be updated.
--Requirement was to update all 10.00.9999.9999 records to be 10.00.99999.9999 where there was not already a 10.00.99999.9999.
--i.e. update all Package/OS/Program records with the 4 nines to be 5 nines, except where there was already a 5-nines record, to prevent creating a duplicate.
--Join the set of 4-nines records to those with 5 nines and return those without a 5-nines record.
--Count of records found
--6835, 6776
SELECT x.[PkgProgramOS_PK]
,x.[PkgID]
,x.[ProgramName]
,x.[OSName]
,x.[OSPlatform]
,x.[OSMinVersion]
,x.[OSMaxVersion]
,n.PkgProgramOS_PK, n.OSMaxVersion, n.OSPlatform, n.PkgID, n.ProgramName
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) x
LEFT OUTER JOIN ( --Records with 5 nines
SELECT y.[PkgProgramOS_PK]
,y.[PkgID]
,y.[ProgramName]
,y.[OSName]
,y.[OSPlatform]
,y.[OSMinVersion]
,y.[OSMaxVersion]
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = 'C01008EE'
and y.OSMaxVersion = '10.00.99999.9999'
) n on x.PkgID = n.PkgID and x.OSName = n.OSName and x.OSPlatform = n.OSPlatform and x.ProgramName = n.ProgramName
where 1=1
--and x.PkgID = 'C01008EE' --00100050
and x.OSMaxVersion = '10.00.9999.9999' and isnull(n.OSMaxVersion,'') <> '10.00.99999.9999' --Include 4-nines records and exclude where a 5-nines record is found.

--Update version of the above select.
--Always create the select first, then copy, paste and convert it to an update; that way the condition is always in place.
--(This prevents the tragic error of writing "update table set value =", and then a finger slipping and hitting F5 before the where clause is added!)
UPDATE x set [OSMaxVersion] = '10.00.99999.9999'
FROM [CM_C01].[dbo].[PkgProgramOS] x
LEFT OUTER JOIN (
SELECT y.[PkgProgramOS_PK]
,y.[PkgID]
,y.[ProgramName]
,y.[OSName]
,y.[OSPlatform]
,y.[OSMinVersion]
,y.[OSMaxVersion]
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = 'C01008EE'
and y.OSMaxVersion = '10.00.99999.9999'
) n on x.PkgID = n.PkgID and x.OSName = n.OSName and x.OSPlatform = n.OSPlatform and x.ProgramName = n.ProgramName
where 1=1
--and x.PkgID = 'C01008EE' --00100050
and x.OSMaxVersion = '10.00.9999.9999' and isnull(n.OSMaxVersion,'') <> '10.00.99999.9999'
--6776 --Records updated. Ooh look, it matches our select!

--Run the select again to check whether any 4-nines records are left (note: 4 nines this time; it should return nothing).
SELECT *
FROM [CM_C01].[dbo].[PkgProgramOS] (nolock) y
where 1=1
--and y.PkgID = '00100050' --'C01008EE'
and OSMaxVersion = '10.00.9999.9999'
--Pass the system back to the users for testing

 

After this, the programs and task sequences that previously had "All Windows 10 (32-bit)" and/or "All Windows 10 (64-bit)" ticked will have them ticked again.

Clients that have already evaluated these programs will show the following status: Program rejected (Wrong Platform).

 

To get around this, simply update any property of the program, e.g. the estimated run time. This will force clients to re-evaluate the package, and with the fixed supported operating system entries, the program will now run correctly.
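If there are a lot of affected programs, the idea can be scripted. The below is a rough sketch only; it was not part of the original fix, and using -Duration (the run time value) as the property to touch is just one option, so test on a dummy package first:

# Sketch: re-save each affected program so clients re-evaluate it
$pkgIds = @('C01008EE')   # hypothetical list of affected package IDs (e.g. from the SQL above)
foreach ($pkgId in $pkgIds) {
    Get-CMPackage -Id $pkgId -Fast | Get-CMProgram | ForEach-Object {
        # Bump the run time by a minute so the program is genuinely updated
        Set-CMProgram -PackageName $_.PackageName -StandardProgram -ProgramName $_.ProgramName -Duration ($_.Duration + 1)
    }
}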

SCCM powershell errors after moving primary site server via site failover

Recently I completed a somewhat challenging SCCM upgrade, from:

  • SCCM 2012 R2
  • Server 2008
  • SQL 2008 R2
  • 2 SMS providers, 2 MP's, 130-odd DP's, SMP's
  • 33,000-odd clients

 

to:

  • SCCM CB (1902 at time of upgrade)
  • Server 2016 (the latest the outsourcer would support)
  • SQL 2016 (the latest the outsourcer would support)
  • Site server failover

 

I know it's not big by worldwide standards, but for a small town like Adelaide, it's pretty reasonable.

 

All in all, it was a good project. There was too much politics for my liking due to the multiple outsourcing agreements, but it went quite smoothly.

 

There were a few somewhat unique issues I ran into which I think are worthy of blogging. They're a bit obscure, and I'm not sure many other people will hit them, but hey, if this finds and helps a few people, all the better.

So, after completing the move to the new servers, there were still SMS providers and management points on two "old" servers. When decommissioning these, some minor errors started to occur.

When opening PowerShell from the SCCM console after the SMS providers were uninstalled, it still worked, but one of the following errors would appear:

Import-Module : The SMS provider reported an error

or

Import-Module : A drive with the name <sitecode> already exists

I checked a number of things, initially thinking that maybe the SMS providers had not uninstalled correctly (even though everything reported correctly in the logs).

First, the table in the SQL database which holds this data is "smsdata". Viewing this confirmed the provider entries were correct.

Next, WMI. Using WBEMTest, point it at the root\SMS namespace on the site server and failover site server, then enumerate the provider class (SMS_ProviderLocation). The correct SMS providers showed up here too.
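The same check can be done without WBEMTest; a PowerShell equivalent using the standard CIM cmdlets (with a placeholder server name) would look something like this:

# Enumerate the SMS provider locations registered for the site
Get-CimInstance -Namespace 'root\SMS' -ClassName 'SMS_ProviderLocation' -ComputerName '<siteserver>' | Select-Object Machine, SiteCode, ProviderForLocalSite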

Next, I had a look at the registry on a machine with the console installed and noticed HKEY_CURRENT_USER\SOFTWARE\Microsoft\ConfigMgr10\AdminUI\MRU still had the "old" server listed. Once the entry for the "old" server was removed, the errors stopped occurring.

In follow-up to this, there were still complaints about new users going to the "old" site server by default. This can be updated in HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\ConfigMgr10\AdminUI\Connection\Server (on x64 machines… and I'd be surprised if there are admins out there not running x64).

These changes could be performed manually, or via SCCM, but my preference is via group policy preferences: check for the existence of the console, and if it exists, delete the HKCU key and update the HKLM value.
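If you'd rather script it than use GPP, the logic looks roughly like the below. This is a sketch only: the console path and the new server name are assumptions you would need to adjust for your environment:

# Sketch: clear the stale MRU entry and repoint the default console server
$consoleExe = "${env:ProgramFiles(x86)}\Microsoft Configuration Manager\AdminConsole\bin\Microsoft.ConfigurationManagement.exe"   # assumed install path
if (Test-Path $consoleExe) {
    # Per-user MRU list that held the old server name
    Remove-Item 'HKCU:\SOFTWARE\Microsoft\ConfigMgr10\AdminUI\MRU' -Recurse -ErrorAction SilentlyContinue
    # Default server for new users (x64 machines)
    Set-ItemProperty 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\ConfigMgr10\AdminUI\Connection' -Name 'Server' -Value 'newsiteserver.yourdomain.local'
}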

This one was not a big one – powershell still worked after the initial error, the console exhibited no issues – but removing warnings/errors where-ever possible is always good.

 

Microsoft releases security update for new IE zero-day

All I want for Christmas is a new security update to patch a zero-day IE exploit…

Microsoft have today released a new out-of-band update for an Internet Explorer vulnerability that is currently being abused in the wild. Just in time for all those admins planning to have some time off, and well after any planned change lockout windows have come into effect!

According to the security advisory released, the IE zero-day exploit can allow an attacker to execute malicious code on a user's computer. An attacker who successfully exploited the vulnerability could gain the same user rights as the current user, and we're all doing the right thing and NOT granting our users admin rights, aren't we?!

In a nice move by Microsoft, for all those IT admins out there halfway out the door for the holiday period who may not have time to thoroughly test and deploy the latest hotfix and cumulative update, the security advisory for CVE-2018-8653 also contains workarounds for restricting access to the IE scripting engine until system administrators can deploy the official patch.

Workarounds

The workaround provided by Microsoft is to simply disable user access to the affected DLL (jscript.dll), which is not the default JavaScript engine DLL that Internet Explorer uses (jscript9.dll). jscript.dll is only called in a specific manner (in this instance, a malicious one), so the workaround should have minimal impact for general use.

Edit (22/12): Workaround modified slightly by Microsoft (added takeown cmd) and republished, updated below.

Edit (20/12): 15:30 AEDST Microsoft have unpublished the suggested workaround.

Restrict access to jscript.dll. For 32-bit systems, enter the following command at an administrative command prompt:
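rem Reconstructed from the published advisory - verify against CVE-2018-8653 before use
takeown /f %windir%\system32\jscript.dll
cacls %windir%\system32\jscript.dll /E /P everyone:N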

For 64-bit systems, enter the following command at an administrative command prompt:
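rem Reconstructed from the published advisory - verify before use
takeown /f %windir%\syswow64\jscript.dll
cacls %windir%\syswow64\jscript.dll /E /P everyone:N
takeown /f %windir%\system32\jscript.dll
cacls %windir%\system32\jscript.dll /E /P everyone:N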

Impact of the workaround: by default, IE11, IE10 and IE9 use jscript9.dll, which is not impacted by this vulnerability. The vulnerability only affects certain websites that utilise jscript as the scripting engine.

How to undo the workaround. For 32-bit systems, enter the following command at an administrative command prompt:
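rem Reconstructed from the published advisory - verify before use
cacls %windir%\system32\jscript.dll /E /R everyone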


For 64-bit systems, enter the following command at an administrative command prompt:
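rem Reconstructed from the published advisory - verify before use
cacls %windir%\system32\jscript.dll /E /R everyone
cacls %windir%\syswow64\jscript.dll /E /R everyone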

Windows and NTP

It’s important that Windows time is set correctly – but how Windows time works seems to be a poorly understood area.

In this article, I’ll try to clear up the concepts and explain what is, in my opinion, the best way to implement time services throughout your domain(s).

Background

  • Windows, by default, will automatically set its time from the domain controller which holds the FSMO role “PDC emulator”
  • In a multi-domain environment, the PDCe in the forest root domain is the overall master
  • UDP port 123 (NTP) is used for all time synchronisation communications
  • All other DC’s will, by default, look for the PDCe as their time source. There is no need to set anything here unless something has gone wrong.
  • All workstations will, by default, look for the PDCe as their time source. There is no need to set anything here unless something has gone wrong.
  • Windows 2016 time service offers (optionally) more accurate time services than previous versions – https://docs.microsoft.com/en-us/windows-server/networking/windows-time-service/accurate-time

Setting up NTP on the PDCe

I strongly recommend utilising group policy to set up NTP on your PDC emulator, not the command line. Using group policy makes the settings (a) obvious and (b) easily transportable to new DC's as you migrate/upgrade in the future.

  • Create a new GPO, I name mine “Domain Controller – Set NTP on PDCe”
    • Narrow it down to your PDCe by either
      • Removing “authenticated users” and adding your current PDCe (This will need to be manually updated if/when the PDCe role moves)
      • Utilising the WMI query “Select * from Win32_ComputerSystem where DomainRole = 5” (This will auto-update when the PDCe moves)
    • Set the following within the group policy
      • Computer Configuration > Administrative Templates > System > Windows Time Service > Time Providers
      • Enable Windows NTP Client: Enabled
      • Enable Windows NTP Server: Enabled
      • Configure Windows NTP Client: Enabled
        • NtpServer: <YourExternalNTPServer1>,0x1 <YourExternalNTPServer2>,0x1 (for Adelaide-based clients, I used ntp.internode.on.net and ntp.adelaide.edu.au, a local ISP and a local university, but these could be any publicly available NTP servers)
        • Type: NTP
        • CrossSiteSyncFlags: 2
        • ResolvePeerBackoffMinutes: 15
        • ResolvePeerBackoffMaxTimes: 7
        • SpecialPollInterval: 3600
        • EventLogFlags: 0

Commands to check status and troubleshoot

  • w32tm /monitor – this exceedingly useful command will show you the status of all DC’s in the domain, where they are configured to get their time source from and their offset from the authoritative time source
  • If a domain controller is having issues, reset it to use the domain hierarchy and restart the time service:
    • w32tm /config /syncfromflags:domhier /update
    • net stop w32time
    • net start w32time
  • w32tm /query /status
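Once the policy has applied, two more standard w32tm switches are handy on the PDCe to confirm the settings actually landed:

  • w32tm /query /source – shows where the machine is currently getting time from (on the PDCe, this should be one of your external NTP servers)
  • w32tm /query /configuration – dumps the effective configuration, including whether each setting came from local settings or policy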

Using policy to set clients to look at AD for time

This is the default behaviour of Windows, and you should not need to set this; however, in some environments I've found we have had to.

  • Computer Configuration -> Administrative Templates -> System -> Windows Time Service -> Time Providers
    • Configure Windows NTP Client: Enabled
      • NtpServer: <YourDC1>,0x1 <YourDC2>,0x1
      • Type: NT5DS
      • CrossSiteSyncFlags: 2
      • ResolvePeerBackoffMinutes: 15
      • ResolvePeerBackoffMaxTimes: 7
      • SpecialPollInterval: 3600
      • EventLogFlags: 0

References

https://docs.microsoft.com/en-us/windows-server/networking/windows-time-service/how-the-windows-time-service-works

https://docs.microsoft.com/en-us/windows-server/networking/windows-time-service/accurate-time

https://blogs.technet.microsoft.com/nepapfe/2013/03/01/its-simple-time-configuration-in-active-directory/

Configure NTP Time Sync using Group Policy

 

Active Directory 2019 and Exchange 2019 – what’s new

Cross-post with http://www.hayesjupe.com/active-directory-2019-and-exchange-2019-whats-new/

 

The short answer is – not much.

Exchange 2019 was released a few weeks back, but was effectively unusable, as Exchange 2019 requires Windows Server 2019… and Windows Server 2019 was pulled from release (like Windows 10 1809) due to some issues.

Windows Server 2019 was re-released a few days ago, which allowed nerds everywhere (including me) to put Server 2019 and Exchange 2019 into a test environment.

The most striking thing is that everything looks the same. The install process, the GUI, the management: it all looks the same as it did in 2016. To me, this is a good thing. While the Microsoft of the past seemed to believe that moving functions between areas was good, some consistency is nice to have too.

 

Active Directory

First appearances indicate there is nothing new in AD 2019; the installation process and management are exactly the same as 2016.

While installing, there is not even an option to set the forest and domain functional level to “2019” – only 2016.

A quick look at the schema version indicates it has increased, and a quick Google finds this article:

https://blogs.technet.microsoft.com/389thoughts/2018/08/21/whats-new-in-active-directory-2019-nothing/

So, while there is something new in the schema, it's an incredibly small update, and there are no new features or functionality of any type to focus on.
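If you want to check the schema version in your own lab, the standard AD module one-liner below will do it. The mapping of objectVersion 88 to Server 2019 is per the article above, so verify against it:

# Read the objectVersion attribute of the schema naming context (88 = Server 2019)
Get-ADObject (Get-ADRootDSE).schemaNamingContext -Properties objectVersion | Select-Object objectVersion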

 

Exchange 2019

Exchange 2019 is much the same as AD: everything appears to be the same as Exchange 2016, from the install process to the management interface.

A Google search comes up with this:

Should you upgrade to Exchange Server 2019?

So there are some changes and feature updates, but these updates may not have an impact on, or matter to, your organisation.

 

I found these two releases interesting overall, as:

  • AD is the core of many enterprise networks
  • Exchange is a core business application

Seeing a new release of both of these products with very minimal improvements demonstrates, I think, where all of Microsoft's development effort is going (which, to be fair, we already knew).

Cloud should stay up forever, right? Well, no.

Last month there was an outage in the Azure South Central US region which, by reports, seemed to have some knock-on effects for other regions, and which was widely reported at the time.

In the discussions that followed with our customers, particularly those currently considering their digital transformation strategies, including moves to Office 365 and/or Azure, some expressed varying levels of concern. This prompted some very valuable debate within Adexis around what we feel are some important viewpoints when it comes to digital transformation. Here are some of our thoughts:

Outages happen

Even with the enterprise-grade resources of Microsoft (or Amazon), 100% uptime of any service over a long period of time is not realistic. Between hardware issues, software bugs, scheduled downtime and human error, something, at some point, will go wrong, just like in your on-premise environment. With all the buzz around cloud, it can sometimes be easy to forget that this is essentially just an IT environment somewhere else, maintained by someone else. Like any IT environment, it is still reliant on humans and physical hardware, which will inevitably experience failures of service from time to time.

Control and visibility

When an outage happens on-premise, the local IT team are able to remediate and have as much information as it’s possible to have – and can provide their users with detailed information regarding the restoration of service. Everything is in the hands of the local IT team (or the company to which it has been outsourced).
When an outage happens with Azure, the amount of information the local IT team has is minimal in comparison. Microsoft’s communication during O365/Azure outages varies, however, ETA’s and other information is generally vague at best. All control is with Microsoft and all the local IT team can say to staff is “Microsoft are working on it”. While Microsoft may be able to resolve the situation faster than you could on site (or not), the lack of visibility and control can sometimes be daunting. It’s not all doom and gloom though. In situations where the issue would need to be escalated to Microsoft anyway (i.e. premier support), the criticality of an international user-base can often mean a greater focus from Microsoft and inherently a faster resolution than what would be achieved for your single company.

Site resilience

Azure has many features which enable site resilience to protect against a single data centre failure, but sometimes these are not used. This could be down to flawed design of services, or simple cost-saving. When architecting your environment (or engaging the experts at Adexis to provide these specialist services), it's important you carefully consider your DR and BCP plans and ensure you have the redundancy built into your environment that matches those requirements. This is not unique to either cloud or on-premise and must always be carefully considered.

Root cause

It's not uncommon for on-premise service outages to be "fixed" by a reboot. Root cause analysis and effective problem management are things that, while nice, not many IT teams have time to complete.
Microsoft have the resources to perform these functions in great depth, and in fact their brand depends on it. A complete root cause analysis feeds back into improvement of their overall operations, which leads to greater consumer confidence and therefore greater penetration into the market. They also literally have access to the source code for the operating systems and many apps, in addition to strong relationships with hardware vendors, letting them get patches/fixes in time frames the rest of us can only dream of.
While Microsoft has been known to hold their cards close to their chest at times in terms of releasing the real root cause of outages, they are definitely invested in resolving those root causes behind the scenes and preventing further outages. This means that the environment remains far more up to date and, typically, far more robust than an on-premise environment.

SLA

While Microsoft might suffer reputational damage as the result of an outage, do not expect any form of meaningful compensation.
The financially backed SLA that salespeople spruik is a joke: http://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=37
This is the table for many services (though it does vary for specific services):

Monthly Uptime %    Service Credit
< 99.9%             25%
< 99.0%             50%
< 95.0%             100%

A 31-day month has 44,640 minutes, and 2,232 minutes is 5% of that. So the service would have to be down a whopping 37.2 hours to get back 100% of your fees for that month only, and the compensation is in the form of a service credit off next month's bill.
How to claim this service credit is detailed on page 5 of the document, and basically the onus is on you to prove there was an outage and submit the paperwork within 2 months. A separate claim must be created for each service. What this essentially means is that it's usually more effort than it's worth to log the claim for the service credits.

In Summary

Outages for cloud services must be anticipated, just like outages to on-premise services. The attitude of "it's in the cloud, so it's not our problem" is simply not realistic and likely to catch you out unprepared.
If you have vital services that you are considering moving to Azure (or AWS, or anywhere else), rest assured it can be safe to do so, but make sure you allow for site resiliency in your design and costing.

Adexis is neither pro- nor anti-cloud. Unlike many other vendors, we have no skin in the game and no incentive to push you in one direction or the other. We are completely independent and can provide you with unbiased specialist advice on what is best for your environment and your business, including the pros and cons of staying on-premise or moving to the cloud for each service.

Every environment is different when it comes to security requirements, IT skillset, hardware availability, CapEx vs OpEx spend and a range of other factors – and these all feed into what is the best solution for your business.

If you’d like to explore your IT strategy further, please be sure to give us a call.

Deploy Win32 applications with Intune

from http://www.scconfigmgr.com/2018/09/24/deploy-win32-applications-with-microsoft-intune/

WIN32 APPLICATION DEPLOYMENTS

The ability to "package" applications for deployment in Microsoft Intune is something that has been highly requested by many organisations making the move to managing devices through Intune. Although there is a fundamental difference between deploying applications through Configuration Manager and through Intune, Microsoft is developing tools to provide similar functionality across the management stack. Up until now it has been possible to deploy applications through Intune, but this relied on a single MSI installation file with no external dependencies. In some cases this meant that repackaging applications was the only method of deploying business applications, which was a time-consuming process.

It is now possible to deploy applications through Intune without those restrictions; the process creates a packaged container of the setup files, along with command-line install and uninstall commands.

 

This is a significant feature towards bringing Intune from the realms of “good for mobile device management only” to “also good for desktop management”.

SCCM currently does (and probably will for quite a while) have additional functionality which larger enterprises require – however, this is a good step in allowing smaller organisations more flexibility in their deployment options.

 

Note: as of 25/9, this feature is available for Intune tenants running a preview of the GA release.

Execution status received: 24 (Application download failed)

I came across an interesting issue today where I couldn’t get applications to install on a specific piece of hardware during a task sequence. All task sequence steps would run fine, other than ‘Application’ installs – and they would work fine on other hardware.

Looking in the smsts.log file, I could see the following error for each application:
Execution status received: 24 (Application download failed)

I checked the boundaries; everything was good. Google has many instances of the same issue, but none seemed to have relevant (or helpful) solutions. In the end, I realised this device had 4G LTE with a valid SIM in it, and it was connecting automatically during the task sequence. It seems this was confusing things, and the client couldn't locate the content for the applications!

The simplest solution I could find was to disable the NIC during the task sequence, then re-enable it at the end. The following are the PowerShell commands I put in the task sequence to get it working:

To disable: powershell.exe -ExecutionPolicy Bypass -Command "Get-NetAdapter | ?{$_.MediaType -eq 'Wireless WAN'} | Disable-NetAdapter -Confirm:$False"

To enable: powershell.exe -ExecutionPolicy Bypass -Command "Get-NetAdapter | ?{$_.MediaType -eq 'Wireless WAN'} | Enable-NetAdapter -Confirm:$False"

Avoiding a Microsoft Teams Nightmare

Have you ever had the experience of providing users a document management system or SharePoint site, only to find that everyone uses it differently, creates folders all over the place in different ways, and stores documents differently, so that after six months it's so hard to find anything that it defeats the purpose for which it was implemented in the first place? What a nightmare! You're not alone.

With Microsoft Teams quickly becoming a preferred collaboration tool, you'd be forgiven for having fears of this nightmare becoming a reality all over again. The primary reason for that is there's no technical 'silver bullet' to prevent it from happening; it's more of a governance discussion. Notwithstanding, there are some things you can do on a technical level that can help.

There are basically four levels of administration to be considered:

  • Global Settings – There are a number of features and functionality for Teams that can be turned on or off at a global level and these should be risk assessed for each environment. Ideally this should be done before the first Team site is even created.
  • Team creation – Microsoft Teams, while based on Office 365 Groups, will also provision a SharePoint site for each Team. Therefore the decision as to who should be creating Teams is the same as for who should be creating Groups and Sites. One approach that we've found works well is to have these functions centrally managed, with Teams created on request. There is, of course, an admin overhead to be considered. See below.
  • Team Owners – These are the users that really run the individual Teams and will have the best insight as to the value of the Team and how it should be used. Trying to run this centrally is likely to lead to frustration all round so once created, administration should really be handed over to the Team owners. They can then add Team members, assign roles, create Channels and enable Apps etc as they see fit.
  • Team Users – Obvious statement but these are the ones who should be seeing value in Teams collaboration. Paradoxically one way to dilute that is by being in too many Teams. Users shouldn’t be confused about what spaces they should be collaborating in or where to store documents etc. To prevent this, ideally Teams should have clearly defined functions, whether that be organisational, operational or project based collaboration. Confusion arises where these functions overlap between Teams so clear delineation is important. This is another reason centrally managing Team creation can work well. In larger environments implementing practices like naming standards for Teams will also be of value.

Some of the central administration technical considerations are outlined here: https://docs.microsoft.com/en-us/microsoftteams/enable-features-office-365

Melissa Hubbard also provides some useful considerations in her blog post on the topic; while it was written a little while ago now, it's still a great starter for some of the governance considerations: https://melihubb.com/2017/07/25/microsoft-teams-governance-planning-guide

If Microsoft Teams is on your agenda for implementation, be sure to reach out to the Adexis team who can assist with design and implementation and help you to provide this wonderful platform to your users to enable communication and efficient collaboration, without the admin headaches.

Update 1806 for Configuration Manager current branch is now available

ConfigMgr 1806 was released yesterday morning: https://cloudblogs.microsoft.com/enterprisemobility/2018/07/31/update-1806-for-configuration-manager-current-branch-is-now-available/

Couple of nice usability features:

CMPivot – building off the real-time script capability, CMPivot is a new in-console utility that provides access to the real-time state of devices in your environment.

With CMPivot, you can get instant insights into your environment. Need to know your current compliance state in real time? CMPivot can get you those insights in minutes.

Third-party software updates – You can subscribe to partner catalogs in the Configuration Manager console and publish the updates to WSUS using the new third-party software updates feature. You can then deploy these updates using the existing software update management process.

Uninstall application on approval revocation – After enabling the optional feature Approve application requests for users per device, when you deny the request for the application, the client uninstalls the application from the user’s device.

Configure a remote content library for the site server – You can now relocate the content library to another storage location to free up hard drive space on your central administration or primary site servers or to configure site server high availability.

View the currently signed on user for a device – You can see a column for the currently logged on user now displayed by default in the Devices node of the Assets and Compliance workspace.

 

Note: As the update is rolled out globally in the coming weeks, it will be automatically downloaded, and you will be notified when it is ready to install from the “Updates and Servicing” node in your Configuration Manager console. If you can’t wait to try these new features, this PowerShell script can be used to ensure that you are in the first wave of customers getting the update. By running this script, you will see the update available in your console right away.