Storing credentials for PowerShell scripts – Encrypted Strings or Credential Manager

If you’ve been reading through some of our other blog posts, you’ll have noticed that we generally like to automate things using PowerShell – which quite often requires storing credentials somehow, preferably not in plain text.

In our existing scripts on this blog, the process has generally involved generating an encryption key, using that key to convert a password to an encrypted string, then storing both in individual files. While that method works fine and has some nice benefits, there’s another alternative that’s worth knowing about: storing credentials in Windows Credential Manager.

I’ll put the code for both methods further down, but first I’d like to list the main differences between the two methods:

Encrypted string in file

  • Portable between users and computers
  • Can be used in WinPE or wherever Powershell is available
  • Only as secure as the NTFS permissions on the files storing the key and encrypted password (someone with read access can use PowerShell to reveal the password if they know what the files are for)
  • Utilises a bit of ‘security by obscurity’ – depending on how you name the files, no one is going to know what they’re for, and generic malware/viruses aren’t going to realise the files are encrypted credentials, even though they could easily be reverse-engineered by anyone who does know
  • When used with Task Scheduler, the scheduled task can be run as any user/local system

Windows Credential Manager

  • Only portable between computers if using roaming profiles; isn’t portable between users
  • Can only be used in a full-windows environment that has Windows Credential Manager (ie: not in WinPE)
  • Is as secure as the Windows account attached to the credential store (assuming a complex password, pretty secure)
  • If the Windows account becomes compromised somehow (you leave the PC unlocked, malware/virus, etc), it’s a relatively simple process to pull any account details you’ve stored in Windows Credential Manager via Powershell.
  • When used with Task Scheduler, the scheduled task has to be running as the Windows account attached to the credentials store

So, how do you do each of these?
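Below is a sketch of both methods. The file paths, credential target name and account names are examples only; the Credential Manager half assumes the third-party CredentialManager module from the PowerShell Gallery, and the byte-array file handling uses Windows PowerShell 5.1 syntax (-Encoding Byte).

```powershell
# --- Method 1: encrypted string in files ---
# One-off: generate a random AES key and save the password encrypted with it.
$Key = New-Object byte[] 32
[Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($Key)
$Key | Set-Content -Path C:\Scripts\aes.key -Encoding Byte      # example path
Read-Host "Password" -AsSecureString |
    ConvertFrom-SecureString -Key $Key |
    Set-Content -Path C:\Scripts\pass.txt                       # example path

# In the script: rebuild a PSCredential from the two files.
$Key  = Get-Content C:\Scripts\aes.key -Encoding Byte
$Pass = Get-Content C:\Scripts\pass.txt | ConvertTo-SecureString -Key $Key
$Cred = New-Object System.Management.Automation.PSCredential ('DOMAIN\svc-account', $Pass)

# --- Method 2: Windows Credential Manager ---
# Assumes the third-party CredentialManager module from the PowerShell Gallery.
Install-Module CredentialManager -Scope CurrentUser
New-StoredCredential -Target 'MyScript' -UserName 'DOMAIN\svc-account' `
    -Password 'P@ssw0rd!' -Persist LocalMachine | Out-Null

# In the script:
$Cred = Get-StoredCredential -Target 'MyScript'
```

For scheduled tasks, remember the key difference listed above: the encrypted-string files can be read by any account with NTFS access to them, whereas Get-StoredCredential only works when the task runs as the same Windows account that stored the credential.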

 

Creating large stand-alone media from SCCM – issues when the HP SPP 2018.03 is a package in the TS

We often create one task sequence for all workstation builds and another for all server builds, utilising task sequence variables to perform the decision making within the task sequence.

One of the downsides of this is that the task sequence binaries can get quite large, especially for server builds where we have current and legacy versions of the HP SPP, the Dell SUU and (at a minimum) server 2012 R2 and server 2016.

This isn’t an issue for network based builds, as non-required content is simply skipped, however, for media builds, it can lead to 40GB+ requirements, which the SCCM console doesn’t handle well.

This is where Rufus comes in.

Rufus can help out by allowing the use of larger hard drives (just tick the “list USB hard drives” option) and can apply bootable ISOs generated by SCCM (utilising the “unlimited size” option) to a USB hard drive.

This has been incredibly useful for us in the past, utilising large hard drives as USMT stores at slow link sites for client deployment.

In this instance, I’ve been using Rufus to apply a 50GB server build ISO to a hard drive, but kept getting presented with a warning:

“This ISO image seems to use an obsolete version of ‘vesamenu.c32’. Boot menus may not display properly because of this.”

Regardless of how you proceed (allow Rufus to download an update or not), the drive is not bootable.

Upon investigation of the Rufus logs and the resultant media, I found that syslinux.cfg was actually pointing to my HP SPP package.

This forum post then confirmed what was happening: Rufus finds a syslinux.cfg, assumes the media is “real” bootable Linux media, and hence displays the ‘vesamenu.c32’ prompt.

After a few hours of troubleshooting and trying to get around it, I simply removed the “usb” and “system” folders from my HP SPP packages (as we won’t ever be booting to it – it’s only for use within SCCM), re-created my stand-alone media ISO, then used Rufus to write the bootable ISO to the USB HDD, this time with no issues.
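For reference, removing the offending folders from the package source is a one-liner (the source path here is purely illustrative – point it at your own SPP package source, then update the package on your distribution points):

```powershell
# Remove the bootable-media folders from the HP SPP package source.
$sppSource = '\\sccm01\Sources$\HP\SPP2018.03'   # example path only
Remove-Item -Path "$sppSource\usb", "$sppSource\system" -Recurse -Force
```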

I realise this is a fairly obscure issue, but hopefully it helps someone.

 

 

Speed up offline servicing

Currently I am creating some server builds for a client which will be deploying large numbers of servers over the coming months.

One of the things that is/was taking up a great deal of time was offline servicing for the base OS, primarily because the SCCM server is currently sitting on a virtual environment with disk that is struggling. With 2016 this isn’t so bad, as thanks to cumulative updates there are only a few updates to be installed. With 2012 R2, however, there is a large number of updates – and the process continually fails due to the poor performance of the server.

One of the things you can do to speed this process up is to remove unused images from your WIM.

Both Server 2012 R2 and 2016 come with 4 images (with an index of 1 to 4) within the install.wim. These generally correlate with:

  • Index1 – Server 2012R2/2016 standard core
  • Index2 – Server 2012R2/2016 standard desktop experience
  • Index3 – Server 2012R2/2016 datacentre core
  • Index4 – Server 2012R2/2016 datacentre desktop experience

If you view Logs\OfflineServicingMgr.log during an offline servicing operation, you will notice lines that state things such as:

Applying update with ID xxxxxx on image at index 1

Then the same update will be applied to images 2, 3 and 4. In this environment, we are not deploying Server Core, so we only need indexes 2 and 4 (Standard and Datacentre with the desktop experience).

We can view the indexes available within the wim by typing:

dism /get-imageinfo /imagefile:E:\<path to wim>\Install.wim

Then, if you don’t need indexes 1 and 3 (as we don’t in this scenario), delete the higher index first – removing an image renumbers the remaining indexes, so deleting index 1 first would shift the others down:

dism /delete-image /imagefile:E:\<path to wim>\Install.wim /index:3
dism /delete-image /imagefile:E:\<path to wim>\Install.wim /index:1
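If you prefer PowerShell, the built-in DISM module does the same thing (the WIM path below is illustrative; as with dism.exe, delete the higher index first, because the remaining indexes are renumbered after each deletion):

```powershell
# List the images in the WIM, then remove the two Server Core indexes.
$wim = 'E:\Sources\OS\Server2012R2\Install.wim'   # example path only
Get-WindowsImage -ImagePath $wim                  # shows index, name and edition
Remove-WindowsImage -ImagePath $wim -Index 3      # Datacentre core
Remove-WindowsImage -ImagePath $wim -Index 1      # Standard core
```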

Now when you use offline servicing, each update will only be compared against 2 images, instead of 4, significantly reducing the processing time/disk usage, especially for 2012 R2 (where there are a large number of updates to apply)

This can also be used for client OS’s, such as Windows 10.

One important note – this will not reduce the size of the WIM. It will simply remove the index and save you time for offline servicing.

If your image is already in SCCM, then you must

  1. Go to Software Library | Operating systems | Operating system images
  2. Right click on the appropriate image | properties | Images tab
  3. Click on “Reload”, then notice the dropdown has been reduced from 4 indexes, then hit “OK” to exit.
  4. Go into your task sequence
  5. Update the image index as required.

Importing updates into WSUS on Server 2016 fails

I ran into a situation recently where I needed to import a specific update from the Microsoft Update Catalog into WSUS (and in turn into SCCM).

I opened WSUS, clicked on “Import updates”, selected my update and was presented with:

“This update cannot be imported into Windows Server Update Services, because it is not compatible with your version of WSUS”

Strange… WSUS on 2016 is extremely similar to WSUS on 2012 R2… so what’s going on here?

Long story short, there seems to be an issue with the URL the WSUS console passes to the browser when you click “Import updates”.

When you click “Import updates”, IE will open (or you’ll use IE anyway, since it makes importing updates into WSUS easier) at:

http://catalog.update.microsoft.com/v7/site/Home.aspx?SKU=WSUS&Version=10.0.14393.2248&ServerName=<servername>&PortNumber=8530&Ssl=False&Protocol=1.20

Simply change the last part of the URL, “1.20”, to “1.80” – and importing updates will now work:

http://catalog.update.microsoft.com/v7/site/Home.aspx?SKU=WSUS&Version=10.0.14393.2248&ServerName=<servername>&PortNumber=8530&Ssl=False&Protocol=1.80

The importance of cleaning up WSUS – even if you are only using it for SCCM

In the distant past, we would generally say to clients “Leave WSUS admin alone – the SCCM settings will take precedence, so there is no point in using it”

As the years have passed and the number of available updates has grown considerably, this is no longer the case. The SCCM settings still take precedence, however the sheer number of updates has gotten so large that it can cause performance issues for the SCCM server – and even cause the IIS timeout to expire when SCCM is syncing updates. This generally results in an endless loop and 100% CPU for w3wp.exe.

Unfortunately, trying to list the updates in the WSUS console will often lead to the console crashing and the dreaded prompt to “reset server node”.

The best way to address this isn’t found in the many articles you’ll turn up by googling “sccm high CPU w3wp.exe” (or similar). Generally these will suggest modifying a number of entries in your IIS config to increase timeouts, etc. – these can assist, but they don’t really address the root cause of the issue, which is simply the huge number of updates.

The best way to resolve this is to simply reduce the number of updates shown in WSUS. This will reduce your memory usage, reduce the number of updates that SCCM has to scan each time, and generally put less load on your server.

There are two ways you can go about this:

 

The manual method

If you have the resources available, I’ve found that temporarily increasing the RAM and CPU count on the SCCM server can help alleviate the “reset server node” issue.

Once you get in (it may take a few attempts), go to ‘Updates’ > ‘All Updates’, set the search criteria to “Any except declined” and “any”, and hit refresh. Once loaded, add the “Supersedence” column to the view and sort by it.

Decline all updates that are superseded. If you don’t clean up regularly, this number could be very high.
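If the console keeps dying on you, the same decline can be scripted with the built-in UpdateServices module on the WSUS server (a sketch – adjust the server name, port and SSL settings for your environment):

```powershell
# Decline every update that is superseded and not yet declined.
# Run on the WSUS server itself; adjust name/port/SSL for your environment.
$wsus = Get-WsusServer -Name 'localhost' -PortNumber 8530
$wsus.GetUpdates() |
    Where-Object { $_.IsSuperseded -and -not $_.IsDeclined } |
    ForEach-Object {
        Write-Host "Declining: $($_.Title)"
        $_.Decline()
    }
```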

After this, you can create views to decline updates for products you no longer use (e.g. Windows 7 and Office 2010) or search for things including “beta”, “preview” and “itanium” and decline those updates as well.

After all that is done, run the Server Cleanup Wizard. You will likely need to do this a number of times – if your server is struggling already, the wizard will also struggle to complete (it seems to handle large numbers of updates on low-end servers quite poorly).

 

The scripted method

A guy called “AdamJ” has written a very useful script, which you can get at https://community.spiceworks.com/scripts/show/2998-wsus-automated-maintenance-formerly-adamj-clean-wsus . I know – I can see some of you recoiling at the suggestion of using a user-submitted Spiceworks script. The Spiceworks forums do have their share of people (just like the MS forums) suggesting “sfc /scannow” for anything and everything – however, this script really is very good, and something I’ve been using for approximately 2 years, with nothing but good things to say about it.

You can run it with the “-FirstRun” parameter and it will, by default, clean out superseded updates – the main cause of huge update numbers – but it will also grab the ever-annoying Itanium, preview and expired updates. At approximately line 629 of the script, you can also configure it to remove IE 7/8/9/10 updates, beta updates, etc. (or, if you are one of the few people in the world with Itanium, keep the Itanium updates!).

This script, unlike the console, will keep plugging away… and if it should happen to get stopped for whatever reason, will resume where it left off.

When removing obsolete updates, I have seen some clients (with lower spec servers) where this process can take a long time, so long that you may have to leave it overnight (or over the weekend), and sometimes restart the process.

This process will get you a fair chunk of the way, and allow you to then open the WSUS console and decline further updates, such as products you no longer use (Windows 7 and Office 2010 are reasonably common ones these days), x86 updates if you have an all-x64 environment, and in the case of Win 10, updates that no longer apply to you (e.g. if your entire fleet is on Win 10 1607 and 1703, you don’t need the 1511 updates).

After all this is complete, you do need to run the Server Cleanup Wizard again – which frequently crashes and ends up with the “reset server node” error. You can re-run the WSUS cleanup script, or simply run the Server Cleanup Wizard multiple times.

 

My experiences using these methods

I’ve found that environments that were previously at 100% CPU start working again, and environments that were massively over-specc’d (and didn’t have the high CPU issue) went from using 6GB of RAM for the w3wp.exe process down to 500MB. This will obviously vary from environment to environment.

After this process is completed, you should be able to get into the WSUS console and run the Server Cleanup Wizard without crashes.

If you’re interested, you can also sync SCCM software updates, look at wsyncmgr.log, and see the far smaller list of updates it now syncs against.

Longer term, the AdamJ script has monthly options that you can schedule in – or, for our clients that are uncomfortable with that, simply get in and clean up once every 3 months or so, so your list of updates doesn’t get out of hand.

The first cleanup is the biggest; after that, performing the same operations once every 3 months is plenty – and if you forget about it and it happens to be once every 6 months instead, you’ll still be fine.

 

Taking it a step further – shrinking the SUSDB

One of the things which the AdamJ cleanup script does is truncate the SQL table “tbEventInstance” which uses up the majority of space in most WSUS databases that have been in use for a while.

If you are not comfortable with a script doing this, you can connect to the database and execute the following query against the “SUSDB” database – “truncate table tbEventInstance”.

If the DB is on a full version of SQL (and if you’re running SCCM, I would argue the SUSDB should be on the same SQL instance, rather than installing an additional Windows Internal Database), you can then create a maintenance plan to reindex, shrink, etc. the database.

If you are using Windows Internal Database, you can still install SQL Server Management Studio, connect to “\\.\pipe\MICROSOFT##WID\tsql\query”, and from there execute the truncate, shrink the database, etc. Keep in mind that you cannot use maintenance plans with Windows Internal Database.
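As a sketch, both steps can also be run from a PowerShell prompt with sqlcmd (assuming sqlcmd is installed; the named pipe below is the standard WID instance path):

```powershell
# Truncate the event table and shrink SUSDB on the Windows Internal Database.
$wid = '\\.\pipe\MICROSOFT##WID\tsql\query'
sqlcmd -S $wid -E -Q 'USE SUSDB; TRUNCATE TABLE tbEventInstance;'
sqlcmd -S $wid -E -Q 'DBCC SHRINKDATABASE (SUSDB);'
```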

 

What about large environments where you do require a wide range of updates?

In large environments, you may not be able to decline entire product sets for extended periods (e.g. it’s relatively easy to move everyone onto Windows 10 (and get rid of all Win 7) for 2,000 PCs, but not so easy for 50,000 PCs); however, many of the points in this article still hold true.

  • The largest reduction in updates will still come from superseded updates
  • Language packs are another area where there’s lots of opportunity for reduction (e.g. if you require English and French, there are many other languages that can be declined)
  • Ensure your SUSDB is on your full SQL instance – that way you are running one less database instance (and therefore utilising fewer resources) and also have maintenance plans at your disposal
  • Use a maintenance plan to keep your SUSDB database optimal

 

SCCM Update Cleanup

It’s also worth noting that once the SUSDB has been cleaned up, SCCM will execute its own cleanup after the next sync. This cleanup removes obsolete update CIs (Configuration Items) that correspond to the items removed from the SUSDB. In most environments this isn’t usually noticeable, however on severely under-resourced SCCM servers it can cause its own set of problems (though there’s not a huge amount you can do about it other than wait). This will generally present as the SCCM console locking up while it’s doing back-end SQL processes – and if you look at the SQL threads, you’ll see a WSUS-related one blocking all the others. Realistically, your best option is to increase the resources available to the server – and if that isn’t a possibility, settle in for a long wait!

Exchange hybrid – mailboxes missing on-premise

While hybrid exchange environments are awesome for stretching your on premise exchange topology to Office 365, they do introduce a bunch of complexity – primarily around user creation, licensing, and mail flow.

I recently had an issue at a client where an on-premise service was getting email bounce-backs for messages destined for a few Exchange Online mailboxes. For some reason, these few mailboxes didn’t appear in the on-premise Exchange environment (as remote Office 365 mailboxes), so Exchange was unable to route the emails destined for those particular mailboxes.

In general, you should be creating your mailboxes on premise (Enable-RemoteMailbox), then synchronising via AADConnect – that way the on premise environment knows about the mailbox and it can be managed properly. This client was actually doing this, but obviously the process broke somewhere along the way for a few mailboxes.

There’s a bunch of different options on Google about how to get the mailbox to show up on premise – with a lot of them recommending to remove the mailbox and start again (er… how about no!).

I came across this Microsoft article on a very similar issue, but for shared mailboxes created purely in Exchange Online. Looking at the process, it seemed a modified version might work for user mailboxes – and it does. Below is a quick and dirty PowerShell approach that can be used to fix a single mailbox:
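The sketch below stamps the on-premise AD user with the attributes that mark it as a remote (Office 365) user mailbox – the same attribute-stamping idea the Microsoft article uses for shared mailboxes. The account name, routing address and attribute values are assumptions based on that approach; verify them against a known-good remote mailbox in your own environment before running, then let AADConnect sync.

```powershell
# Stamp an on-premise AD user so on-premise Exchange sees it as a
# remote (Office 365) user mailbox. Values are the commonly documented
# ones for a remote user mailbox - verify against your environment.
Import-Module ActiveDirectory

$user        = 'jsmith'                                # example sAMAccountName
$routingSmtp = 'jsmith@contoso.mail.onmicrosoft.com'   # example routing address

Set-ADUser -Identity $user -Replace @{
    msExchRemoteRecipientType  = 4            # migrated user mailbox
    msExchRecipientDisplayType = -2147483642  # synced remote user
    msExchRecipientTypeDetails = 2147483648   # remote user mailbox
    targetAddress              = "SMTP:$routingSmtp"
}

# Optionally copy the ExchangeGuid from the cloud mailbox so future
# mailbox moves work (obtain it via Get-Mailbox in Exchange Online):
# Set-RemoteMailbox $user -ExchangeGuid '<guid from Exchange Online>'
```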

 

Windows 10 1709 and installing Hyper-V

It’s not often that I actually install Hyper-V on a client OS, so it was only by chance that I came across a bit of a weird issue when installing it on Windows 10 1709. Obviously I performed the usual process: Virtualization was enabled in the BIOS, enabled Hyper-V in Windows Features, rebooted and it all appeared to install/enable successfully.

Launched the Hyper-V console, and the local PC wasn’t automatically selected. Odd. Added ‘Localhost’ to the view, and received an error that indicated the services may not be running. Sure enough, Hyper-V Virtual Machine Manager was running, but Hyper-V Host Compute Service (vmcompute.exe) wasn’t. When trying to launch it, I received “The service did not respond to the start or control request in a timely fashion”. Event viewer detailed the exact same error – nothing more. Awesome!

Tried it on another machine in the same environment and experienced the exact same issue. Apparently, another Adexian (Hayes) also installed Hyper-V on one of his 1709 PCs recently – and his worked fine – so what the trigger is, I’ve yet to determine. On a related note, Hayes’s machine won’t shut down since the Hyper-V install – it reboots instead (and he’s yet to find a fix for this).

Obviously it’s time for Google – and it seems to be quite a common issue with 1709. Apparently Microsoft added some additional security policies that prevent Hyper-V running in certain scenarios (usually when there are some non-Microsoft DLLs loaded in vmcompute.exe). There’s even a Microsoft support article detailing a similar issue where the vmcompute.exe process crashes (rather than, as in my case, not launching in the first place).

In the end, the recommended solutions I could find were pretty varied:

  • Roll back to 1703 (no thanks – plus it wasn’t an upgrade)
  • Uninstall Sophos (wasn’t installed)
  • Uninstall any other Antivirus (McAfee installed in this instance, though anecdotal evidence suggests uninstalling it doesn’t work – didn’t try)
  • Configure ‘Control Flow Guard’ in the Exploit settings of Defender to be ‘On’ (which it was)

Going with the easiest option first (configure Control Flow Guard), I figured I’d set that to ‘On’. You can find this setting under:

Windows Defender Security Center > App and Browser Control > Exploit Protection Settings > Control flow guard

For me, it was already set to ‘Use Default (On)’. Damn. OK, so what happens if we turn it off (and reboot)? Unsurprisingly, it didn’t fix the issue. What it did do, though, was cause vmcompute.exe to start launching and generating a crash error (as detailed in the Microsoft support article).

Given the setting is meant to be ‘On’, I decided to turn it back on and see what happens. And it works. Why? No idea!

Either way, the solution for me (on two computers) was to disable CFG, reboot, re-enable CFG and reboot again.

Microsoft products – consolidated table of end of life dates

Microsoft product end-of-support dates are sometimes not easy to find, and it’s not getting any better with the “current branch” releases and cloud solutions being governed by the Modern Lifecycle Policy.

The Modern Lifecycle Policy page further links to 3 product categories: O365, Cloud Platform and Dynamics. Unfortunately, it’s not clear (at least to me) how this helps with products such as SCCM current branch (be it 1606, 1702, 1706, 1710 or 1802) – however, this information is available at another location.

Likewise with the “traditional” products, most end of life information is available here – but to say that the information is difficult to search through is an understatement.

It also sometimes lacks detail – for example, there is no mention of the differing support for Windows 8.1 with and without Update 1.

We have a number of clients that take the approach that while a server is running, they leave it there – and while I may personally not like this approach (I prefer to roll through the OS upgrades as they come out), it is a valid approach, and end-of-life information is important for them.

Keep in mind that everything listed below is end of extended support, not mainstream support – and I have taken some liberties (e.g. assumed that Windows 8.1 is 8.1 with Update 1).

Windows 10 dates have been sourced from the product lifecycle page; however, this blog entry states that an additional 6 months has been granted to the displayed Windows 10 versions.

If you find the below useful – cool. If I’ve got something wrong, or missed something that is key (in your opinion), please leave a comment.

 

Product – End of life date (end of extended support)

Windows 2003 SP2 – July 14, 2015
Windows 2008 – July 12, 2011
Windows 2008 R2 SP1 – Jan 14, 2020
Windows 2012 – Oct 10, 2023
Windows 2012 R2 – Oct 10, 2023
Windows 2016 – Jan 11, 2027
Windows XP SP3 – April 8, 2014
Windows Vista SP2 – April 11, 2017
Windows 7 SP1 – Jan 14, 2020
Windows 8 – Jan 12, 2016
Windows 8.1 with Update 1 – Jan 10, 2023
Windows 10 RTM (1507) – May 9, 2017
Windows 10 1511 – Oct 10, 2017 (end of support); April 10, 2018 (additional servicing for Enterprise and Education)
Windows 10 1607 – April 10, 2018 (end of support); Oct 9, 2018 (additional servicing for Enterprise and Education)
Windows 10 1703 – Oct 9, 2018 (end of support); April 9, 2019 (additional servicing for Enterprise and Education)
Windows 10 1709 – April 9, 2019 (end of support); Oct 8, 2019 (additional servicing for Enterprise and Education)
Office 2007 – Oct 10, 2017
Office 2010 SP2 – Oct 13, 2020
Office 2013 – April 11, 2023
Office 2016 – Oct 14, 2025
Lync 2010 – April 13, 2021
Lync 2013 – April 11, 2023
Skype for Business 2015 – April 11, 2023
Exchange 2010 SP3 – Jan 14, 2020
Exchange 2013 SP1 – April 11, 2023
Exchange 2016 – Oct 14, 2025
Forefront TMG – April 14, 2020
Sharepoint 2010 – July 10, 2012
Sharepoint 2013 SP1 – April 11, 2023
Sharepoint 2016 – July 14, 2026
SCCM 2012 SP2 – July 12, 2022
SCCM 2012 R2 SP1 – July 12, 2022
SCCM 1606 – July 22, 2017
SCCM 1610 – Nov 18, 2017
SCCM 1702 – March 27, 2018
SCCM 1706 – July 31, 2018
SCCM 1710 – May 20, 2019
SCCM 1802 – Sept 22, 2019
SCVMM 2012 SP1 – July 12, 2022
SCVMM 2016 – Jan 11, 2027
SCOM 2012 SP1 – July 12, 2022
SCOM 2012 R2 – July 12, 2022
SCOM 2016 – Jan 11, 2027
SCORCH 2012 SP1 – July 12, 2022
SCORCH 2012 R2 – July 12, 2022
SCORCH 2016 – Jan 11, 2027
SCSM 2016 – Jan 11, 2027

Microsoft Surface Laptops and SCCM OS Deployment

So you’ve just picked up your shiny new Microsoft Surface Laptop and want to put your current Windows 10 Enterprise SOE on it. Of course you have it all set up in SCCM (right?), so you figure you’ll kick off a build and be up and running in a half hour or so.

Granted, the Surface Laptop isn’t marketed as an Enterprise device and comes with Windows 10 S pre-installed, but you can do a simple in-place upgrade to Windows 10 Pro. So obviously it should be capable of running any of the Windows 10 OS’s (which it is).

Unfortunately it’s not quite so simple to get them to build through SCCM. Below are the three issues I experienced when trying to build one via SCCM (having been silly enough to think it’d be straight plug and play!):

  • PXE doesn’t work. I’ve got a couple of USB-to-Ethernet adapters that work fine with PXE on other devices – but they simply don’t work with the Surface Laptop. I updated the firmware on the laptop to the latest (as Surface Pros had similar issues with old firmware), and still had the same issue. Whether this is unique to the Surface Laptop I have, or all of them, I’m not sure (as I’ve only got the one so far).
  • The keyboard doesn’t work in WinPE. Touchpad mouse works fine – so you can kick off the task sequence – you just can’t use the keyboard at all. Fine if your task sequence is fully automated, but if you need to enter a computer name or troubleshoot issues, you’re going to need to import some drivers.
  • SecureBoot. Once I’d decided to use a bootable USB to kick off the task sequence, it started running through fine. Rebooted into Windows Setup, then proceeded to complain about the “Microsoft-Windows-Deployment” component of the Specialize pass in the Unattend.xml. Long story short, it’s caused by SecureBoot (more on this further down).

So let’s break down how to fix each.

PXE: Simply put, I haven’t been able to fix this. Updated firmware, used different PXE capable USB to Ethernet devices. As above, I’m not sure if this is just the one I have, or all of them – or even if it’ll be fixed in newer firmwares. At this stage, it looks like the only option is SCCM boot media. Since the Surface Laptop only has a single USB port, you’ll either need to use a USB/Ethernet combo (like one of these – not sure if they’re PXE capable or not, haven’t tested), or you’ll need to use an external USB hub. Note: you can initiate PXE from full power off by pressing Volume Down + Power. If you want to USB boot, you need to go into the UEFI setup by pressing Volume Up + Power. On the boot settings page, you can swipe left on any of the boot options to initiate a ‘boot now to selected device’.

SecureBoot/Unattend.xml: This one is a little more tricky. Essentially if you look at the setupact.log file, you’ll see something along the lines of “[SETUPGC.exe] Hit an error (hr = 0x800711c7) while running [cmd / c net user Administrator /active:yes]“. 0x800711c7 translates to “Your organization used Device Guard to block this app”. According to this Microsoft article, it’s due to the Code Integrity policy in UEFI being cleared as part of the OS upgrade (from Windows 10 S to something else). Since Windows 10 S won’t let you launch certain executables, it blocks them during the OS Deployment as well. Supposedly fixed by a reboot – but by then you’ve broken your OS deployment. The obvious fix is to disable SecureBoot, then re-enable it after the task sequence completes. I’m not really a fan of this approach, and I’m not sure how you’d automate it (even if you could).

During my research, I found a reference to someone suggesting that applying the Surface Laptop drivers during the OS Deployment actually fixes the issue. I’m not 100% sure if this is the case – as I disabled SecureBoot, rebooted and re-enabled SecureBoot before testing out this process – but doing so afterwards did actually work (with SecureBoot enabled). Since I’ve only got the one device so far, I’ll update this blog once I’ve tested on others. I’ve put some instructions further below on how to import the drivers into SCCM (as they’re provided in MSI format now, and you’ll need them to apply before Windows Setup).

Keyboard during WinPE: Essentially, you need 5 drivers imported into the boot image for the keyboard to work (as detailed by James in this TechNet blog). Below is an image of the drivers required in the boot image:

Adding Surface Laptop Drivers to SCCM

Surface drivers all now come in MSI format – which is good for normal deployments, but doesn’t really help you for OS Deployments (assuming you want to apply the drivers prior to Windows Setup). After downloading the latest Surface Laptop drivers from Microsoft, you can use the following example command to extract them (which goes from around 300MB to a whopping 1.3GB):

msiexec.exe /a C:\Temp\SurfaceLaptop_Win10_15063_1802008_1.msi targetdir=C:\Temp\Extract /qn

From there you can import the “Drivers” sub-folder into SCCM as per usual. If you plan on applying them as a proper ‘Driver Package’ during the OS Deployment, you’ll need to import them into their own driver package, distribute it to the relevant Distribution Points, then add it to the task sequence. You can use the following WMI query to filter it to just Surface Laptops:

SELECT * FROM Win32_ComputerSystem WHERE Model LIKE "%Surface Laptop%"
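To sanity-check that filter against an actual device, you can run the same query locally (a sketch using the standard CIM cmdlets):

```powershell
# Check what the Model property actually returns on the device,
# then test the same LIKE pattern the task sequence condition uses.
Get-CimInstance Win32_ComputerSystem | Select-Object Manufacturer, Model

Get-CimInstance -Query "SELECT * FROM Win32_ComputerSystem WHERE Model LIKE '%Surface Laptop%'"
```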

Obviously you can now also add the keyboard drivers to the required boot image!

Azure AD Connect – Permissions Issues

I’ve had various versions of AD Sync/Azure AD Connect running in my development environment over the years, and have used a number of different service accounts when testing out different configurations or new features. Needless to say, the permissions for my Sync account were probably a bit of a mess.

Recently, I wanted to try out group writeback. It’s been a preview feature of Azure AD Connect for quite a while now – it allows you to synchronise Exchange Online groups back to your AD environment so that on premise users can send and receive emails from these groups.

Launched the AADConnect configuration, enabled Group Writeback, then kicked off a sync. Of course, I start getting ‘Access Denied’ errors for each of the Exchange Online groups – couldn’t be that easy!

Generally speaking, you need to also run one of the “Initialize-<something>Writeback” commands. When I went looking for the appropriate commands (as I don’t remember these things off the top of my head!), I came across an interesting TechNet Blog article: Advanced AAD Connect Permissions Configuration – it’s pretty much a comprehensive script that handles all the relevant permissions (including locating the configured sync account and sync OUs).
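For reference, the group writeback permissions step usually looks something like the sketch below, using the AdSyncPrep module that ships with Azure AD Connect. The install path, OU and account names are examples only, and the cmdlet signature has varied between AADConnect versions – check what your installed module actually exposes:

```powershell
# Load the prep module that ships with Azure AD Connect (default install path).
Import-Module 'C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1'

# Grant the sync account the rights it needs on the group writeback OU.
# OU and account names below are examples only.
Initialize-ADSyncGroupWriteBack `
    -GroupWriteBackContainerDN 'OU=CloudGroups,DC=contoso,DC=com' `
    -ADConnectorAccountName 'svc-aadsync' `
    -ADConnectorAccountDomain 'CONTOSO'
```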

Gave it a whirl, entered credentials as required, and what do you know – permissions all now good!