Execution status received: 24 (Application download failed)

I came across an interesting issue today where I couldn’t get applications to install on a specific piece of hardware during a task sequence. All task sequence steps would run fine, other than ‘Application’ installs – and they would work fine on other hardware.

Looking in the smsts.log file, I could see the following error for each application:
Execution status received: 24 (Application download failed)

I checked the boundaries, and everything was good. Google has many instances of the same issue, but none seemed to have relevant (or helpful) solutions. In the end, I realised this device has 4G LTE with a valid SIM in it, and it was connecting automatically during the task sequence. It seems this was confusing the task sequence, which then couldn't locate the content for the applications!

The simplest solution I could find was to disable the NIC during the task sequence, then re-enable it at the end. The following are the powershell commands I put in the task sequence to get it working:

To Disable: powershell.exe -ExecutionPolicy Bypass -Command "Get-NetAdapter | ?{$_.MediaType -eq 'Wireless WAN'} | Disable-NetAdapter -Confirm:$False"

To Enable: powershell.exe -ExecutionPolicy Bypass -Command "Get-NetAdapter | ?{$_.MediaType -eq 'Wireless WAN'} | Enable-NetAdapter -Confirm:$False"

Storing credentials for Powershell scripts – Encrypted Strings or Credential Manager

If you’ve been reading through some of our other blog posts, you’ll have noticed that we generally like to automate things using Powershell – which quite often requires storing credentials somehow. Preferably not in plain text.

In our existing scripts on this blog, the process has generally involved generating an encryption key, using that key to convert a password to an encrypted string, then storing both in individual files. While that method works fine and has some nice benefits, there’s another alternative that’s worth knowing about: storing credentials in Windows Credential Manager.

I’ll put the code for both methods further down, but first I’d like to list the main differences between the two methods:

Encrypted string in file

  • Portable between users and computers
  • Can be used in WinPE or wherever Powershell is available
  • Only as secure as the NTFS permissions on the files storing the key and encrypted password (someone with read access can use powershell to reveal the password if they know what the files are for)
  • Utilises a bit of ‘security by obfuscation’ – depending on how you name the files, no one is going to know what they’re for, and malware/viruses are unlikely to realise the files are encrypted credentials that could be easily reverse-engineered.
  • When used with Task Scheduler, the scheduled task can be run as any user/local system

Windows Credential Manager

  • Only portable between computers if using roaming profiles; isn’t portable between users
  • Can only be used in a full Windows environment that has Windows Credential Manager (ie: not in WinPE)
  • Is as secure as the Windows account attached to the credential store (assuming a complex password, pretty secure)
  • If the Windows account becomes compromised somehow (you leave the PC unlocked, malware/virus, etc), it’s a relatively simple process to pull any account details you’ve stored in Windows Credential Manager via Powershell.
  • When used with Task Scheduler, the scheduled task has to be running as the Windows account attached to the credentials store

So, how do you do each of these?
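
Here’s a rough sketch of each approach. For the encrypted string method, the file paths and account name below are just placeholders; for Credential Manager, I’m assuming the community ‘CredentialManager’ module from the PowerShell Gallery (there are other ways, but the module is the least painful):

Encrypted string in file – one-off setup:

# Generate a 256-bit AES key and save it to a file
$Key = New-Object Byte[] 32
[Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($Key)
$Key | Out-File 'C:\Scripts\service.key'
# Prompt for the password and save it, encrypted with that key
Read-Host 'Enter password' -AsSecureString | ConvertFrom-SecureString -Key $Key | Out-File 'C:\Scripts\service.pwd'

Encrypted string in file – using it in a script:

$Key = Get-Content 'C:\Scripts\service.key'
$SecurePass = Get-Content 'C:\Scripts\service.pwd' | ConvertTo-SecureString -Key $Key
$Cred = New-Object System.Management.Automation.PSCredential ('DOMAIN\svc-myscript', $SecurePass)

Windows Credential Manager – one-off setup (run as the account that will run the script):

Install-Module CredentialManager -Scope CurrentUser
New-StoredCredential -Target 'MyScriptCred' -UserName 'DOMAIN\svc-myscript' -Password 'P@ssw0rd!' -Persist LocalMachine

Windows Credential Manager – using it in a script:

$Cred = Get-StoredCredential -Target 'MyScriptCred'

Either way, $Cred ends up as a standard PSCredential object that can be passed to whatever cmdlet needs it.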

 

Exchange hybrid – mailboxes missing on-premise

While hybrid exchange environments are awesome for stretching your on premise exchange topology to Office 365, they do introduce a bunch of complexity – primarily around user creation, licensing, and mail flow.

I recently had an issue at a client where they had email bounce-backs from an on premise service destined for a few Exchange Online mailboxes. For some reason, these few mailboxes didn’t appear in the on-premise exchange environment (as remote Office 365 mailboxes), so exchange was unable to route the emails destined for those particular mailboxes.

In general, you should be creating your mailboxes on premise (Enable-RemoteMailbox), then synchronising via AADConnect – that way the on premise environment knows about the mailbox and it can be managed properly. This client was actually doing this, but obviously the process broke somewhere along the way for a few mailboxes.

There’s a bunch of different options on Google about how to get the mailbox to show up on premise – with a lot of them recommending to remove the mailbox and start again (er… how about no!).

I came across this Microsoft article on a very similar issue, but for Shared Mailboxes created purely in Exchange Online. Looking at the process, it looked like a modified version may work for user mailboxes – and it does. Below is a quick and dirty PowerShell process that can be used to fix a single mailbox.
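
The gist of the fix is to enable a remote mailbox on-premise for the existing user, then stamp it with the ExchangeGuid of the Exchange Online mailbox so the two line up. A rough sketch of the commands (jane.doe@contoso.com and the routing address are placeholders for your own tenant):

# In Exchange Online PowerShell – grab the ExchangeGuid of the cloud mailbox
$UPN = 'jane.doe@contoso.com'
(Get-Mailbox $UPN).ExchangeGuid

# In the on-premise Exchange Management Shell – create the remote mailbox object and match the GUID
Enable-RemoteMailbox $UPN -RemoteRoutingAddress 'jane.doe@contoso.mail.onmicrosoft.com'
Set-RemoteMailbox $UPN -ExchangeGuid '<ExchangeGuid from above>'

# On the AADConnect server – kick off a delta sync, then re-test mail flow
Start-ADSyncSyncCycle -PolicyType Delta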

 

Windows 10 1709 and installing Hyper-V

It’s not often that I actually install Hyper-V on a client OS, so it was only by chance that I came across a bit of a weird issue when installing it on Windows 10 1709. Obviously I performed the usual process: enabled virtualization in the BIOS, enabled Hyper-V in Windows Features, rebooted, and it all appeared to install/enable successfully.

Launched the Hyper-V console, and the local PC wasn’t automatically selected. Odd. Added ‘Localhost’ to the view, and received an error that indicated the services may not be running. Sure enough, Hyper-V Virtual Machine Manager was running, but Hyper-V Host Compute Service (vmcompute.exe) wasn’t. When trying to launch it, I received “The service did not respond to the start or control request in a timely fashion”. Event viewer detailed the exact same error – nothing more. Awesome!

Tried it on another machine in the same environment and experienced the exact same issue. Apparently, another Adexian (Hayes) also installed Hyper-V on one of his 1709 PCs recently – and his worked fine – so what the trigger is, I’ve yet to determine. On a related note, Hayes’s machine won’t shut down since the Hyper-V install – it reboots instead (and he’s yet to find a fix for this).

Obviously it’s time for Google – and it seems to be quite a common issue with 1709. Apparently Microsoft added some additional security policies that prevent Hyper-V from running in certain scenarios (usually when there are non-Microsoft DLLs loaded in vmcompute.exe). There’s even a Microsoft support article detailing a similar issue where the vmcompute.exe process is crashing (rather than in my case where it wasn’t even launching in the first place).

In the end, the recommended solutions I could find were pretty varied:

  • Roll back to 1703 (no thanks – plus it wasn’t an upgrade)
  • Uninstall Sophos (wasn’t installed)
  • Uninstall any other Antivirus (McAfee installed in this instance, though anecdotal evidence suggests uninstalling it doesn’t work – didn’t try)
  • Configure ‘Control Flow Guard’ in the Exploit settings of Defender to be ‘On’ (which it was)

Going with the easiest option first (configure Control Flow Guard), I figured I’d set that to ‘On’. You can find this setting under:

Windows Defender Security Center > App and Browser Control > Exploit Protection Settings > Control flow guard

For me, it was already set to ‘Use Default (On)’. Damn. Ok, so what happens if we turn it off (and reboot)? Unsurprisingly, it didn’t fix the issue. What it did do, though, was cause vmcompute.exe to start launching and generating a crash error (as detailed in the Microsoft support article).

Given the setting is meant to be ‘On’, I decided to turn it back on and see what happens. And it works. Why? No idea!

Either way, the solution for me (on two computers) was to disable CFG, reboot, re-enable CFG and reboot again.
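
For anyone who’d rather script that toggle than click through the Security Center, 1709 also ships the ProcessMitigations cmdlets, which can flip CFG for vmcompute.exe specifically – a sketch of that approach (note this is the per-process setting rather than the system-wide default I was toggling above):

# Check the current exploit protection settings for the Hyper-V Host Compute Service binary
Get-ProcessMitigation -Name vmcompute.exe

# Explicitly enable Control Flow Guard for vmcompute.exe, then reboot
Set-ProcessMitigation -Name vmcompute.exe -Enable CFG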

Microsoft Surface Laptops and SCCM OS Deployment

So you’ve just picked up your shiny new Microsoft Surface Laptop and want to put your current Windows 10 Enterprise SOE on it. Of course you have it all set up in SCCM (right?), so you figure you’ll kick off a build and be up and running in a half hour or so.

Granted, the Surface Laptop isn’t marketed as an Enterprise device and comes with Windows 10 S pre-installed, but you can do a simple in-place upgrade to Windows 10 Pro. So obviously it should be capable of running any of the Windows 10 OS’s (which it is).

Unfortunately it’s not quite so simple to get them to build through SCCM. Below are the three issues I experienced when trying to build one via SCCM (and was silly enough to think it’d be straight plug and play!):

  • PXE doesn’t work. I’ve got a couple of USB to Ethernet adapters that work fine with PXE on other devices – but they simply don’t work with the Surface Laptop. I updated the firmware on the laptop to the latest (as Surface Pros had similar issues with old firmware), and still had the same issue. Whether this is unique to the Surface Laptop I have, or all of them – I’m not sure (as I’ve only got the one so far).
  • The keyboard doesn’t work in WinPE. Touchpad mouse works fine – so you can kick off the task sequence – you just can’t use the keyboard at all. Fine if your task sequence is fully automated, but if you need to enter a computer name or troubleshoot issues, you’re going to need to import some drivers.
  • SecureBoot. Once I’d decided to use a bootable USB to kick off the task sequence, it started running through fine. Rebooted into Windows Setup, then proceeded to complain about the “Microsoft-Windows-Deployment” component of the Specialize pass in the Unattend.xml. Long story short, it’s caused by SecureBoot (more on this further down).

So let’s break this down into how to fix each.

PXE: Simply put, I haven’t been able to fix this. Updated firmware, used different PXE capable USB to Ethernet devices. As above, I’m not sure if this is just the one I have, or all of them – or even if it’ll be fixed in newer firmwares. At this stage, it looks like the only option is SCCM boot media. Since the Surface Laptop only has a single USB port, you’ll either need to use a USB/Ethernet combo (like one of these – not sure if they’re PXE capable or not, haven’t tested), or you’ll need to use an external USB hub. Note: you can initiate PXE from full power off by pressing Volume Down + Power. If you want to USB boot, you need to go into the UEFI setup by pressing Volume Up + Power. On the boot settings page, you can swipe left on any of the boot options to initiate a ‘boot now to selected device’.

SecureBoot/Unattend.xml: This one is a little more tricky. Essentially if you look at the setupact.log file, you’ll see something along the lines of "[SETUPGC.exe] Hit an error (hr = 0x800711c7) while running [cmd /c net user Administrator /active:yes]". 0x800711c7 translates to "Your organization used Device Guard to block this app". According to this Microsoft article, it’s due to the Code Integrity policy in UEFI being cleared as part of the OS upgrade (from Windows 10 S to something else). Since Windows 10 S won’t let you launch certain executables, it blocks them during the OS Deployment as well. Supposedly fixed by a reboot – but by then you’ve broken your OS deployment. The obvious fix is to disable SecureBoot, then re-enable it after the task sequence completes. I’m not really a fan of this approach, and I’m not sure how you’d automate it (even if you could).

During my research, I found a reference to someone suggesting that applying the Surface Laptop drivers during the OS Deployment actually fixes the issue. I’m not 100% sure if this is the case – as I disabled SecureBoot, rebooted and re-enabled SecureBoot before testing out this process – but doing so afterwards did actually work (with SecureBoot enabled). Since I’ve only got the one device so far, I’ll update this blog once I’ve tested on others. I’ve put some instructions further below on how to import the drivers into SCCM (as they’re provided in MSI format now, and you’ll need them to apply before Windows Setup).

Keyboard during WinPE: Essentially, you need 5 drivers imported into the boot image for the keyboard to work – the exact drivers required are detailed by James in this technet blog.

Adding Surface Laptop Drivers to SCCM

Surface drivers all now come in MSI format – which is good for normal deployments, but doesn’t really help you for OS Deployments (assuming you want to apply the drivers prior to Windows Setup). After downloading the latest Surface Laptop drivers from Microsoft, you can use the following example command to extract them (which goes from around 300MB to a whopping 1.3GB):

msiexec.exe /a C:\Temp\SurfaceLaptop_Win10_15063_1802008_1.msi TARGETDIR=C:\Temp\Extract /qn

From there you can import the “Drivers” sub-folder into SCCM as per usual. If you plan on applying them as a proper ‘Driver Package’ during the OS Deployment, you’ll need to import them into their own driver package, distribute it to the relevant Distribution Points, then add it to the task sequence. You can use the following WMI query to filter it to just Surface Laptops:

SELECT * FROM Win32_ComputerSystem WHERE Model LIKE "%Surface Laptop%"

Obviously you can now also add the keyboard drivers to the required boot image!

Azure AD Connect – Permissions Issues

I’ve had various versions of AD Sync/Azure AD Connect running in my development environment over the years, and have used a number of different service accounts when testing out different configurations or new features. Needless to say, the permissions for my Sync account were probably a bit of a mess.

Recently, I wanted to try out group writeback. It’s been a preview feature of Azure AD Connect for quite a while now – it allows you to synchronise Exchange Online groups back to your AD environment so that on premise users can send and receive emails from these groups.

Launched the AADConnect configuration, enabled Group Writeback, then kicked off a sync. Of course, I started getting ‘Access Denied’ errors for each of the Exchange Online groups – couldn’t be that easy!

Generally speaking, you need to also run one of the “Initialize-<something>Writeback” commands. When I went looking for the appropriate commands (as I don’t remember these things off the top of my head!), I came across an interesting TechNet Blog article: Advanced AAD Connect Permissions Configuration – it’s pretty much a comprehensive script that handles all the relevant permissions (including locating the configured sync account and sync OUs).

Gave it a whirl, entered credentials as required, and what do you know – permissions all now good!

Project Honolulu

With the recent release of Server 1709 (and the fact that it’s Server Core only), Microsoft have also recently released a preview of a new server management product. This product is currently code-named ‘Project Honolulu’, and is a light-weight web-based management console for Windows Server. Details of the original preview release can be found here.

In the past, when you wanted to manage remote servers (and especially Server Core instances), you either had to:

  • RDP in to configure it
  • Enable WinRM and use remote powershell
  • Enable WinRM and use a remote Server Manager instance from Server 2012 R2 or Server 2016

Depending on what you were trying to achieve, you also may have had to use remote consoles like Computer Management, Event Viewer, Storage Management, Certificate Management, Firewall Management – none of which are built in to Server Manager, but can be launched from there.

Based on initial impressions of Project Honolulu, it looks like Microsoft is trying to improve that experience, and it looks like they’re actually moving in the right direction.

They’ve actually rolled a bunch of the above tools into a single interface – and the really surprising part is that it’s actually fast. Not like Server Manager, where it can sometimes take quite a while to refresh or load.

Obviously it’s not a complete product yet, and there’s a bunch of stuff in Server Manager that it would be nice to see in Honolulu – mostly around dashboards, additional tool sets (Active Directory tools, for example), and additional functionality in the existing tools.

Some of the things I found quite good with Honolulu:

  • The speed of loading remote information (event logs, services, etc)
  • The ability to remotely import a certificate PFX file without having to resort to painful powershell commands and scripts!
  • Set basic IP config on a remote server
  • Remote process monitor (graphical – not tasklist.exe!)
  • Remote storage management without setting up additional firewall rules
  • Virtual Machine dashboards/management. I haven’t had too much of a play with this so I’d say there’s stuff missing, but what’s there is actually quite good.
  • Remote Windows Update management (for those of you not using SCCM/WSUS to automate update installation)

Things it can’t do yet, but I’m hoping they add in:

  • Dashboards for overall status of servers
  • Search function for Registry viewing/editing
  • More settings for Services (eg: Logon details)
  • Editing of certificate private key permissions
  • Additional remote tools – such as Active Directory Users and Computers, DHCP, DNS, Remote Access Management, IIS, etc

If you’re interested in trying out Project Honolulu, it can be downloaded here. Installation instructions can be found here. To be honest, it’s a super simple setup – whether that changes in the future is yet to be seen!

Note: remote management via Honolulu does require Windows Management Framework 5.0, so you’ll need to install that on non-2016 servers.
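
A quick way to check what a target server is currently running (server01 being a placeholder):

# Check the PowerShell/WMF version on a remote server – anything below 5.0 needs WMF 5.0 installed first
Invoke-Command -ComputerName 'server01' -ScriptBlock { $PSVersionTable.PSVersion }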

Server 2016 LTSC vs Server 1709 Semi-Annual

In September 2016, Microsoft released Server 2016. A couple of months ago, they then released Server 1709. You’d be forgiven for thinking that Server 1709 is an upgrade to 2016 – because it’s actually not.

Much like Windows 10, Microsoft have gone down the path of having multiple ‘Channels’ with the Server products. Essentially:

  • Server 2016 is the server equivalent of Windows 10 LTSB (Long Term Servicing Branch)
  • Server 1709 is the server equivalent of Windows 10 CBB (Current Branch for Business)

Instead of using LTSB and CBB, the server OS’s are ‘Channels’ – so LTSC (Long Term Servicing Channel) and SAC (Semi-Annual Channel).

So what are the main differences?

Server 2016

  • Available in Standard, Datacenter and Essential editions
  • Available as Server Core, or Server with a GUI
  • 5 year mainstream support, 5 year extended support
  • New release expected every 2-3 years

Server 1709

  • Available in Standard or Datacenter (no Essential edition)
  • Only available as Server Core
  • 18 months mainstream support, no extended (much like Windows 10 CB and CBB)
  • Releases are semi-annual

So why would you go with 1709 over Server 2016? In general, it depends on your use-case scenario. The largest improvements in 1709 are around Containers and Nano Containers (with Nano Server being deprecated), along with some Hyper-V improvements. Obviously you’re going to be restricted to Server Core, but that’s not as big of a deal these days when you’re talking about built-in roles (due to significant improvements in remote management for Server 2016). In general, you’re only going to be considering 1709 (or any SAC release) in the following scenarios:

  • You’re looking to build a new Server Core server and you don’t mind upgrading it every 12-18 months
  • There’s specific features available in 1709 that aren’t available in 2016

The full list of updated features in 1709 can be found here. There’s also a new management interface on the way, currently named ‘Project Honolulu’ – this may help with some of the Server Core management concerns.

A couple of gotchas:

  • If you’re using Automatic Virtual Machine Activation (AVMA) on Datacenter Hyper-V hosts, it doesn’t seem to work with Server 1709 – at this stage I’m unable to find official information about this, so it seems that you’ll need to use Multiple Activation Keys (MAK) in the meantime (or… ongoing).
  • You can’t upgrade from Server 2016 to Server 1709 (even if it’s 2016 Core) – much like you can’t upgrade from Windows 10 LTSB to CBB.

 

ADFS, WAP and updating their public certificates

Renewing public certificates within an environment is always a bit of a pain – especially when you use the same certificate on a range of different systems and have to update each manually! When you’ve got a number of web-based systems that you publish externally, using a reverse proxy such as a Microsoft Web Application Proxy (WAP) can make the task a little less tedious.

With WAP, you can use a single wildcard certificate to publish any number of web-based services to the public internet. When you need to update the public certificate, you only need to update it in the one place – you don’t need to update each individual web service. In addition, WAP can also act as an Active Directory Federation Services Proxy (ADFS Proxy) – this allows you to present your ADFS infrastructure to the public internet without directly exposing your ADFS server(s).

In general, ADFS and WAP should go hand-in-hand. Internal clients hit the ADFS server directly (via the ADFS namespace), while external clients communicate via the WAP. By doing this, you can also set up different rules in ADFS to define what should happen for external authentication requests, compared to internal authentication requests (e.g: 2-factor auth for external, windows auth for internal).

Now, both ADFS and WAP need to have a public-signed certificate. What happens when those certificates expire? Obviously you need to renew them and update the configuration – which is what prompted me to write this article. Usually this is a pretty simple process – you import the new certificate into the local computer certificate store on each of your ADFS/WAP servers, then update the configuration.

Initially I noticed certificate trust errors appearing in the event logs of the WAP server, referencing a certificate thumbprint.

I had a look at the certificate on the ADFS server and sure enough, the certificate thumbprint matched the expired certificate on the ADFS server. Since I was using that certificate on the WAP server as well, I needed to update it in both systems. I started by importing the new public wildcard certificate into both the ADFS and WAP servers.

The next step is to update the configuration. For ADFS, you can pull up the ADFS console and go to the Service\Certificate node. From there, you select the ‘Service Communications’ certificate, hit the ‘Set Service Communications Certificate’ link, then follow the wizard. Then in the ADFS event log I started getting errors indicating the service couldn’t access the certificate’s private key.

Whoops, I forgot to give access to the service account for the private key! In the Certificate Management console, locate the public cert, right-click, select ‘All Tasks’ – ‘Manage Private Keys’ and make sure the service account has full access. I restarted the ADFS service (adfssrv) and the ADFS server looked to start up successfully. Or so I thought.

Assuming ADFS was all good, I then proceeded to update the main proxy certificate in WAP. To do this, you really only have the option of using a PowerShell command.
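
The cmdlet in question is Set-WebApplicationProxySslCertificate – something along these lines (with <newthumbprint> being the thumbprint of the new certificate in the local machine store):

# Find the thumbprint of the new wildcard certificate
Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Subject, NotAfter, Thumbprint

# Point the WAP/ADFS proxy binding at the new certificate
Set-WebApplicationProxySslCertificate -Thumbprint '<newthumbprint>'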

…and of course I was still getting trust errors. In the end I removed and re-added the WAP role to the server (it was a development environment – and since the rules and configuration are stored with ADFS, it wasn’t a huge issue). When trying to re-create the trust to the ADFS server via the wizard, I was getting a trust error, along with a matching error in the event log.

Odd. I could resolve and ping the ADFS server (both directly and via the ADFS namespace) – and the credentials used were an administrator on the remote server. The new certificate was showing correctly in the ADFS console, and the event logs on the ADFS server indicated it was all fine. So I started going through all the config via PowerShell instead. After a bit of investigation, I ran the Get-AdfsSslCertificate command. Despite the ADFS console showing the correct certificate, PowerShell was still showing the old one!

I ran Get-ChildItem -Path Cert:\LocalMachine\My to get the thumbprint of the new certificate, then Set-AdfsSslCertificate -Thumbprint <newthumbprint> to set it. I restarted the service with Restart-Service adfssrv and double-checked the certificate. Ok, NOW we were looking good.

As it turns out, the GUI wizard will update the configuration in the ADFS database, but not the binding on HTTP.sys.
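
So, pulling the working ADFS-side steps together, the fix ends up looking like this (run on the ADFS server, with <newthumbprint> swapped in):

# Find the thumbprint of the new certificate
Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object Subject, NotAfter, Thumbprint

# Update the SSL binding on HTTP.sys (the console wizard only updates the ADFS configuration database)
Set-AdfsSslCertificate -Thumbprint '<newthumbprint>'

# Restart ADFS and confirm the binding now shows the new certificate
Restart-Service adfssrv
Get-AdfsSslCertificate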

I re-ran the WAP wizard and everything started working correctly.

One other thing to take note of – the above commands are all about updating certificates specifically for ADFS and the ADFS Proxy (WAP) – if you have additional published rules in WAP, you’ll need to update the certificate thumbprint against those as well!
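
Assuming the same wildcard certificate is used for every published application, something like the following should take care of them in one hit:

# Update the external certificate on every published WAP application
Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint '<newthumbprint>'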

 

Windows 10 Fast Startup Mode – Maybe not so good for enterprise!

Windows 10 includes a feature called “Fast Startup”, which is enabled by default. The whole idea behind this feature is to make it so computers don’t take as long to boot up after being shut down (rather than going into hibernation or sleep). It achieves this by essentially using a cut-down implementation of Windows Hibernation. Instead of saving all user and application state to a file like traditional hibernation, it only saves the kernel and system session to the hibernation file (no user session data) – that way, when it “turns on”, it loads the previous system session into RAM and off you go. It’s worth noting that this process doesn’t apply to reboots – only shutdowns. Reboots follow the traditional process of completely unloading the kernel and starting from scratch on boot-up.

Obviously, it’s a great idea for consumers – quicker boot-up and login times = happy consumers.

When you start using it in a corporate environment though, you can start running into some issues – primarily:

  • It can cause the network adaptor to not be ready prior to the user logging in. If you’re using folder redirection (without offline files – for computers that are always network-connected), then this isn’t such a good thing. It’s also not such a great thing for application of user-based group policies that only apply during login.
  • Some Windows Updates require the computer to be shut down/rebooted for them to install correctly. In the case of Fast Startup, the system isn’t really shutting down – it’s hibernating. Since users in corporate environments quite often just “shut down” at the end of the day (hibernate with Fast Startup), these updates don’t get installed. Of course there’s ways around this (have SCCM prompt the user to reboot, for example), but they’re not always an acceptable solution for every customer.

Obviously if the computer doesn’t support hibernation, there’s no issues.

If you’d like to disable Fast Startup, there doesn’t seem to be a specific GPO setting – you’ll have to use Group Policy Preferences instead. The relevant registry setting is here:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Power\HiberbootEnabled    (1 = enable, 0 = disable)
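
If you just want to flip it on a single machine (or script it rather than use GPP), the equivalent PowerShell is:

# Disable Fast Startup (the same value a GPP registry item would push out)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Power' -Name 'HiberbootEnabled' -Value 0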