Happy New Year – 2017 is here!

Wow, is it a new year already?
As we all enjoy the pleasure that is traffic, cubicles, tea-room chats, to-do lists and all things ‘back to work’, I’d like to take this opportunity to thank our customers for their support in 2016 and look back at the year as well as the future for Adexis.

While we continued to deliver outstanding technical services and no-fuss consulting to our customers in 2016, we also spent a lot of time looking inward, preparing ourselves for a big 2017 and beyond. To kickstart these changes, Hayes hired a General Manager. If I haven’t met you yet, hello, I’m Shawn Donaldson. I started my journey with Adexis by meeting with many of our customers to get their feedback and input on Adexis. I was pleasantly surprised and sincerely appreciated the overwhelmingly positive feedback we received. With this in mind we continued to provide the services you’ve come to expect from Adexis. To extend beyond that, we also made some improvements and created some new services and capabilities, including:

  • Cloud – We increased our focus on cloud, centred primarily on Office 365, Azure and hybrid-cloud solutions. We invested in training and certifications while increasing the volume of cloud work we performed, enabling us to provide outstanding cloud planning, migration, implementation and maintenance services. We created the Cloud Readiness Audit to help customers understand their environment, its applicability to cloud and the pros and cons of cloud solutions for them.
  • Proactive Services – We created a new portfolio of proactive services to complement the professional services portfolio we’re known for. This includes Managed Services, which centres around outsourcing portions of your environment so you can focus on your preferred core functions, and Scheduled Services, which allows you to maintain your own environment with a Microsoft specialist providing guidance and quality assurance. Together these provide a completely flexible, technically-focussed approach to managing your environment.
  • Partnerships – We recognised a need to help our customers with a greater range of technologies. Given the consistent feedback that our customers come to Adexis for our specialist Microsoft skills, we decided not to generalise and diversify, which would dilute our quality of service. Instead, we partnered with outstanding organisations that share the same values of no-fuss consulting led by technical excellence. We first partnered with Hastwell IT, who provide networking procurement, consulting and technical services. We have also partnered with AISH Solutions for hardware and software procurement, and Academy IT for Microsoft-focussed training. Together with these partners we can provide a complete set of technology services with consultants who all specialise in their respective fields and contribute to a cohesive delivery model.
  • Increased Certifications – Understanding the importance of our primary vendor, Microsoft, our team undertook a range of learning tasks and exams to increase our certifications, including multiple Gold partner competencies and Software Assurance Planning Services partner status. We also increased our level of collaboration with Microsoft.
  • Our own back yard – We understand that providing quality service comes from a baseline of great people with passion and the right tools and processes to support them. We invested in our internal tools and processes as well as establishing greater collaboration so that we can provide a greater level of service and better quality documentation to you.
  • Digital Presence – We established an improved digital presence so we can hear from you and tell you about some of the exciting things that are happening. This included the creation of this blog, our LinkedIn page, our Facebook page and an update to our website with information on some of these exciting new services we’re offering.

So what’s in store for 2017?

Much of the above work in 2016 was preparation work for a great 2017. We’ll use these initiatives as a foundation to continue growth in all of these key areas, and we look forward to the increased flexibility, quality and range of services this will enable us to provide our customers in 2017. At the same time we’ll continue to focus on what’s most important: providing a technically competent, no-fuss, independent Microsoft infrastructure specialist consulting service that’s genuinely customer focussed.

Our vision is for Adexis to be recognised as a premier provider of IT consulting services in the Australian market and to be seen as a partner of choice by clients, which we hope to achieve by daily practice of our core values:

  • Technical Excellence
  • Honesty
  • Independence
  • Reliable Support
  • No Sales-Speak

Happy new year!
Thank you again for your support and we look forward to working with you in 2017 and beyond!

Windows or Office deployment: it’s Microsoft’s shout

Adexis is known in Adelaide as the number one expert in SCCM and Windows deployment. Did you know you can engage an Adexis expert and have Microsoft pick up the tab?

Many customers aren’t aware that they’re entitled to a little golden nugget called Desktop Deployment Planning Services. DDPS is a benefit provided by Microsoft to customers who purchase Software Assurance through their Volume Licensing, whereby you can engage qualified partners to provide services for you and Microsoft will put in the dollars. In addition to our SCCM and deployment expertise, Adexis is also a qualified Desktop Deployment Planning Services partner, meaning you can engage us under this scheme.
Through this scheme we can help you to plan and/or implement on-premise, cloud-based or hybrid solutions for the deployment of Windows and Office to your user base, utilising products including SCCM, Windows, Office and Office 365.

Imagine, for example, that you would like to try out Windows 10 for your users. In order to do that you would need to assess its suitability and compatibility, plan an upgrade of SCCM to 1606 or later, plan the Windows deployment and implement both solutions. These are all services that Adexis can help you with, and they can be provided for under Desktop Deployment Planning Services. Perhaps you’ve been thinking about an upgrade like this but budget constraints have proven to be a challenge – this is where DDPS comes in.

Microsoft does have some strict guidelines on how these services can be utilised, however. For example, they can only be utilised for the improvement of your environment using approved products. So don’t be thinking “I can get that tricky issue fixed in my SCCM environment” – Microsoft won’t cover that one. Engagements are set up in blocks of days and can attract funding of between $3,000 and $15,000.

So, you’ve decided you’d like to utilise your entitlement, how do you go about it?
The first step is to call us and let us know what you have in mind. We can provide some guidance on what is eligible and what would need to be covered by you. You should also check your eligibility with Microsoft by visiting the Volume Licensing Service Centre. You can also download the DDPS Fact Sheet for more information. Once we’ve worked together to come up with a scope of work and an understanding of its eligibility for DDPS, you can assign a voucher to Adexis to start the work. Here’s how:

1. Sign into VLSC.
2. Select Software Assurance from the top menu.
3. Click Planning Services. This will take you to the Manage Software Assurance Benefits page.
4. Click the LicenseID for which you want to manage Planning Services. This will take you to the Benefit Summary page.
5. Select Planning Services.
6. Select the voucher type and service level (length of the engagement in days).
7. Assign the Planning Services voucher to a project manager within your organization by entering their name and email address, and any special instructions.
8. Click Confirm Voucher Assignment.
9. Once the voucher is created, click Assign Voucher. This takes you to a benefit details page confirming voucher information, including voucher status and expiration date.
10. You can then assign this voucher to Adexis by searching for our name or using our Microsoft ID: 1388832
That’s it!
We manage the rest of the paperwork on your behalf and you’re good to go with your engagement.

To find out more or to get started on your engagement under Microsoft Software Assurance Planning Services with Adexis give us a call on (08) 7228 6188 or email us at Contact@Adexis.com.au

ADFS, WAP and updating their public certificates

Renewing public certificates within an environment is always a bit of a pain – especially when you use the same certificate on a range of different systems and have to update each manually! When you’ve got a number of web-based systems that you publish externally, using a reverse proxy such as a Microsoft Web Application Proxy (WAP) can make the task a little less tedious.

With WAP, you can use a single wildcard certificate to publish any number of web-based services to the public internet. When you need to update the public certificate, you only need to update it in the one place – you don’t need to update each individual web service. In addition, WAP can also act as an Active Directory Federation Services Proxy (ADFS Proxy) – this allows you to present your ADFS infrastructure to the public internet without directly exposing your ADFS server(s).

In general, ADFS and WAP should go hand-in-hand. Internal clients hit the ADFS server directly (via the ADFS namespace), while external clients communicate via the WAP. By doing this, you can also set up different rules in ADFS to define what should happen for external authentication requests, compared to internal authentication requests (e.g. two-factor auth for external, Windows auth for internal).

Now, both ADFS and WAP need to have a publicly signed certificate. What happens when those certificates expire? Obviously you need to renew them and update the configuration – which is what prompted me to write this article. Usually this is a pretty simple process – you import the new certificate into the local computer certificate store on each of your ADFS/WAP servers, then update the configuration.

Initially I noticed I was getting the following in the event logs of the WAP server:

I had a look at the certificate on the ADFS server and sure enough, the certificate thumbprint matched the expired certificate on the ADFS server. Since I was using that certificate on the WAP server as well, I needed to update it in both systems. I started by importing the new public wildcard certificate into both the ADFS and WAP servers.

The next step is to update the configuration. For ADFS, you can pull up the ADFS console and go to the Service\Certificate node. From there, you select the ‘Service Communications’ certificate, hit the ‘Set Service Communications Certificate’ link, then follow the wizard. Then in the ADFS event log I started getting:

Whoops, I forgot to give the service account access to the private key! In the Certificate Management console, locate the public cert, right-click, select ‘All Tasks’ – ‘Manage Private Keys’ and make sure the service account has full access. I restarted the ADFS service (adfssrv) and the ADFS server appeared to start up successfully. Or so I thought.

Assuming ADFS was all good, I then proceeded to update the main proxy certificate in WAP. To do this you really only have the option of using a PowerShell command:
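Something along these lines (a sketch only – the thumbprint is a placeholder for your new certificate):

  # Update the certificate WAP presents for the ADFS proxy
  Set-WebApplicationProxySslCertificate -Thumbprint "<newthumbprint>"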

…and of course I was still getting trust errors. In the end I removed and re-added the WAP role to the server (it was a development environment – and since the rules and configuration are stored with ADFS, it wasn’t a huge issue). When trying to re-create the trust to the ADFS server via the wizard, I was getting a trust error – along with the following in the event log:

Odd. I could resolve and ping the ADFS server (both directly and via the ADFS namespace) – and the credentials used were an administrator on the remote server. The new certificate was showing correctly in the ADFS console, and the event logs on the ADFS server indicated it was all fine. So I started going through all the config via PowerShell instead. After a bit of investigation, I ran the Get-AdfsSslCertificate command. Despite the ADFS console showing the correct certificate, PowerShell was still showing the old one!

I ran Get-ChildItem -Path cert:\LocalMachine\My to get the thumbprint of the new certificate, then Set-AdfsSslCertificate -Thumbprint <newthumbprint> to set it. I restarted the service with Restart-Service adfssrv and double-checked the certificate. Ok, NOW we were looking good.

As it turns out, the GUI wizard will update the configuration in the ADFS database, but not the binding on HTTP.sys.
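A handy way to verify what HTTP.sys is actually serving (rather than what the console claims) is to list the bindings directly:

  # Show the certificates bound in HTTP.sys - check the thumbprint against port 443
  netsh http show sslcert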

I re-ran the WAP wizard and everything started working correctly.

One other thing to take note of – the above commands are all about updating certificates specifically for ADFS and the ADFS Proxy (WAP) – if you have additional published rules in WAP, you’ll need to update the certificate thumbprint against those as well!
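A sketch of how that might look, assuming all of your published applications use the same wildcard certificate:

  # Re-point every published WAP application at the new certificate
  Get-WebApplicationProxyApplication | Set-WebApplicationProxyApplication -ExternalCertificateThumbprint "<newthumbprint>"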

 

Windows 10 Fast Startup Mode – Maybe not so good for enterprise!

Windows 10 includes a feature called “Fast Startup”, which is enabled by default. The whole idea behind this feature is to make it so computers don’t take as long to boot up after being shut down (rather than going into hibernation or sleep). It achieves this by essentially using a cut-down implementation of Windows Hibernation. Instead of saving all user and application state to a file like traditional hibernation, it only saves the kernel and system session to the hibernation file (no user session data) – that way when it “turns on”, it loads the previous system session into RAM and off you go. It’s worth noting that this process doesn’t apply to reboots – only shutdowns. Reboots follow the traditional process of completely unloading the kernel and starting from scratch on boot-up.

Obviously, it’s a great idea for consumers – quicker boot-up and login times = happy consumers.

When you start using it in a corporate environment though, you can start running into some issues – primarily:

  • It can cause the network adaptor to not be ready prior to the user logging in. If you’re using folder redirection (without offline files – for computers that are always network-connected), then this isn’t such a good thing. It’s also not such a great thing for application of user-based group policies that only apply during login.
  • Some Windows Updates require the computer to be shut down/rebooted for them to install correctly. In the case of Fast Startup, the system isn’t really shutting down – it’s hibernating. Since users in corporate environments quite often just “shut down” at the end of the day (which hibernates with Fast Startup), these updates don’t get installed. Of course there are ways around this (have SCCM prompt the user to reboot, for example), but they’re not always an acceptable solution for every customer.

Obviously, if the computer doesn’t support hibernation, there are no issues.

If you’d like to disable Fast Startup, there doesn’t seem to be a specific GPO setting – you’ll have to use Group Policy Preferences instead. The relevant registry setting is here:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Power\HiberbootEnabled    (1 = enable, 0 = disable)
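If you’d prefer to push this out via script (say, in a task sequence) rather than Group Policy Preferences, a one-liner like the following should achieve the same result:

  # Disable Fast Startup (hiberboot): 1 = enable, 0 = disable
  Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Power" -Name HiberbootEnabled -Value 0 -Type DWord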

Windows 10 Photos App – Invalid Value for Registry / Repairing Windows 10 Universal Apps

One of our clients had a user with a weird issue today – whenever they tried to open a photo, they’d get the following error:

[Screenshot: “Invalid value for registry” error when opening a photo]

Looking at the PC, all image formats were set to use the built-in Windows 10 Photos application. Opening the application on its own produced the exact same error – so obviously the application itself was broken somehow.

After a little research, I discovered other users with the same issue – and of course, many of the suggested solutions were ridiculous (sfc /scannow – seriously?!).

As it turns out, there’s actually quite a simple fix – and it’s built into Windows.

  1. Navigate to Start – Settings – System – Apps & Features
  2. Scroll down to ‘Photos’ and click on it
  3. Click ‘Advanced Options’
  4. Click ‘Reset’

Give it a minute or so, then try it again – it should now work!

As an aside, you can do this with any of the Windows 10 Universal Applications!
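If the ‘Reset’ option isn’t available (or you’d rather script it), re-registering the app package from an elevated PowerShell prompt is a similar fix – a sketch, assuming the standard Photos package name:

  # Re-register the Photos app from its manifest
  Get-AppxPackage Microsoft.Windows.Photos | ForEach-Object {
      Add-AppxPackage -DisableDevelopmentMode -Register "$($_.InstallLocation)\AppXManifest.xml"
  }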

Microsoft Exchange Federation Certificates – Keep an eye on the expiry!

I recently had a client experience an issue with their hybrid Exchange setup (Office 365/on premise) – users were suddenly unable to retrieve free/busy and calendar information between the two environments. As it turns out, the certificate used to secure communications to the Microsoft Federation Gateway (MFG) had expired.

Federation certificates within Exchange are generally created as part of the federation creation wizard (or the Office 365 Hybrid Configuration Wizard) – so in most cases, people don’t realise they’ve been created. If you’re not actively monitoring certificate expiry dates on your servers (which you should be!), you may get into the situation where this certificate expires – which results in the federation no longer working.

Why is it important to renew it before it expires? Because if you don’t, you need to remove and re-create the federation – a significantly larger task than the federation certificate renewal process. The trust needs to be re-created because the federation certificate is used to authenticate any changes to the federation – so once it expires you can’t make any changes and have to start from scratch. Let’s take a look at the steps involved in both:

Renewing before expiry:

  1. Create a new self-signed federation certificate
  2. Set the new certificate as the ‘Next’ certificate in the federation trust
  3. Wait for AD replication
  4. Test the certificate and trust (Test-FederationTrustCertificate, Test-FederationTrust)
  5. Roll-over the ‘Current’ certificate to the ‘Next’ certificate
  6. Refresh the federation metadata
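In PowerShell terms, the renewal path looks roughly like this (a sketch – ‘Microsoft Federation Gateway’ is the default trust name, and the certificate details are examples):

  # 1. Create a new self-signed federation certificate
  New-ExchangeCertificate -FriendlyName "Exchange Federation" -SubjectName "cn=Exchange Federation" -KeySize 2048 -PrivateKeyExportable $true -Services Federation
  # 2. Stage it as the 'Next' certificate on the trust
  Set-FederationTrust "Microsoft Federation Gateway" -Thumbprint <newthumbprint>
  # 4. Test the certificate and trust (after AD replication)
  Test-FederationTrustCertificate
  Test-FederationTrust
  # 5. Roll the 'Next' certificate over to 'Current'
  Set-FederationTrust "Microsoft Federation Gateway" -PublishFederationCertificate
  # 6. Refresh the federation metadata
  Set-FederationTrust "Microsoft Federation Gateway" -RefreshMetadata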

Renewing after expiry:

  1. Document the existing trust settings (federated domains, federation settings)
  2. Force remove each federated domain from the federation
  3. Remove the federation trust
  4. Wait for AD replication
  5. Create a new self-signed federation certificate
  6. Create a new federation trust
  7. Update the trust organisation information
  8. Configure the required settings in the trust (as per the documentation you created in step 1)
  9. Wait for AD replication
  10. Test the certificate and trust (Test-FederationTrustCertificate, Test-FederationTrust) – it can take 12-48 hours before the trust reports as being no longer expired!
  11. Add each of the federated domains back into the trust (this will involve generating domain ‘Proof’ entries and adding them to your external DNS, then waiting for DNS propagation)
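For comparison, the after-expiry path looks something like this heavily summarised sketch (domain names are examples only):

  # Remove each federated domain, then the trust itself
  Remove-FederatedDomain -DomainName contoso.com -Force
  Remove-FederationTrust "Microsoft Federation Gateway"
  # Re-create the certificate and trust, then restore the configuration
  New-ExchangeCertificate -FriendlyName "Exchange Federation" -SubjectName "cn=Exchange Federation" -KeySize 2048 -PrivateKeyExportable $true -Services Federation
  New-FederationTrust -Name "Microsoft Federation Gateway" -Thumbprint <newthumbprint>
  Set-FederationOrganizationIdentifier -AccountNamespace contoso.com -DelegationFederationTrust "Microsoft Federation Gateway" -Enabled $true
  # Re-add each federated domain (publish the 'Proof' TXT record in external DNS first)
  Get-FederatedDomainProof -DomainName fabrikam.com
  Add-FederatedDomain -DomainName fabrikam.com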

So in short, don’t let your federation certificates expire!

Decommissioning Skype for Business 2015 on premise after migrating to O365

Depending on how you utilise Skype for Business, you may have no requirement to maintain a hybrid environment once all users are within Skype for Business online.

Documentation around decommissioning the on premise environment was surprisingly sparse.

One of the better documents around the web was here – but it still stopped a little short, in my opinion.

So below is my attempt at rounding this process out.

 

All steps below assume you have already migrated all users to Skype for Business Online and that you are aware of the requirements to stay in hybrid depending on your Enterprise Voice (EV) setup. If you are unsure, do not start this process.

 

Step 1 – Update DNS entries

This document really nails the DNS changes required, so good work Mark Vale. I am going to paraphrase the article a little, just so it’s all in one place.

Depending on your environment, you will have a fair idea of how long you need to wait externally and internally for convergence. This will lead to downtime, so it is wise to perform this outside of business hours.

It’s also wise to take a backup of your existing values, just in case.

External DNS changes

Action    DNS Record Name           Type      Value
Modify    sip                       CNAME     sipdir.online.lync.com
Modify    lyncdiscover              CNAME     webdir.online.lync.com
Modify    _sipfederationtls._tcp    SRV       0 0 5061 sipfed.online.lync.com
Modify    _sip._tls                 SRV       0 0 443 sipdir.online.lync.com
Delete    dialin                    A
Delete    meet                      A/CNAME
Delete    lyncweb                   A/CNAME
Delete    _xmpp-server              SRV

 

Internal DNS changes

Action    DNS Record Name           Type      Value
Add       _sipfederationtls._tcp    SRV       0 0 5061 sipfed.online.lync.com
Modify    sip                       CNAME     sipdir.online.lync.com
Modify    lyncdiscover              CNAME     webdir.online.lync.com
Delete    lyncdiscoverinternal      A
Delete    dialin                    A
Delete    meet                      A
Delete    lyncweb                   A
Delete    _sipinternaltls._tcp      SRV

After completing these updates, check via the O365 portal that O365 is reporting the Skype DNS entries as all good. I find this is generally pretty quick, so I assume that the source DNS server is used for DNS record checks and it doesn’t have to wait for convergence.

 

Step 2 – Check functionality

After an appropriate convergence time, check that all functionality is working before moving on to further steps.

Again, this depends on the size of your environment and your internal and external DNS configuration.

Mark’s post has a couple of scripts you can run if you wish to speed up the process internally.

 

Step 3 – Disable Shared SIP Address Space

Ensure you have the Skype for Business Online PowerShell module, then run the following:

  • Import-Module LyncOnlineConnector
  • $credential = Get-Credential “<yourSkypeForBusinessAdminAccount>”
  • $session = New-CsOnlineSession -Credential $credential
  • Import-PSSession $session
  • Set-CsTenantFederationConfiguration -SharedSipAddressSpace $false

 

Step 4 – Uninstall on premise components

On one of your home servers

  • Open the Skype for Business server control panel or PowerShell (whichever way you prefer)
    • Remove all objects possible (see Phase 3 of this document). These will vary greatly, so I have not listed all of the things to remove, but let me know if you are having trouble with one.
  • Open the Skype for Business Topology Builder
    • Download your existing topology
    • Remove configuration and components to allow you to strip the environment bare
      • Remove global routes to your edge servers, which will allow the edge servers to be removed
      • Remove application servers
      • Remove any configuration pointing to your mediation servers, then remove them
      • Remove persistent chat pools
      • Remove everything you can, which will be everything except the last server where your CMS (Central Management Store) is stored (if running Standard Edition)
    • Publish the topology
      • Run the Skype for Business Server Deployment Wizard on the edge and mediation servers (if mediation is co-located, there will not be separate mediation servers), and allow it to remove all roles
      • These servers can now be switched off
  • Open the Skype for Business Topology Builder
    • Download your existing topology
    • Now, when you right-click your final server and go to “Topology”, you will have an option for “Remove Deployment”
    • Select this and publish your topology again
  • Open the Skype for Business Management Shell
    • Get-CsConferenceDirectory | Remove-CsConferenceDirectory -Force
    • Publish-CsTopology -FinalizeUninstall
    • Run C:\Program Files\Skype for Business Server 2015\Deployment\bootstrapper.exe /scorch
    • Remove-CsConfigurationStoreLocation
    • Disable-CsAdDomain (This will remove the RTC groups from your AD permissions structure)
    • Disable-CsAdForest (This will remove the CS* groups from your AD)
    • I found that once this had completed, a couple of RTC groups still existed under the “users” container. This is likely due to the fact that the domain has hosted versions of Lync/OCS etc. since LCS 2005. I deleted these manually.
  • Once this is completed, shut down your last Skype for Business server on premise.

 

 

References

Mark Vale, August 17 2015, Decommissioning Skype for Business Hybrid and Going Cloud Only

Microsoft, September 11 2013, Decommissioning a Deployment

Microsoft, March 26 2012, Remove-CsConfigurationStoreLocation

Microsoft, April 12 2011, Publish Final Topology and Remove Last Front End

Scripting Office 365 licensing with disabled services

In the past I’ve had a few clients request scripts to automatically set/assign licenses to users in Office 365 – generally pretty simple stuff. Recently I had a client ask to disable a particular service within a license – again, not all that difficult – unless you want to actually check whether a license/service is already configured correctly (and not make any changes if it is). It took a little while to work out, so I figured I’d share the love!

Just to set a license for a user is a pretty simple process – all you need is the ‘SkuId’ value of the relevant license. To get a list of the ones available in your tenant, run Get-MsolAccountSku. You’ll get a list of the available license SkuIds and how many are active/consumed. In this article we’ll use an example SkuId of Contoso:STANDARDWOFFPACK_IW_STUDENT. Once you have the SkuId, all you need to run to assign the license is:
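Something like the following (the UPN and location are placeholders):

  # A usage location must be set before any license can be assigned
  Set-MsolUser -UserPrincipalName user@contoso.com -UsageLocation "AU"
  Set-MsolUserLicense -UserPrincipalName user@contoso.com -AddLicenses "Contoso:STANDARDWOFFPACK_IW_STUDENT"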

You’ll notice that the code above sets the location first – this is required, as you can’t apply a license without a location being set! What if you didn’t want to have all the applications available for the user? For example, the above license includes Yammer Education. In this case, we need to create a ‘License Options’ object first.
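Roughly as follows (the ‘YAMMER_EDU’ plan name is explained below):

  # Build a license options object with Yammer Education disabled, then apply it
  $LO = New-MsolLicenseOptions -AccountSkuId "Contoso:STANDARDWOFFPACK_IW_STUDENT" -DisabledPlans "YAMMER_EDU"
  Set-MsolUserLicense -UserPrincipalName user@contoso.com -LicenseOptions $LO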

So where did we get the “YAMMER_EDU” value from? You can list the available services for a license by running:
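For example (using the SKU from above):

  # List the service plans contained within the SKU
  (Get-MsolAccountSku | Where-Object {$_.AccountSkuId -eq "Contoso:STANDARDWOFFPACK_IW_STUDENT"}).ServiceStatus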

What if we wanted to disable multiple services in the License Option? The “-DisabledPlans” option accepts a comma-separated list. For example:
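A sketch – the second plan name here is purely an example:

  $LO = New-MsolLicenseOptions -AccountSkuId "Contoso:STANDARDWOFFPACK_IW_STUDENT" -DisabledPlans "YAMMER_EDU","SWAY"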

Ok, so now we know how to get the available licenses and related services – as well as how to assign the license to the user. What if we wanted to check whether a license is assigned to a user first? Personally, I’m not a huge fan of just re-stamping settings each time you run a script – so I thought I’d look into it. The easiest method I’ve found is to try to bind to the license, then check whether it’s $null or not:
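Roughly like this (UPN is a placeholder):

  $user = Get-MsolUser -UserPrincipalName user@contoso.com
  # This returns $null if the user doesn't hold the license
  $license = $user.Licenses | Where-Object {$_.AccountSkuId -eq "Contoso:STANDARDWOFFPACK_IW_STUDENT"}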

From there we can do whatever we want – if the license is found and that’s all you care about, you can skip – otherwise you can use the other commands to set the license.
So what if we also want to make sure YAMMER_EDU is disabled as well? That’s a little trickier. First we need to bind to the license like we did above, then we need to check the status of the relevant ‘ServicePlan’.
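Building on the $license binding above, a sketch:

  # Find the Yammer Education entry in the license's ServiceStatus array
  $service = $license.ServiceStatus | Where-Object {$_.ServicePlan.ServiceName -eq "YAMMER_EDU"}
  if ($service.ProvisioningStatus -ne "Disabled") {
      # Service is still enabled - re-stamp the license options here
  }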

At this point it’s probably a good idea to talk about the structure of these objects – you may not need to know it, but for anyone trying to modify these commands it might be helpful:

  • A ‘User’ object contains an attribute ‘Licenses’. This attribute is an array – as a user can have multiple licenses assigned.
  • A ‘License’ object contains two attributes relevant to this script: ‘AccountSkuId’ and ‘ServiceStatus’
    • AccountSkuId is the attribute that matches up with the AccountSkuId we’re using above
    • ServiceStatus is another array – a collection of objects representing the individual services available in that license, and their status.

The two attributes attached to a ‘ServiceStatus’ object that we care about are:

  • ServicePlan.ServiceName – this is the name to match the service above (eg: YAMMER_EDU)
  • ProvisioningStatus – this can be a bunch of values, but mostly ‘Success’, ‘Disabled’ or ‘PendingInput’. I’d assume there’s also ‘Provisioning’, but I’ve never seen it.

With this in mind, we can put together a script like the following – it reads the UPN and AccountSkuID from a CSV file, though you could use whatever source you like and update the script accordingly.
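A minimal version along the lines described above might look like this (the path, the disabled plan and the CSV column names are assumptions):

  # Requires the MSOnline module and an Office 365 admin account
  Import-Module MSOnline
  Connect-MsolService

  # CSV columns assumed: UserPrincipalName, AccountSkuId
  $rows = Import-Csv "C:\Temp\Licenses.csv"

  foreach ($row in $rows) {
      $user = Get-MsolUser -UserPrincipalName $row.UserPrincipalName
      $license = $user.Licenses | Where-Object {$_.AccountSkuId -eq $row.AccountSkuId}
      $options = New-MsolLicenseOptions -AccountSkuId $row.AccountSkuId -DisabledPlans "YAMMER_EDU"

      if ($license -eq $null) {
          # No license yet - set the location, then assign with the disabled plan
          Set-MsolUser -UserPrincipalName $row.UserPrincipalName -UsageLocation "AU"
          Set-MsolUserLicense -UserPrincipalName $row.UserPrincipalName -AddLicenses $row.AccountSkuId -LicenseOptions $options
      }
      else {
          # Licensed - only re-stamp if YAMMER_EDU isn't already disabled
          $service = $license.ServiceStatus | Where-Object {$_.ServicePlan.ServiceName -eq "YAMMER_EDU"}
          if ($service -and $service.ProvisioningStatus -ne "Disabled") {
              Set-MsolUserLicense -UserPrincipalName $row.UserPrincipalName -LicenseOptions $options
          }
      }
  }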

Note: In order to run this script, you’ll need the MSOnline (Azure AD) PowerShell module installed, an Office 365 account with rights to manage user licensing, and your CSV source file in place.

 

Automating Mailbox Regional Settings in Exchange Online

When you migrate (or create) a mailbox in Exchange Online, the first time a user goes to open their mailbox they are prompted to select their Timezone and Language. I recently had a client ask for a more automated method of pre-populating these values, so thought I’d have a look into it.

Of course, there’s no global way to define these settings for users before they get a mailbox, so the settings have to be set once the mailbox has been migrated – this really only leaves the option of a custom PowerShell script – either something you run after each migration (or creation), or on a periodic schedule.

First, to the settings themselves. As it turns out, you can use the same commands that you’d use in on premise Exchange: Get-MailboxRegionalConfiguration and Set-MailboxRegionalConfiguration – which also means this script could be adapted to be used on premise as well. The two settings we’re concerned with here are “Language” and “TimeZone”. Since the client we’re dealing with here is solely based in Australia, we’re going to be setting all users to a language of “en-AU”. For the TimeZone, Microsoft provide a list of valid values here: https://support.microsoft.com/en-us/kb/2779520

Except that they’re missing two Australian time zones. The actual valid values for Australia are:

  • AUS Central Standard Time – Darwin
  • Cen. Australia Standard Time – Adelaide
  • AUS Eastern Standard Time – Canberra, Melbourne, Sydney
  • E. Australia Standard Time – Brisbane
  • Tasmania Standard Time – Hobart

So with that in mind, we can use the following commands:
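Presumably along these lines (the UPN and time zone are examples):

  # Check the current regional settings, then set language and time zone
  Get-MailboxRegionalConfiguration -Identity user@contoso.com
  Set-MailboxRegionalConfiguration -Identity user@contoso.com -Language "en-AU" -TimeZone "Cen. Australia Standard Time"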

Since we’re talking about a national business with users in different time zones, the time zone value is going to need to change for each user. In order to automate this, we’ll need some source information available that indicates in which state the user is located – ideally, you’re going to be using the ‘Office’ field in the user’s AD account – though obviously you could use any available attribute. The reason I recommend ‘Office’ (or ‘physicalDeliveryOfficeName’) is because it’s synchronised to Office 365 with the user account (and becomes ‘Office’).

Note: You don’t actually need the value in Office 365 – if you’re running the script on premise, you can query your AD directly and ignore the attributes in 365. When I wrote the script I opted to solely use data that was in Office 365 – primarily because I was developing the script remotely and didn’t have direct access to their AD – so if you want to use your local AD instead of values in 365, you’ll need to modify the script!

For this client, the ‘Office’ value for each user is prefixed with the state (i.e. SA, NSW, QLD, WA) – so it was relatively simple to use a ‘Switch’ statement in PowerShell (similar to a ‘Case’ statement in VBScript), as sketched below.
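As an illustration of the idea (the prefix parsing and mappings are assumptions based on the state list above):

  # Derive the state prefix from the user's 'Office' value, then map it to a time zone
  $state = ($user.Office -split " ")[0]
  $timeZone = switch ($state) {
      "NT"  { "AUS Central Standard Time" }
      "SA"  { "Cen. Australia Standard Time" }
      "QLD" { "E. Australia Standard Time" }
      "TAS" { "Tasmania Standard Time" }
      default { "AUS Eastern Standard Time" }   # NSW, VIC, ACT
  }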

In order to use the script, you need the following: a remote PowerShell connection to Exchange Online, and (if you’re reading the ‘Office’ value from Office 365 as described above) the MSOnline module and an appropriate admin account.

You’ll also need to update the 5 variables at the top of the script (paths, etc), as well as the Time Zones (and criteria) in the Switch statement.

 

Using Azure RM Site to Site VPN with a Dynamic IP

In the interests of saving a bit of money, I decided to switch my ADSL service from an expensive business connection to a cheap residential connection. In Australia this also means switching from a static IP address to a dynamic IP address. With most web-based services now able to be proxied via Microsoft’s Web Application Proxy (and other services using unique ports), it seemed like everything would be fine with a combination of a Dynamic DNS service and port forwarding. I only run a development environment at home, so if I could save some money without any real impact, all the better!

After I made the switch, I realised that I’d forgotten about my site-to-site VPN between my development environment and Azure Resource Manager (AzureRM). For those familiar with AzureRM and Site to Site VPN, you’ll know that your on premise IP address is configured in a “Local Network Gateway” object. I thought perhaps that you could enter a DNS entry in the IP address field – no such luck.

So I had a look around online to see if anyone else had an easy solution I could poach. While I could find a solution for Azure Classic, the objects are completely different in AzureRM (and the PowerShell commands are different) – so while it gave me a direction, I couldn’t use the solution as-is. So I had a look at the available AzureRM PowerShell cmdlets – primarily Get-AzureRmLocalNetworkGateway and Set-AzureRmLocalNetworkGateway. The problem I came across was that the ‘Set’ command really only accepts two parameters – a LocalNetworkGateway object, and AddressPrefix (for your local address spaces). No option to change the gateway IP. The documentation didn’t give any additional information either.

Based on previous experience with PowerShell, I had assumed that the LocalNetworkGateway input object would need to refer to an existing object. As a last resort, I decided to try modifying it before setting anyway – and it worked! So essentially we can do something like:
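Roughly like this (names and prefixes are placeholders):

  # Resolve the current dynamic IP, then rewrite the gateway object's IP in place
  $newIp = [System.Net.Dns]::GetHostAddresses("myhome.ddns.net")[0].IPAddressToString
  $gateway = Get-AzureRmLocalNetworkGateway -Name "OnPremGateway" -ResourceGroupName "MyResourceGroup"
  $gateway.GatewayIpAddress = $newIp
  Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $gateway -AddressPrefix @("192.168.1.0/24")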

Obviously this is a fair way from an automated solution that can be run on a schedule! In order to put it into a workable solution, the following overall steps need to be taken:

  1. Configure a dynamic DNS service (such as www.noip.com) – this’ll need to be automatically updated via your router or client software
  2. On the server that will be running the scheduled task, install the Azure Powershell Cmdlets (as per https://azure.microsoft.com/en-us/documentation/articles/powershell-install-configure/)
  3. Create an administrative account in Azure AD that has administrative access on the subscription (it must be listed in the Azure Classic portal under Settings > Administrators). It’s important to note that when using the Login-AzureRmAccount command, the credentials used must be an ‘organisational account’ – you can’t use a Microsoft Live account (even if it’s a subscription administrator).
  4. Use some method of saving credentials for use in Powershell. I prefer to use a key-based encryption so it’s transportable between computers – a guide on doing this can be found here: http://www.adminarsenal.com/admin-arsenal-blog/secure-password-with-powershell-encrypting-credentials-part-2/
  5. Update the following values in the following script:
    1. DynDNS: the external DNS entry that resolves to your dynamic IP
    2. SubscriptionName: the name of your Azure subscription. This can be retrieved using Get-AzureRMSubscription
    3. User: the organisational administrative account
    4. PasswordFile: the file containing the encrypted password
    5. KeyFile: the file containing the encryption key (obviously you want to keep this safe – as it can be used to reverse engineer the password!)
    6. Address Prefixes on line 42.
  6. When running the script via Task Scheduler, ensure you also specify the ‘Start In’ directory – otherwise you need to hard-code paths in the script.
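Putting it all together, the scheduled script might look something like this sketch (all variable values are placeholders, and the address prefix corresponds to item 6 above):

  # --- Values to update ---
  $DynDNS = "myhome.ddns.net"
  $SubscriptionName = "My Subscription"
  $User = "automation@contoso.onmicrosoft.com"
  $PasswordFile = ".\password.txt"
  $KeyFile = ".\aes.key"
  $GatewayName = "OnPremGateway"
  $ResourceGroup = "MyResourceGroup"
  $AddressPrefix = @("192.168.1.0/24")

  # Rebuild the credential from the key-encrypted password file
  $key = Get-Content $KeyFile
  $password = Get-Content $PasswordFile | ConvertTo-SecureString -Key $key
  $credential = New-Object System.Management.Automation.PSCredential ($User, $password)

  # Log in with the organisational account and select the subscription
  Login-AzureRmAccount -Credential $credential
  Select-AzureRmSubscription -SubscriptionName $SubscriptionName

  # Compare the resolved dynamic IP against the gateway's configured IP
  $currentIp = [System.Net.Dns]::GetHostAddresses($DynDNS)[0].IPAddressToString
  $gateway = Get-AzureRmLocalNetworkGateway -Name $GatewayName -ResourceGroupName $ResourceGroup

  if ($gateway.GatewayIpAddress -ne $currentIp) {
      # The IP has changed - update the Local Network Gateway
      $gateway.GatewayIpAddress = $currentIp
      Set-AzureRmLocalNetworkGateway -LocalNetworkGateway $gateway -AddressPrefix $AddressPrefix
  }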