Slow Scans and Deployments Involving Windows 10 build 1903

8/15/2019


After upgrading your PDQ console machine or target machines to Windows 10 build 1903, you may notice that deployments and scans are taking considerably longer than before. Deployments and scans may sit at “Connecting” or wait longer than expected at any of the subsequent steps or scan profiles.


In a surprising twist of events, we’ve cried “Bug!” and have submitted a ticket with Microsoft to determine if the behavior our developers have witnessed is intentional. It’s not often that we get to kick something like this up to the big guys/gals/distinguished individuals at Microsoft.

Our developers have been working with Microsoft on the issue and were able to help reproduce it and identify the bug. The excellent Microsoft techs assigned to the ticket say they’re working on a fix that they hope to release soon.


As we all know, a deployment or scan begins with PDQ Deploy or PDQ Inventory copying over the on-demand, temporary runner service executable and starting a Windows service on your intended target machines. Because no developer truly enjoys reinventing the wheel, our devs make use of some built-in Windows goodies to spin up the service. In this case, they call existing Windows APIs to create and run a service.
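The lifecycle looks roughly like this (a minimal PowerShell sketch, not PDQ’s actual code; the service name and paths below are made up for illustration):

```powershell
# Illustrative only: mimic the copy/create/start/remove lifecycle of the
# temporary runner service. The name and paths here are hypothetical.
$svc = 'TempRunnerService'
$bin = 'C:\Windows\Temp\runner.exe'   # hypothetical destination for the copied executable

Copy-Item -Path '.\runner.exe' -Destination $bin        # copy the runner to the target
New-Service -Name $svc -BinaryPathName $bin | Out-Null  # SCM call: create the service
Start-Service -Name $svc                                # SCM call: start the service
# ... deployment steps or scan profile scanners run here ...
Stop-Service -Name $svc                                 # SCM call: stop the service
sc.exe delete $svc                                      # SCM call: remove the service
```

Each of those cmdlets goes through the service control manager, which is exactly where the 1903 slowdown bites.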


The behavior we’ve noticed has to do with calling the service control manager. With Windows 10 build 1903, contacting the service control manager results in a delay of about 20 seconds. In previous builds, the delay was a few milliseconds. You read that right -- that measurement changed from milliseconds to seconds. In the context of a deployment or a scan, the delay has a pretty huge impact. Deployments that previously connected and ran quickly are now taking multiple minutes longer than they should. Since PDQ Deploy and PDQ Inventory call upon the service control manager numerous times over the course of a deployment or a scan (depending on how many steps are in a package or scanners in a scan profile), those delays stack on top of one another. To add insult to injury, this is an area where weird timeouts and failures can be expected, so there are a few retries built into the process. The bottom line here is that the 20-second delay adds up quickly and makes your scans and deployments look the opposite of Pretty Damn Quick.
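To put rough numbers on it (the call and retry counts below are illustrative assumptions, not measurements from a specific deployment):

```powershell
# Back-of-the-envelope arithmetic: each service control manager call
# now costs roughly 20 seconds instead of a few milliseconds.
$delayPerCall = 20   # seconds per SCM call on build 1903
$scmCalls     = 10   # e.g., a multi-step package plus service setup/teardown
$retries      = 1    # a single retry on a flaky call adds a full extra delay

$added = $delayPerCall * ($scmCalls + $retries)
"Roughly $added seconds of pure waiting"   # ~220 seconds on these assumptions
```

Even modest packages pick up minutes of dead time, which matches the behavior people are reporting.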


Since none of us want to wait upwards of 20 seconds or 20 minutes for anything these days, we’ve got a workaround. In both PDQ Deploy and PDQ Inventory, change Preferences > Performance > Service Manager TCP Connection to Disabled.

As it says on the Preferences page, this is a system-wide setting. Keep that in mind when making the change, in case it impacts other applications installed on your PDQ console machine. In the immortal words of the associated documentation:

“These are Windows settings and, as such, are system wide. They cannot be set per process or application. Be aware that changing these values will affect other applications that use remote service manager connections and the Windows service control. If another application (such as PDQ Inventory) or process (such as GPO) changes this setting, then PDQ Deploy uses the changed value.”

Alternatively, you can modify the underlying registry values using this bit of PowerShell:

# Run from an elevated PowerShell prompt on the PDQ console machine.
Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Control -Name SCMApiConnectionParam -Value 0x80000001
# Restart the PDQ services so they pick up the new value.
Restart-Service PDQInventory, PDQDeploy
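If you want to confirm the value took, a quick read-back (assuming the same registry path as above):

```powershell
# Read the value back; 0x80000001 is 2147483649 in decimal.
(Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control').SCMApiConnectionParam
```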

To answer the question we’re all wondering -- when will we fix the problem? 

Right now, the ball is in Microsoft’s court. Since we’ve been able to confirm with the fine folks at Microsoft support that this appears to be their bug, the best course of action is to wait patiently until a fix appears. As such, the timeframe for this is completely out of our PDQ hands.

Initially, our developers identified places in the code where we might be able to sidestep or work around the issue. Unfortunately, as is the nature of some code changes, working around it on our end would completely break the Agent and other areas of the code. Because of this, our devs have decided to stay the course and wait for the fix from Microsoft. This may change in the future, but for now, the resolution is to implement the workaround and wait patiently with us for Microsoft.