[pkg-go] Bug#952230: consul: FTBFS: dh_auto_test: error: cd _build && go test -vet=off -v -p 4 -short -failfast -timeout 8m [...] returned exit code 1

Lucas Nussbaum lucas at debian.org
Sun Feb 23 13:06:31 GMT 2020


Source: consul
Version: 1.7.0+dfsg1-1
Severity: serious
Justification: FTBFS on amd64
Tags: bullseye sid ftbfs
Usertags: ftbfs-20200222 ftbfs-bullseye

Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.

The relevant part of the build log is (hopefully):
> make[1]: Entering directory '/<<PKGBUILDDIR>>'
> PATH="/<<PKGBUILDDIR>>/_build/bin:${PATH}" \
>         DH_GOLANG_EXCLUDES="test/integration api agent/cache agent/checks agent/connect agent/consul command/tls" \
>         dh_auto_test -v --max-parallel=4 -- -short -failfast -timeout 8m
> 	cd _build && go test -vet=off -v -p 4 -short -failfast -timeout 8m github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/agent github.com/hashicorp/consul/agent/ae github.com/hashicorp/consul/agent/agentpb github.com/hashicorp/consul/agent/config github.com/hashicorp/consul/agent/debug github.com/hashicorp/consul/agent/exec github.com/hashicorp/consul/agent/local github.com/hashicorp/consul/agent/metadata github.com/hashicorp/consul/agent/mock github.com/hashicorp/consul/agent/pool github.com/hashicorp/consul/agent/proxycfg github.com/hashicorp/consul/agent/router github.com/hashicorp/consul/agent/structs github.com/hashicorp/consul/agent/systemd github.com/hashicorp/consul/agent/token github.com/hashicorp/consul/agent/xds github.com/hashicorp/consul/command github.com/hashicorp/consul/command/acl github.com/hashicorp/consul/command/acl/agenttokens github.com/hashicorp/consul/command/acl/authmethod github.com/hashicorp/consul/command/acl/authmethod/create github.com/hashicorp/consul/command/acl/authmethod/delete github.com/hashicorp/consul/command/acl/authmethod/list github.com/hashicorp/consul/command/acl/authmethod/read github.com/hashicorp/consul/command/acl/authmethod/update github.com/hashicorp/consul/command/acl/bindingrule github.com/hashicorp/consul/command/acl/bindingrule/create github.com/hashicorp/consul/command/acl/bindingrule/delete github.com/hashicorp/consul/command/acl/bindingrule/list github.com/hashicorp/consul/command/acl/bindingrule/read github.com/hashicorp/consul/command/acl/bindingrule/update github.com/hashicorp/consul/command/acl/bootstrap github.com/hashicorp/consul/command/acl/policy github.com/hashicorp/consul/command/acl/policy/create github.com/hashicorp/consul/command/acl/policy/delete github.com/hashicorp/consul/command/acl/policy/list github.com/hashicorp/consul/command/acl/policy/read github.com/hashicorp/consul/command/acl/policy/update github.com/hashicorp/consul/command/acl/role 
> github.com/hashicorp/consul/command/acl/role/create github.com/hashicorp/consul/command/acl/role/delete github.com/hashicorp/consul/command/acl/role/list github.com/hashicorp/consul/command/acl/role/read github.com/hashicorp/consul/command/acl/role/update github.com/hashicorp/consul/command/acl/rules github.com/hashicorp/consul/command/acl/token github.com/hashicorp/consul/command/acl/token/clone github.com/hashicorp/consul/command/acl/token/create github.com/hashicorp/consul/command/acl/token/delete github.com/hashicorp/consul/command/acl/token/list github.com/hashicorp/consul/command/acl/token/read github.com/hashicorp/consul/command/acl/token/update github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/command/catalog github.com/hashicorp/consul/command/catalog/list/dc github.com/hashicorp/consul/command/catalog/list/nodes github.com/hashicorp/consul/command/catalog/list/services github.com/hashicorp/consul/command/config github.com/hashicorp/consul/command/config/delete github.com/hashicorp/consul/command/config/list github.com/hashicorp/consul/command/config/read github.com/hashicorp/consul/command/config/write github.com/hashicorp/consul/command/connect github.com/hashicorp/consul/command/connect/ca github.com/hashicorp/consul/command/connect/ca/get github.com/hashicorp/consul/command/connect/ca/set github.com/hashicorp/consul/command/connect/envoy github.com/hashicorp/consul/command/connect/envoy/pipe-bootstrap github.com/hashicorp/consul/command/connect/proxy github.com/hashicorp/consul/command/debug github.com/hashicorp/consul/command/event github.com/hashicorp/consul/command/exec github.com/hashicorp/consul/command/flags github.com/hashicorp/consul/command/forceleave github.com/hashicorp/consul/command/helpers github.com/hashicorp/consul/command/info github.com/hashicorp/consul/command/intention github.com/hashicorp/consul/command/intention/check github.com/hashicorp/consul/command/intention/create 
> github.com/hashicorp/consul/command/intention/delete github.com/hashicorp/consul/command/intention/finder github.com/hashicorp/consul/command/intention/get github.com/hashicorp/consul/command/intention/match github.com/hashicorp/consul/command/join github.com/hashicorp/consul/command/keygen github.com/hashicorp/consul/command/keyring github.com/hashicorp/consul/command/kv github.com/hashicorp/consul/command/kv/del github.com/hashicorp/consul/command/kv/exp github.com/hashicorp/consul/command/kv/get github.com/hashicorp/consul/command/kv/imp github.com/hashicorp/consul/command/kv/impexp github.com/hashicorp/consul/command/kv/put github.com/hashicorp/consul/command/leave github.com/hashicorp/consul/command/lock github.com/hashicorp/consul/command/login github.com/hashicorp/consul/command/logout github.com/hashicorp/consul/command/maint github.com/hashicorp/consul/command/members github.com/hashicorp/consul/command/monitor github.com/hashicorp/consul/command/operator github.com/hashicorp/consul/command/operator/autopilot github.com/hashicorp/consul/command/operator/autopilot/get github.com/hashicorp/consul/command/operator/autopilot/set github.com/hashicorp/consul/command/operator/raft github.com/hashicorp/consul/command/operator/raft/listpeers github.com/hashicorp/consul/command/operator/raft/removepeer github.com/hashicorp/consul/command/reload github.com/hashicorp/consul/command/rtt github.com/hashicorp/consul/command/services github.com/hashicorp/consul/command/services/deregister github.com/hashicorp/consul/command/services/register github.com/hashicorp/consul/command/snapshot github.com/hashicorp/consul/command/snapshot/inspect github.com/hashicorp/consul/command/snapshot/restore github.com/hashicorp/consul/command/snapshot/save github.com/hashicorp/consul/command/validate github.com/hashicorp/consul/command/version github.com/hashicorp/consul/command/watch github.com/hashicorp/consul/connect github.com/hashicorp/consul/connect/certgen 
> github.com/hashicorp/consul/connect/proxy github.com/hashicorp/consul/ipaddr github.com/hashicorp/consul/lib github.com/hashicorp/consul/lib/file github.com/hashicorp/consul/lib/semaphore github.com/hashicorp/consul/logging github.com/hashicorp/consul/logging/monitor github.com/hashicorp/consul/sdk/freeport github.com/hashicorp/consul/sdk/testutil github.com/hashicorp/consul/sdk/testutil/retry github.com/hashicorp/consul/sentinel github.com/hashicorp/consul/service_os github.com/hashicorp/consul/snapshot github.com/hashicorp/consul/testrpc github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/types github.com/hashicorp/consul/version
> testing: warning: no tests to run
> PASS
> ok  	github.com/hashicorp/consul	0.113s [no tests to run]
> === RUN   TestACL
> === RUN   TestACL/DenyAll
> === RUN   TestACL/DenyAll/DenyACLRead
> === RUN   TestACL/DenyAll/DenyACLWrite
> === RUN   TestACL/DenyAll/DenyAgentRead
> === RUN   TestACL/DenyAll/DenyAgentWrite
> === RUN   TestACL/DenyAll/DenyEventRead
> === RUN   TestACL/DenyAll/DenyEventWrite
> === RUN   TestACL/DenyAll/DenyIntentionDefaultAllow
> === RUN   TestACL/DenyAll/DenyIntentionRead
> === RUN   TestACL/DenyAll/DenyIntentionWrite
> === RUN   TestACL/DenyAll/DenyKeyRead
> === RUN   TestACL/DenyAll/DenyKeyringRead
> === RUN   TestACL/DenyAll/DenyKeyringWrite
> === RUN   TestACL/DenyAll/DenyKeyWrite
> === RUN   TestACL/DenyAll/DenyNodeRead
> === RUN   TestACL/DenyAll/DenyNodeWrite
> === RUN   TestACL/DenyAll/DenyOperatorRead
> === RUN   TestACL/DenyAll/DenyOperatorWrite
> === RUN   TestACL/DenyAll/DenyPreparedQueryRead
> === RUN   TestACL/DenyAll/DenyPreparedQueryWrite
> === RUN   TestACL/DenyAll/DenyServiceRead
> === RUN   TestACL/DenyAll/DenyServiceWrite
> === RUN   TestACL/DenyAll/DenySessionRead
> === RUN   TestACL/DenyAll/DenySessionWrite
> === RUN   TestACL/DenyAll/DenySnapshot
> === RUN   TestACL/AllowAll
> === RUN   TestACL/AllowAll/DenyACLRead
> === RUN   TestACL/AllowAll/DenyACLWrite
> === RUN   TestACL/AllowAll/AllowAgentRead
> === RUN   TestACL/AllowAll/AllowAgentWrite
> === RUN   TestACL/AllowAll/AllowEventRead
> === RUN   TestACL/AllowAll/AllowEventWrite
> === RUN   TestACL/AllowAll/AllowIntentionDefaultAllow
> === RUN   TestACL/AllowAll/AllowIntentionRead
> === RUN   TestACL/AllowAll/AllowIntentionWrite
> === RUN   TestACL/AllowAll/AllowKeyRead
> === RUN   TestACL/AllowAll/AllowKeyringRead
> === RUN   TestACL/AllowAll/AllowKeyringWrite
> === RUN   TestACL/AllowAll/AllowKeyWrite
> === RUN   TestACL/AllowAll/AllowNodeRead
> === RUN   TestACL/AllowAll/AllowNodeWrite
> === RUN   TestACL/AllowAll/AllowOperatorRead
> === RUN   TestACL/AllowAll/AllowOperatorWrite
> === RUN   TestACL/AllowAll/AllowPreparedQueryRead
> === RUN   TestACL/AllowAll/AllowPreparedQueryWrite
> === RUN   TestACL/AllowAll/AllowServiceRead
> === RUN   TestACL/AllowAll/AllowServiceWrite
> === RUN   TestACL/AllowAll/AllowSessionRead
> === RUN   TestACL/AllowAll/AllowSessionWrite
> === RUN   TestACL/AllowAll/DenySnapshot
> === RUN   TestACL/ManageAll
> === RUN   TestACL/ManageAll/AllowACLRead
> === RUN   TestACL/ManageAll/AllowACLWrite
> === RUN   TestACL/ManageAll/AllowAgentRead
> === RUN   TestACL/ManageAll/AllowAgentWrite
> === RUN   TestACL/ManageAll/AllowEventRead
> === RUN   TestACL/ManageAll/AllowEventWrite
> === RUN   TestACL/ManageAll/AllowIntentionDefaultAllow
> === RUN   TestACL/ManageAll/AllowIntentionRead
> === RUN   TestACL/ManageAll/AllowIntentionWrite
> === RUN   TestACL/ManageAll/AllowKeyRead
> === RUN   TestACL/ManageAll/AllowKeyringRead
> === RUN   TestACL/ManageAll/AllowKeyringWrite
> === RUN   TestACL/ManageAll/AllowKeyWrite
> === RUN   TestACL/ManageAll/AllowNodeRead
> === RUN   TestACL/ManageAll/AllowNodeWrite
> === RUN   TestACL/ManageAll/AllowOperatorRead
> === RUN   TestACL/ManageAll/AllowOperatorWrite
> === RUN   TestACL/ManageAll/AllowPreparedQueryRead
> === RUN   TestACL/ManageAll/AllowPreparedQueryWrite
> === RUN   TestACL/ManageAll/AllowServiceRead
> === RUN   TestACL/ManageAll/AllowServiceWrite
> === RUN   TestACL/ManageAll/AllowSessionRead
> === RUN   TestACL/ManageAll/AllowSessionWrite
> === RUN   TestACL/ManageAll/AllowSnapshot
> === RUN   TestACL/AgentBasicDefaultDeny
> === RUN   TestACL/AgentBasicDefaultDeny/DefaultReadDenied.Prefix(ro)
> === RUN   TestACL/AgentBasicDefaultDeny/DefaultWriteDenied.Prefix(ro)
> === RUN   TestACL/AgentBasicDefaultDeny/ROReadAllowed.Prefix(root)
> === RUN   TestACL/AgentBasicDefaultDeny/ROWriteDenied.Prefix(root)
> === RUN   TestACL/AgentBasicDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro)
> === RUN   TestACL/AgentBasicDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro)
> === RUN   TestACL/AgentBasicDefaultDeny/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/AgentBasicDefaultDeny/RWWriteDenied.Prefix(root-rw)
> === RUN   TestACL/AgentBasicDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-sub)
> === RUN   TestACL/AgentBasicDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-sub)
> === RUN   TestACL/AgentBasicDefaultDeny/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/AgentBasicDefaultDeny/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/AgentBasicDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-sub)
> === RUN   TestACL/AgentBasicDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-sub)
> === RUN   TestACL/AgentBasicDefaultAllow
> === RUN   TestACL/AgentBasicDefaultAllow/DefaultReadDenied.Prefix(ro)
> === RUN   TestACL/AgentBasicDefaultAllow/DefaultWriteDenied.Prefix(ro)
> === RUN   TestACL/AgentBasicDefaultAllow/ROReadAllowed.Prefix(root)
> === RUN   TestACL/AgentBasicDefaultAllow/ROWriteDenied.Prefix(root)
> === RUN   TestACL/AgentBasicDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro)
> === RUN   TestACL/AgentBasicDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro)
> === RUN   TestACL/AgentBasicDefaultAllow/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/AgentBasicDefaultAllow/RWWriteDenied.Prefix(root-rw)
> === RUN   TestACL/AgentBasicDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-sub)
> === RUN   TestACL/AgentBasicDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-sub)
> === RUN   TestACL/AgentBasicDefaultAllow/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/AgentBasicDefaultAllow/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/AgentBasicDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-sub)
> === RUN   TestACL/AgentBasicDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-sub)
> === RUN   TestACL/PreparedQueryDefaultAllow
> === RUN   TestACL/PreparedQueryDefaultAllow/ReadAllowed.Prefix(foo)
> === RUN   TestACL/PreparedQueryDefaultAllow/WriteAllowed.Prefix(foo)
> === RUN   TestACL/PreparedQueryDefaultAllow/ReadDenied.Prefix(other)
> === RUN   TestACL/PreparedQueryDefaultAllow/WriteDenied.Prefix(other)
> === RUN   TestACL/AgentNestedDefaultDeny
> === RUN   TestACL/AgentNestedDefaultDeny/DefaultReadDenied.Prefix(nope)
> === RUN   TestACL/AgentNestedDefaultDeny/DefaultWriteDenied.Prefix(nope)
> === RUN   TestACL/AgentNestedDefaultDeny/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/AgentNestedDefaultDeny/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/AgentNestedDefaultDeny/ROReadAllowed.Prefix(root-ro)
> === RUN   TestACL/AgentNestedDefaultDeny/ROWriteDenied.Prefix(root-ro)
> === RUN   TestACL/AgentNestedDefaultDeny/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/AgentNestedDefaultDeny/RWWriteAllowed.Prefix(root-rw)
> === RUN   TestACL/AgentNestedDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildDenyReadDenied.Prefix(child-nope)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildDenyWriteDenied.Prefix(child-nope)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildROReadAllowed.Prefix(child-ro)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildROWriteDenied.Prefix(child-ro)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildRWReadAllowed.Prefix(child-rw)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildRWWriteAllowed.Prefix(child-rw)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildDenySuffixReadDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildROSuffixReadAllowed.Prefix(child-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildROSuffixWriteDenied.Prefix(child-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildOverrideReadAllowed.Prefix(override)
> === RUN   TestACL/AgentNestedDefaultDeny/ChildOverrideWriteAllowed.Prefix(override)
> === RUN   TestACL/AgentNestedDefaultAllow
> === RUN   TestACL/AgentNestedDefaultAllow/DefaultReadAllowed.Prefix(nope)
> === RUN   TestACL/AgentNestedDefaultAllow/DefaultWriteAllowed.Prefix(nope)
> === RUN   TestACL/AgentNestedDefaultAllow/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/AgentNestedDefaultAllow/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/AgentNestedDefaultAllow/ROReadAllowed.Prefix(root-ro)
> === RUN   TestACL/AgentNestedDefaultAllow/ROWriteDenied.Prefix(root-ro)
> === RUN   TestACL/AgentNestedDefaultAllow/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/AgentNestedDefaultAllow/RWWriteAllowed.Prefix(root-rw)
> === RUN   TestACL/AgentNestedDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildDenyReadDenied.Prefix(child-nope)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildDenyWriteDenied.Prefix(child-nope)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildROReadAllowed.Prefix(child-ro)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildROWriteDenied.Prefix(child-ro)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildRWReadAllowed.Prefix(child-rw)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildRWWriteAllowed.Prefix(child-rw)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildDenySuffixReadDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildROSuffixReadAllowed.Prefix(child-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildROSuffixWriteDenied.Prefix(child-ro-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildOverrideReadAllowed.Prefix(override)
> === RUN   TestACL/AgentNestedDefaultAllow/ChildOverrideWriteAllowed.Prefix(override)
> === RUN   TestACL/KeyringDefaultAllowPolicyDeny
> === RUN   TestACL/KeyringDefaultAllowPolicyDeny/ReadDenied
> === RUN   TestACL/KeyringDefaultAllowPolicyDeny/WriteDenied
> === RUN   TestACL/KeyringDefaultAllowPolicyRead
> === RUN   TestACL/KeyringDefaultAllowPolicyRead/ReadAllowed
> === RUN   TestACL/KeyringDefaultAllowPolicyRead/WriteDenied
> === RUN   TestACL/KeyringDefaultAllowPolicyWrite
> === RUN   TestACL/KeyringDefaultAllowPolicyWrite/ReadAllowed
> === RUN   TestACL/KeyringDefaultAllowPolicyWrite/WriteAllowed
> === RUN   TestACL/KeyringDefaultAllowPolicyNone
> === RUN   TestACL/KeyringDefaultAllowPolicyNone/ReadAllowed
> === RUN   TestACL/KeyringDefaultAllowPolicyNone/WriteAllowed
> === RUN   TestACL/KeyringDefaultDenyPolicyDeny
> === RUN   TestACL/KeyringDefaultDenyPolicyDeny/ReadDenied
> === RUN   TestACL/KeyringDefaultDenyPolicyDeny/WriteDenied
> === RUN   TestACL/KeyringDefaultDenyPolicyRead
> === RUN   TestACL/KeyringDefaultDenyPolicyRead/ReadAllowed
> === RUN   TestACL/KeyringDefaultDenyPolicyRead/WriteDenied
> === RUN   TestACL/KeyringDefaultDenyPolicyWrite
> === RUN   TestACL/KeyringDefaultDenyPolicyWrite/ReadAllowed
> === RUN   TestACL/KeyringDefaultDenyPolicyWrite/WriteAllowed
> === RUN   TestACL/KeyringDefaultDenyPolicyNone
> === RUN   TestACL/KeyringDefaultDenyPolicyNone/ReadDenied
> === RUN   TestACL/KeyringDefaultDenyPolicyNone/WriteDenied
> === RUN   TestACL/OperatorDefaultAllowPolicyDeny
> === RUN   TestACL/OperatorDefaultAllowPolicyDeny/ReadDenied
> === RUN   TestACL/OperatorDefaultAllowPolicyDeny/WriteDenied
> === RUN   TestACL/OperatorDefaultAllowPolicyRead
> === RUN   TestACL/OperatorDefaultAllowPolicyRead/ReadAllowed
> === RUN   TestACL/OperatorDefaultAllowPolicyRead/WriteDenied
> === RUN   TestACL/OperatorDefaultAllowPolicyWrite
> === RUN   TestACL/OperatorDefaultAllowPolicyWrite/ReadAllowed
> === RUN   TestACL/OperatorDefaultAllowPolicyWrite/WriteAllowed
> === RUN   TestACL/OperatorDefaultAllowPolicyNone
> === RUN   TestACL/OperatorDefaultAllowPolicyNone/ReadAllowed
> === RUN   TestACL/OperatorDefaultAllowPolicyNone/WriteAllowed
> === RUN   TestACL/OperatorDefaultDenyPolicyDeny
> === RUN   TestACL/OperatorDefaultDenyPolicyDeny/ReadDenied
> === RUN   TestACL/OperatorDefaultDenyPolicyDeny/WriteDenied
> === RUN   TestACL/OperatorDefaultDenyPolicyRead
> === RUN   TestACL/OperatorDefaultDenyPolicyRead/ReadAllowed
> === RUN   TestACL/OperatorDefaultDenyPolicyRead/WriteDenied
> === RUN   TestACL/OperatorDefaultDenyPolicyWrite
> === RUN   TestACL/OperatorDefaultDenyPolicyWrite/ReadAllowed
> === RUN   TestACL/OperatorDefaultDenyPolicyWrite/WriteAllowed
> === RUN   TestACL/OperatorDefaultDenyPolicyNone
> === RUN   TestACL/OperatorDefaultDenyPolicyNone/ReadDenied
> === RUN   TestACL/OperatorDefaultDenyPolicyNone/WriteDenied
> === RUN   TestACL/NodeDefaultDeny
> === RUN   TestACL/NodeDefaultDeny/DefaultReadDenied.Prefix(nope)
> === RUN   TestACL/NodeDefaultDeny/DefaultWriteDenied.Prefix(nope)
> === RUN   TestACL/NodeDefaultDeny/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/NodeDefaultDeny/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/NodeDefaultDeny/ROReadAllowed.Prefix(root-ro)
> === RUN   TestACL/NodeDefaultDeny/ROWriteDenied.Prefix(root-ro)
> === RUN   TestACL/NodeDefaultDeny/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/NodeDefaultDeny/RWWriteAllowed.Prefix(root-rw)
> === RUN   TestACL/NodeDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/NodeDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/NodeDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro-prefix)
> === RUN   TestACL/NodeDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro-prefix)
> === RUN   TestACL/NodeDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/NodeDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildDenyReadDenied.Prefix(child-nope)
> === RUN   TestACL/NodeDefaultDeny/ChildDenyWriteDenied.Prefix(child-nope)
> === RUN   TestACL/NodeDefaultDeny/ChildROReadAllowed.Prefix(child-ro)
> === RUN   TestACL/NodeDefaultDeny/ChildROWriteDenied.Prefix(child-ro)
> === RUN   TestACL/NodeDefaultDeny/ChildRWReadAllowed.Prefix(child-rw)
> === RUN   TestACL/NodeDefaultDeny/ChildRWWriteAllowed.Prefix(child-rw)
> === RUN   TestACL/NodeDefaultDeny/ChildDenySuffixReadDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildROSuffixReadAllowed.Prefix(child-ro-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildROSuffixWriteDenied.Prefix(child-ro-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/NodeDefaultDeny/ChildOverrideReadAllowed.Prefix(override)
> === RUN   TestACL/NodeDefaultDeny/ChildOverrideWriteAllowed.Prefix(override)
> === RUN   TestACL/NodeDefaultAllow
> === RUN   TestACL/NodeDefaultAllow/DefaultReadAllowed.Prefix(nope)
> === RUN   TestACL/NodeDefaultAllow/DefaultWriteAllowed.Prefix(nope)
> === RUN   TestACL/NodeDefaultAllow/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/NodeDefaultAllow/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/NodeDefaultAllow/ROReadAllowed.Prefix(root-ro)
> === RUN   TestACL/NodeDefaultAllow/ROWriteDenied.Prefix(root-ro)
> === RUN   TestACL/NodeDefaultAllow/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/NodeDefaultAllow/RWWriteAllowed.Prefix(root-rw)
> === RUN   TestACL/NodeDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/NodeDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/NodeDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro-prefix)
> === RUN   TestACL/NodeDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro-prefix)
> === RUN   TestACL/NodeDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/NodeDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildDenyReadDenied.Prefix(child-nope)
> === RUN   TestACL/NodeDefaultAllow/ChildDenyWriteDenied.Prefix(child-nope)
> === RUN   TestACL/NodeDefaultAllow/ChildROReadAllowed.Prefix(child-ro)
> === RUN   TestACL/NodeDefaultAllow/ChildROWriteDenied.Prefix(child-ro)
> === RUN   TestACL/NodeDefaultAllow/ChildRWReadAllowed.Prefix(child-rw)
> === RUN   TestACL/NodeDefaultAllow/ChildRWWriteAllowed.Prefix(child-rw)
> === RUN   TestACL/NodeDefaultAllow/ChildDenySuffixReadDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildROSuffixReadAllowed.Prefix(child-ro-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildROSuffixWriteDenied.Prefix(child-ro-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/NodeDefaultAllow/ChildOverrideReadAllowed.Prefix(override)
> === RUN   TestACL/NodeDefaultAllow/ChildOverrideWriteAllowed.Prefix(override)
> === RUN   TestACL/SessionDefaultDeny
> === RUN   TestACL/SessionDefaultDeny/DefaultReadDenied.Prefix(nope)
> === RUN   TestACL/SessionDefaultDeny/DefaultWriteDenied.Prefix(nope)
> === RUN   TestACL/SessionDefaultDeny/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/SessionDefaultDeny/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/SessionDefaultDeny/ROReadAllowed.Prefix(root-ro)
> === RUN   TestACL/SessionDefaultDeny/ROWriteDenied.Prefix(root-ro)
> === RUN   TestACL/SessionDefaultDeny/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/SessionDefaultDeny/RWWriteAllowed.Prefix(root-rw)
> === RUN   TestACL/SessionDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/SessionDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/SessionDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro-prefix)
> === RUN   TestACL/SessionDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro-prefix)
> === RUN   TestACL/SessionDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/SessionDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildDenyReadDenied.Prefix(child-nope)
> === RUN   TestACL/SessionDefaultDeny/ChildDenyWriteDenied.Prefix(child-nope)
> === RUN   TestACL/SessionDefaultDeny/ChildROReadAllowed.Prefix(child-ro)
> === RUN   TestACL/SessionDefaultDeny/ChildROWriteDenied.Prefix(child-ro)
> === RUN   TestACL/SessionDefaultDeny/ChildRWReadAllowed.Prefix(child-rw)
> === RUN   TestACL/SessionDefaultDeny/ChildRWWriteAllowed.Prefix(child-rw)
> === RUN   TestACL/SessionDefaultDeny/ChildDenySuffixReadDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildROSuffixReadAllowed.Prefix(child-ro-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildROSuffixWriteDenied.Prefix(child-ro-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/SessionDefaultDeny/ChildOverrideReadAllowed.Prefix(override)
> === RUN   TestACL/SessionDefaultDeny/ChildOverrideWriteAllowed.Prefix(override)
> === RUN   TestACL/SessionDefaultAllow
> === RUN   TestACL/SessionDefaultAllow/DefaultReadAllowed.Prefix(nope)
> === RUN   TestACL/SessionDefaultAllow/DefaultWriteAllowed.Prefix(nope)
> === RUN   TestACL/SessionDefaultAllow/DenyReadDenied.Prefix(root-nope)
> === RUN   TestACL/SessionDefaultAllow/DenyWriteDenied.Prefix(root-nope)
> === RUN   TestACL/SessionDefaultAllow/ROReadAllowed.Prefix(root-ro)
> === RUN   TestACL/SessionDefaultAllow/ROWriteDenied.Prefix(root-ro)
> === RUN   TestACL/SessionDefaultAllow/RWReadAllowed.Prefix(root-rw)
> === RUN   TestACL/SessionDefaultAllow/RWWriteAllowed.Prefix(root-rw)
> === RUN   TestACL/SessionDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/SessionDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-prefix)
> === RUN   TestACL/SessionDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro-prefix)
> === RUN   TestACL/SessionDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro-prefix)
> === RUN   TestACL/SessionDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/SessionDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildDenyReadDenied.Prefix(child-nope)
> === RUN   TestACL/SessionDefaultAllow/ChildDenyWriteDenied.Prefix(child-nope)
> === RUN   TestACL/SessionDefaultAllow/ChildROReadAllowed.Prefix(child-ro)
> === RUN   TestACL/SessionDefaultAllow/ChildROWriteDenied.Prefix(child-ro)
> === RUN   TestACL/SessionDefaultAllow/ChildRWReadAllowed.Prefix(child-rw)
> === RUN   TestACL/SessionDefaultAllow/ChildRWWriteAllowed.Prefix(child-rw)
> === RUN   TestACL/SessionDefaultAllow/ChildDenySuffixReadDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildROSuffixReadAllowed.Prefix(child-ro-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildROSuffixWriteDenied.Prefix(child-ro-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix)
> === RUN   TestACL/SessionDefaultAllow/ChildOverrideReadAllowed.Prefix(override)
> === RUN   TestACL/SessionDefaultAllow/ChildOverrideWriteAllowed.Prefix(override)
> === RUN   TestACL/Parent
> === RUN   TestACL/Parent/KeyReadDenied.Prefix(other)
> === RUN   TestACL/Parent/KeyWriteDenied.Prefix(other)
> === RUN   TestACL/Parent/KeyWritePrefixDenied.Prefix(other)
> === RUN   TestACL/Parent/KeyReadAllowed.Prefix(foo/test)
> === RUN   TestACL/Parent/KeyWriteAllowed.Prefix(foo/test)
> === RUN   TestACL/Parent/KeyWritePrefixAllowed.Prefix(foo/test)
> === RUN   TestACL/Parent/KeyReadAllowed.Prefix(foo/priv/test)
> === RUN   TestACL/Parent/KeyWriteDenied.Prefix(foo/priv/test)
> === RUN   TestACL/Parent/KeyWritePrefixDenied.Prefix(foo/priv/test)
> === RUN   TestACL/Parent/KeyReadDenied.Prefix(bar/any)
> === RUN   TestACL/Parent/KeyWriteDenied.Prefix(bar/any)
> === RUN   TestACL/Parent/KeyWritePrefixDenied.Prefix(bar/any)
> === RUN   TestACL/Parent/KeyReadAllowed.Prefix(zip/test)
> === RUN   TestACL/Parent/KeyWriteDenied.Prefix(zip/test)
> === RUN   TestACL/Parent/KeyWritePrefixDenied.Prefix(zip/test)
> === RUN   TestACL/Parent/ServiceReadDenied.Prefix(fail)
> === RUN   TestACL/Parent/ServiceWriteDenied.Prefix(fail)
> === RUN   TestACL/Parent/ServiceReadAllowed.Prefix(other)
> === RUN   TestACL/Parent/ServiceWriteAllowed.Prefix(other)
> === RUN   TestACL/Parent/ServiceReadAllowed.Prefix(foo)
> === RUN   TestACL/Parent/ServiceWriteDenied.Prefix(foo)
> === RUN   TestACL/Parent/ServiceReadDenied.Prefix(bar)
> === RUN   TestACL/Parent/ServiceWriteDenied.Prefix(bar)
> === RUN   TestACL/Parent/PreparedQueryReadAllowed.Prefix(foo)
> === RUN   TestACL/Parent/PreparedQueryWriteDenied.Prefix(foo)
> === RUN   TestACL/Parent/PreparedQueryReadAllowed.Prefix(foobar)
> === RUN   TestACL/Parent/PreparedQueryWriteDenied.Prefix(foobar)
> === RUN   TestACL/Parent/PreparedQueryReadDenied.Prefix(bar)
> === RUN   TestACL/Parent/PreparedQueryWriteDenied.Prefix(bar)
> === RUN   TestACL/Parent/PreparedQueryReadDenied.Prefix(barbaz)
> === RUN   TestACL/Parent/PreparedQueryWriteDenied.Prefix(barbaz)
> === RUN   TestACL/Parent/PreparedQueryReadDenied.Prefix(baz)
> === RUN   TestACL/Parent/PreparedQueryWriteDenied.Prefix(baz)
> === RUN   TestACL/Parent/PreparedQueryReadDenied.Prefix(nope)
> === RUN   TestACL/Parent/PreparedQueryWriteDenied.Prefix(nope)
> === RUN   TestACL/Parent/ACLReadDenied
> === RUN   TestACL/Parent/ACLWriteDenied
> === RUN   TestACL/Parent/SnapshotDenied
> === RUN   TestACL/Parent/IntentionDefaultAllowDenied
> === RUN   TestACL/ComplexDefaultAllow
> === RUN   TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(foo/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteAllowed.Prefix(foo/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixAllowed.Prefix(foo/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(foo/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyReadDenied.Prefix(foo/priv/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(foo/priv/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(foo/priv/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyListDenied.Prefix(foo/priv/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyReadDenied.Prefix(bar/any)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(bar/any)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(bar/any)
> === RUN   TestACL/ComplexDefaultAllow/KeyListDenied.Prefix(bar/any)
> === RUN   TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(zip/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(zip/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(zip/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyListDenied.Prefix(zip/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(foo/)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteAllowed.Prefix(foo/)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(foo/)
> === RUN   TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(foo/)
> === RUN   TestACL/ComplexDefaultAllow/KeyReadAllowed
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteAllowed
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixDenied
> === RUN   TestACL/ComplexDefaultAllow/KeyListAllowed
> === RUN   TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(zap/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(zap/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(zap/test)
> === RUN   TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(zap/test)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadDenied.Prefix(barfo)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(barfo)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(barfoo)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteAllowed.Prefix(barfoo)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(barfoo2)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteAllowed.Prefix(barfoo2)
> === RUN   TestACL/ComplexDefaultAllow/IntentionReadDenied.Prefix(intbaz)
> === RUN   TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(intbaz)
> === RUN   TestACL/ComplexDefaultAllow/IntentionDefaultAllowAllowed
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteAllowed.Prefix(other)
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadDenied.Prefix(barfo)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(barfo)
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(barfoo)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteAllowed.Prefix(barfoo)
> === RUN   TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(barfoo2)
> === RUN   TestACL/ComplexDefaultAllow/ServiceWriteAllowed.Prefix(barfoo2)
> === RUN   TestACL/ComplexDefaultAllow/EventReadAllowed.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/EventWriteAllowed.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/EventReadAllowed.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/EventWriteAllowed.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/EventReadDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/EventWriteDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/EventReadDenied.Prefix(barbaz)
> === RUN   TestACL/ComplexDefaultAllow/EventWriteDenied.Prefix(barbaz)
> === RUN   TestACL/ComplexDefaultAllow/EventReadAllowed.Prefix(baz)
> === RUN   TestACL/ComplexDefaultAllow/EventWriteDenied.Prefix(baz)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(foo)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(foobar)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(bar)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadDenied.Prefix(barbaz)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(barbaz)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(baz)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(baz)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(nope)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(nope)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(zoo)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(zoo)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(zookeeper)
> === RUN   TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(zookeeper)
> === RUN   TestACL/ExactMatchPrecedence
> === RUN   TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/AgentReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/AgentWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/AgentReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/AgentWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/KeyWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/KeyReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/KeyWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/NodeWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/NodeWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/ServiceReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/ServiceWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(fo)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(fo)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(for)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(for)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeReadAllowed.Prefix(foo)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWriteAllowed.Prefix(foo)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot2)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot2)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(food)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(food)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeReadDenied.Prefix(football)#01
> === RUN   TestACL/ExactMatchPrecedence/NodeWriteDenied.Prefix(football)#01
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/IntentionReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/IntentionWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/SessionWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/SessionReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/SessionWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/EventReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/EventWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/EventReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/EventWriteDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(fo)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(for)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWriteAllowed.Prefix(foo)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(foot)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(foot2)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(food)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryReadDenied.Prefix(football)
> === RUN   TestACL/ExactMatchPrecedence/PreparedQueryWriteDenied.Prefix(football)
> === RUN   TestACL/ACLRead
> === RUN   TestACL/ACLRead/ReadAllowed
> === RUN   TestACL/ACLRead/WriteDenied
> === RUN   TestACL/ACLRead#01
> === RUN   TestACL/ACLRead#01/ReadAllowed
> === RUN   TestACL/ACLRead#01/WriteAllowed
> === RUN   TestACL/KeyWritePrefixDefaultDeny
> === RUN   TestACL/KeyWritePrefixDefaultDeny/DeniedTopLevelPrefix.Prefix(foo)
> === RUN   TestACL/KeyWritePrefixDefaultDeny/AllowedTopLevelPrefix.Prefix(baz/)
> === RUN   TestACL/KeyWritePrefixDefaultDeny/AllowedPrefixWithNestedWrite.Prefix(foo/)
> === RUN   TestACL/KeyWritePrefixDefaultDeny/DenyPrefixWithNestedRead.Prefix(bar/)
> === RUN   TestACL/KeyWritePrefixDefaultDeny/DenyNoPrefixMatch.Prefix(te)
> === RUN   TestACL/KeyWritePrefixDefaultAllow
> === RUN   TestACL/KeyWritePrefixDefaultAllow/KeyWritePrefixDenied.Prefix(foo)
> === RUN   TestACL/KeyWritePrefixDefaultAllow/KeyWritePrefixAllowed.Prefix(bar)
> --- PASS: TestACL (0.05s)
>     --- PASS: TestACL/DenyAll (0.00s)
>         --- PASS: TestACL/DenyAll/DenyACLRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyACLWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyAgentRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyAgentWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyEventRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyEventWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyIntentionDefaultAllow (0.00s)
>         --- PASS: TestACL/DenyAll/DenyIntentionRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyIntentionWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyKeyRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyKeyringRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyKeyringWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyKeyWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyNodeRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyNodeWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyOperatorRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyOperatorWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyPreparedQueryRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyPreparedQueryWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenyServiceRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenyServiceWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenySessionRead (0.00s)
>         --- PASS: TestACL/DenyAll/DenySessionWrite (0.00s)
>         --- PASS: TestACL/DenyAll/DenySnapshot (0.00s)
>     --- PASS: TestACL/AllowAll (0.00s)
>         --- PASS: TestACL/AllowAll/DenyACLRead (0.00s)
>         --- PASS: TestACL/AllowAll/DenyACLWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowAgentRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowAgentWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowEventRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowEventWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowIntentionDefaultAllow (0.00s)
>         --- PASS: TestACL/AllowAll/AllowIntentionRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowIntentionWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowKeyRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowKeyringRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowKeyringWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowKeyWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowNodeRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowNodeWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowOperatorRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowOperatorWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowPreparedQueryRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowPreparedQueryWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowServiceRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowServiceWrite (0.00s)
>         --- PASS: TestACL/AllowAll/AllowSessionRead (0.00s)
>         --- PASS: TestACL/AllowAll/AllowSessionWrite (0.00s)
>         --- PASS: TestACL/AllowAll/DenySnapshot (0.00s)
>     --- PASS: TestACL/ManageAll (0.00s)
>         --- PASS: TestACL/ManageAll/AllowACLRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowACLWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowAgentRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowAgentWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowEventRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowEventWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowIntentionDefaultAllow (0.00s)
>         --- PASS: TestACL/ManageAll/AllowIntentionRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowIntentionWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowKeyRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowKeyringRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowKeyringWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowKeyWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowNodeRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowNodeWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowOperatorRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowOperatorWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowPreparedQueryRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowPreparedQueryWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowServiceRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowServiceWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowSessionRead (0.00s)
>         --- PASS: TestACL/ManageAll/AllowSessionWrite (0.00s)
>         --- PASS: TestACL/ManageAll/AllowSnapshot (0.00s)
>     --- PASS: TestACL/AgentBasicDefaultDeny (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/DefaultReadDenied.Prefix(ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/DefaultWriteDenied.Prefix(ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/ROReadAllowed.Prefix(root) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/ROWriteDenied.Prefix(root) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/RWWriteDenied.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-sub) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-sub) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-sub) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-sub) (0.00s)
>     --- PASS: TestACL/AgentBasicDefaultAllow (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/DefaultReadDenied.Prefix(ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/DefaultWriteDenied.Prefix(ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/ROReadAllowed.Prefix(root) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/ROWriteDenied.Prefix(root) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/RWWriteDenied.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-sub) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-sub) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-sub) (0.00s)
>         --- PASS: TestACL/AgentBasicDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-sub) (0.00s)
>     --- PASS: TestACL/PreparedQueryDefaultAllow (0.00s)
>         --- PASS: TestACL/PreparedQueryDefaultAllow/ReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/PreparedQueryDefaultAllow/WriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/PreparedQueryDefaultAllow/ReadDenied.Prefix(other) (0.00s)
>         --- PASS: TestACL/PreparedQueryDefaultAllow/WriteDenied.Prefix(other) (0.00s)
>     --- PASS: TestACL/AgentNestedDefaultDeny (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/DefaultReadDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/DefaultWriteDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ROReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ROWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/RWWriteAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildDenyReadDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildDenyWriteDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildROReadAllowed.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildROWriteDenied.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildRWReadAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildRWWriteAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildDenySuffixReadDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildROSuffixReadAllowed.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildROSuffixWriteDenied.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildOverrideReadAllowed.Prefix(override) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultDeny/ChildOverrideWriteAllowed.Prefix(override) (0.00s)
>     --- PASS: TestACL/AgentNestedDefaultAllow (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/DefaultReadAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/DefaultWriteAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ROReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ROWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/RWWriteAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildDenyReadDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildDenyWriteDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildROReadAllowed.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildROWriteDenied.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildRWReadAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildRWWriteAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildDenySuffixReadDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildROSuffixReadAllowed.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildROSuffixWriteDenied.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildOverrideReadAllowed.Prefix(override) (0.00s)
>         --- PASS: TestACL/AgentNestedDefaultAllow/ChildOverrideWriteAllowed.Prefix(override) (0.00s)
>     --- PASS: TestACL/KeyringDefaultAllowPolicyDeny (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyDeny/ReadDenied (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyDeny/WriteDenied (0.00s)
>     --- PASS: TestACL/KeyringDefaultAllowPolicyRead (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyRead/ReadAllowed (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyRead/WriteDenied (0.00s)
>     --- PASS: TestACL/KeyringDefaultAllowPolicyWrite (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyWrite/ReadAllowed (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyWrite/WriteAllowed (0.00s)
>     --- PASS: TestACL/KeyringDefaultAllowPolicyNone (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyNone/ReadAllowed (0.00s)
>         --- PASS: TestACL/KeyringDefaultAllowPolicyNone/WriteAllowed (0.00s)
>     --- PASS: TestACL/KeyringDefaultDenyPolicyDeny (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyDeny/ReadDenied (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyDeny/WriteDenied (0.00s)
>     --- PASS: TestACL/KeyringDefaultDenyPolicyRead (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyRead/ReadAllowed (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyRead/WriteDenied (0.00s)
>     --- PASS: TestACL/KeyringDefaultDenyPolicyWrite (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyWrite/ReadAllowed (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyWrite/WriteAllowed (0.00s)
>     --- PASS: TestACL/KeyringDefaultDenyPolicyNone (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyNone/ReadDenied (0.00s)
>         --- PASS: TestACL/KeyringDefaultDenyPolicyNone/WriteDenied (0.00s)
>     --- PASS: TestACL/OperatorDefaultAllowPolicyDeny (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyDeny/ReadDenied (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyDeny/WriteDenied (0.00s)
>     --- PASS: TestACL/OperatorDefaultAllowPolicyRead (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyRead/ReadAllowed (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyRead/WriteDenied (0.00s)
>     --- PASS: TestACL/OperatorDefaultAllowPolicyWrite (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyWrite/ReadAllowed (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyWrite/WriteAllowed (0.00s)
>     --- PASS: TestACL/OperatorDefaultAllowPolicyNone (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyNone/ReadAllowed (0.00s)
>         --- PASS: TestACL/OperatorDefaultAllowPolicyNone/WriteAllowed (0.00s)
>     --- PASS: TestACL/OperatorDefaultDenyPolicyDeny (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyDeny/ReadDenied (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyDeny/WriteDenied (0.00s)
>     --- PASS: TestACL/OperatorDefaultDenyPolicyRead (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyRead/ReadAllowed (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyRead/WriteDenied (0.00s)
>     --- PASS: TestACL/OperatorDefaultDenyPolicyWrite (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyWrite/ReadAllowed (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyWrite/WriteAllowed (0.00s)
>     --- PASS: TestACL/OperatorDefaultDenyPolicyNone (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyNone/ReadDenied (0.00s)
>         --- PASS: TestACL/OperatorDefaultDenyPolicyNone/WriteDenied (0.00s)
>     --- PASS: TestACL/NodeDefaultDeny (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/DefaultReadDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/DefaultWriteDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ROReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ROWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/RWWriteAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildDenyReadDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildDenyWriteDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildROReadAllowed.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildROWriteDenied.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildRWReadAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildRWWriteAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildDenySuffixReadDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildROSuffixReadAllowed.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildROSuffixWriteDenied.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildOverrideReadAllowed.Prefix(override) (0.00s)
>         --- PASS: TestACL/NodeDefaultDeny/ChildOverrideWriteAllowed.Prefix(override) (0.00s)
>     --- PASS: TestACL/NodeDefaultAllow (0.01s)
>         --- PASS: TestACL/NodeDefaultAllow/DefaultReadAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/DefaultWriteAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ROReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ROWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/RWWriteAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildDenyReadDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildDenyWriteDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildROReadAllowed.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildROWriteDenied.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildRWReadAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildRWWriteAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildDenySuffixReadDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildROSuffixReadAllowed.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildROSuffixWriteDenied.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildOverrideReadAllowed.Prefix(override) (0.00s)
>         --- PASS: TestACL/NodeDefaultAllow/ChildOverrideWriteAllowed.Prefix(override) (0.00s)
>     --- PASS: TestACL/SessionDefaultDeny (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/DefaultReadDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/DefaultWriteDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ROReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ROWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/RWWriteAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/DenySuffixReadDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/DenySuffixWriteDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ROSuffixReadAllowed.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ROSuffixWriteDenied.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/RWSuffixReadAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/RWSuffixWriteAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildDenyReadDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildDenyWriteDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildROReadAllowed.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildROWriteDenied.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildRWReadAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildRWWriteAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildDenySuffixReadDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildROSuffixReadAllowed.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildROSuffixWriteDenied.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildOverrideReadAllowed.Prefix(override) (0.00s)
>         --- PASS: TestACL/SessionDefaultDeny/ChildOverrideWriteAllowed.Prefix(override) (0.00s)
>     --- PASS: TestACL/SessionDefaultAllow (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/DefaultReadAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/DefaultWriteAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/DenyReadDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/DenyWriteDenied.Prefix(root-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ROReadAllowed.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ROWriteDenied.Prefix(root-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/RWReadAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/RWWriteAllowed.Prefix(root-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/DenySuffixReadDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/DenySuffixWriteDenied.Prefix(root-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ROSuffixReadAllowed.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ROSuffixWriteDenied.Prefix(root-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/RWSuffixReadAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/RWSuffixWriteAllowed.Prefix(root-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildDenyReadDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildDenyWriteDenied.Prefix(child-nope) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildROReadAllowed.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildROWriteDenied.Prefix(child-ro) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildRWReadAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildRWWriteAllowed.Prefix(child-rw) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildDenySuffixReadDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildDenySuffixWriteDenied.Prefix(child-nope-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildROSuffixReadAllowed.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildROSuffixWriteDenied.Prefix(child-ro-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildRWSuffixReadAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildRWSuffixWriteAllowed.Prefix(child-rw-prefix) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildOverrideReadAllowed.Prefix(override) (0.00s)
>         --- PASS: TestACL/SessionDefaultAllow/ChildOverrideWriteAllowed.Prefix(override) (0.00s)
>     --- PASS: TestACL/Parent (0.00s)
>         --- PASS: TestACL/Parent/KeyReadDenied.Prefix(other) (0.00s)
>         --- PASS: TestACL/Parent/KeyWriteDenied.Prefix(other) (0.00s)
>         --- PASS: TestACL/Parent/KeyWritePrefixDenied.Prefix(other) (0.00s)
>         --- PASS: TestACL/Parent/KeyReadAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyWriteAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyWritePrefixAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyReadAllowed.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyWriteDenied.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyWritePrefixDenied.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyReadDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/Parent/KeyWriteDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/Parent/KeyWritePrefixDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/Parent/KeyReadAllowed.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyWriteDenied.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/Parent/KeyWritePrefixDenied.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/Parent/ServiceReadDenied.Prefix(fail) (0.00s)
>         --- PASS: TestACL/Parent/ServiceWriteDenied.Prefix(fail) (0.00s)
>         --- PASS: TestACL/Parent/ServiceReadAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/Parent/ServiceWriteAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/Parent/ServiceReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/Parent/ServiceWriteDenied.Prefix(foo) (0.00s)
>         --- PASS: TestACL/Parent/ServiceReadDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/Parent/ServiceWriteDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryWriteDenied.Prefix(foo) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryReadAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryWriteDenied.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryReadDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryWriteDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryReadDenied.Prefix(barbaz) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryWriteDenied.Prefix(barbaz) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryReadDenied.Prefix(baz) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryWriteDenied.Prefix(baz) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryReadDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/Parent/PreparedQueryWriteDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/Parent/ACLReadDenied (0.00s)
>         --- PASS: TestACL/Parent/ACLWriteDenied (0.00s)
>         --- PASS: TestACL/Parent/SnapshotDenied (0.00s)
>         --- PASS: TestACL/Parent/IntentionDefaultAllowDenied (0.00s)
>     --- PASS: TestACL/ComplexDefaultAllow (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(foo/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadDenied.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListDenied.Prefix(foo/priv/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListDenied.Prefix(bar/any) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListDenied.Prefix(zip/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(foo/) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteAllowed.Prefix(foo/) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(foo/) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(foo/) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadAllowed (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteAllowed (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixDenied (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListAllowed (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyReadAllowed.Prefix(zap/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWriteDenied.Prefix(zap/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyWritePrefixDenied.Prefix(zap/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/KeyListAllowed.Prefix(zap/test) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadDenied.Prefix(barfo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(barfo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(barfoo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteAllowed.Prefix(barfoo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadAllowed.Prefix(barfoo2) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteAllowed.Prefix(barfoo2) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionReadDenied.Prefix(intbaz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionWriteDenied.Prefix(intbaz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/IntentionDefaultAllowAllowed (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteAllowed.Prefix(other) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadDenied.Prefix(barfo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteDenied.Prefix(barfo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(barfoo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteAllowed.Prefix(barfoo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceReadAllowed.Prefix(barfoo2) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/ServiceWriteAllowed.Prefix(barfoo2) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventReadAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventWriteAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventReadDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventWriteDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventReadDenied.Prefix(barbaz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventWriteDenied.Prefix(barbaz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventReadAllowed.Prefix(baz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/EventWriteDenied.Prefix(baz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(foobar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(bar) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadDenied.Prefix(barbaz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(barbaz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(baz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(baz) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(nope) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteDenied.Prefix(nope) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(zoo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(zoo) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryReadAllowed.Prefix(zookeeper) (0.00s)
>         --- PASS: TestACL/ComplexDefaultAllow/PreparedQueryWriteAllowed.Prefix(zookeeper) (0.00s)
>     --- PASS: TestACL/ExactMatchPrecedence (0.02s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/AgentWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/KeyWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/ServiceWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(fo)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(fo)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(for)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(for)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadAllowed.Prefix(foo)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWriteAllowed.Prefix(foo)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(foot2)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(foot2)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadPrefixAllowed.Prefix(food)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWritePrefixDenied.Prefix(food)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeReadDenied.Prefix(football)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/NodeWriteDenied.Prefix(football)#01 (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/IntentionWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/SessionWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/EventWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestACL/ExactMatchPrecedence/PreparedQueryWriteDenied.Prefix(football) (0.00s)
>     --- PASS: TestACL/ACLRead (0.00s)
>         --- PASS: TestACL/ACLRead/ReadAllowed (0.00s)
>         --- PASS: TestACL/ACLRead/WriteDenied (0.00s)
>     --- PASS: TestACL/ACLRead#01 (0.00s)
>         --- PASS: TestACL/ACLRead#01/ReadAllowed (0.00s)
>         --- PASS: TestACL/ACLRead#01/WriteAllowed (0.00s)
>     --- PASS: TestACL/KeyWritePrefixDefaultDeny (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultDeny/DeniedTopLevelPrefix.Prefix(foo) (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultDeny/AllowedTopLevelPrefix.Prefix(baz/) (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultDeny/AllowedPrefixWithNestedWrite.Prefix(foo/) (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultDeny/DenyPrefixWithNestedRead.Prefix(bar/) (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultDeny/DenyNoPrefixMatch.Prefix(te) (0.00s)
>     --- PASS: TestACL/KeyWritePrefixDefaultAllow (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultAllow/KeyWritePrefixDenied.Prefix(foo) (0.00s)
>         --- PASS: TestACL/KeyWritePrefixDefaultAllow/KeyWritePrefixAllowed.Prefix(bar) (0.00s)
> === RUN   TestRootAuthorizer
> --- PASS: TestRootAuthorizer (0.00s)
> === RUN   TestACLEnforce
> === RUN   TestACLEnforce/RuleNoneRequireRead
> === RUN   TestACLEnforce/RuleNoneRequireWrite
> === RUN   TestACLEnforce/RuleNoneRequireList
> === RUN   TestACLEnforce/RuleReadRequireRead
> === RUN   TestACLEnforce/RuleReadRequireWrite
> === RUN   TestACLEnforce/RuleReadRequireList
> === RUN   TestACLEnforce/RuleListRequireRead
> === RUN   TestACLEnforce/RuleListRequireWrite
> === RUN   TestACLEnforce/RuleListRequireList
> === RUN   TestACLEnforce/RuleWritetRequireRead
> === RUN   TestACLEnforce/RuleWritetRequireWrite
> === RUN   TestACLEnforce/RuleWritetRequireList
> === RUN   TestACLEnforce/RuleDenyRequireRead
> === RUN   TestACLEnforce/RuleDenyRequireWrite
> === RUN   TestACLEnforce/RuleDenyRequireList
> --- PASS: TestACLEnforce (0.00s)
>     --- PASS: TestACLEnforce/RuleNoneRequireRead (0.00s)
>     --- PASS: TestACLEnforce/RuleNoneRequireWrite (0.00s)
>     --- PASS: TestACLEnforce/RuleNoneRequireList (0.00s)
>     --- PASS: TestACLEnforce/RuleReadRequireRead (0.00s)
>     --- PASS: TestACLEnforce/RuleReadRequireWrite (0.00s)
>     --- PASS: TestACLEnforce/RuleReadRequireList (0.00s)
>     --- PASS: TestACLEnforce/RuleListRequireRead (0.00s)
>     --- PASS: TestACLEnforce/RuleListRequireWrite (0.00s)
>     --- PASS: TestACLEnforce/RuleListRequireList (0.00s)
>     --- PASS: TestACLEnforce/RuleWritetRequireRead (0.00s)
>     --- PASS: TestACLEnforce/RuleWritetRequireWrite (0.00s)
>     --- PASS: TestACLEnforce/RuleWritetRequireList (0.00s)
>     --- PASS: TestACLEnforce/RuleDenyRequireRead (0.00s)
>     --- PASS: TestACLEnforce/RuleDenyRequireWrite (0.00s)
>     --- PASS: TestACLEnforce/RuleDenyRequireList (0.00s)
> === RUN   TestACL_Enforce
> === PAUSE TestACL_Enforce
> === RUN   TestChainedAuthorizer
> === PAUSE TestChainedAuthorizer
> === RUN   TestPolicyAuthorizer
> === PAUSE TestPolicyAuthorizer
> === RUN   TestAnyAllowed
> === PAUSE TestAnyAllowed
> === RUN   TestAllAllowed
> === PAUSE TestAllAllowed
> === RUN   TestPolicySourceParse
> === RUN   TestPolicySourceParse/Legacy_Basic
> === RUN   TestPolicySourceParse/Legacy_(JSON)
> === RUN   TestPolicySourceParse/Service_No_Intentions_(Legacy)
> === RUN   TestPolicySourceParse/Service_Intentions_(Legacy)
> === RUN   TestPolicySourceParse/Service_Intention:_invalid_value_(Legacy)
> === RUN   TestPolicySourceParse/Bad_Policy_-_ACL
> === RUN   TestPolicySourceParse/Bad_Policy_-_Agent
> === RUN   TestPolicySourceParse/Bad_Policy_-_Agent_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Key
> === RUN   TestPolicySourceParse/Bad_Policy_-_Key_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Node
> === RUN   TestPolicySourceParse/Bad_Policy_-_Node_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Service
> === RUN   TestPolicySourceParse/Bad_Policy_-_Service_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Session
> === RUN   TestPolicySourceParse/Bad_Policy_-_Session_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Event
> === RUN   TestPolicySourceParse/Bad_Policy_-_Event_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Prepared_Query
> === RUN   TestPolicySourceParse/Bad_Policy_-_Prepared_Query_Prefix
> === RUN   TestPolicySourceParse/Bad_Policy_-_Keyring
> === RUN   TestPolicySourceParse/Bad_Policy_-_Operator
> === RUN   TestPolicySourceParse/Keyring_Empty
> === RUN   TestPolicySourceParse/Operator_Empty
> --- PASS: TestPolicySourceParse (0.00s)
>     --- PASS: TestPolicySourceParse/Legacy_Basic (0.00s)
>     --- PASS: TestPolicySourceParse/Legacy_(JSON) (0.00s)
>     --- PASS: TestPolicySourceParse/Service_No_Intentions_(Legacy) (0.00s)
>     --- PASS: TestPolicySourceParse/Service_Intentions_(Legacy) (0.00s)
>     --- PASS: TestPolicySourceParse/Service_Intention:_invalid_value_(Legacy) (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_ACL (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Agent (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Agent_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Key (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Key_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Node (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Node_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Service (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Service_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Session (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Session_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Event (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Event_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Prepared_Query (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Prepared_Query_Prefix (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Keyring (0.00s)
>     --- PASS: TestPolicySourceParse/Bad_Policy_-_Operator (0.00s)
>     --- PASS: TestPolicySourceParse/Keyring_Empty (0.00s)
>     --- PASS: TestPolicySourceParse/Operator_Empty (0.00s)
> === RUN   TestMergePolicies
> === RUN   TestMergePolicies/Agents
> === RUN   TestMergePolicies/Events
> === RUN   TestMergePolicies/Node
> === RUN   TestMergePolicies/Keys
> === RUN   TestMergePolicies/Services
> === RUN   TestMergePolicies/Sessions
> === RUN   TestMergePolicies/Prepared_Queries
> === RUN   TestMergePolicies/Write_Precedence
> === RUN   TestMergePolicies/Deny_Precedence
> === RUN   TestMergePolicies/Read_Precedence
> --- PASS: TestMergePolicies (0.00s)
>     --- PASS: TestMergePolicies/Agents (0.00s)
>     --- PASS: TestMergePolicies/Events (0.00s)
>     --- PASS: TestMergePolicies/Node (0.00s)
>     --- PASS: TestMergePolicies/Keys (0.00s)
>     --- PASS: TestMergePolicies/Services (0.00s)
>     --- PASS: TestMergePolicies/Sessions (0.00s)
>     --- PASS: TestMergePolicies/Prepared_Queries (0.00s)
>     --- PASS: TestMergePolicies/Write_Precedence (0.00s)
>     --- PASS: TestMergePolicies/Deny_Precedence (0.00s)
>     --- PASS: TestMergePolicies/Read_Precedence (0.00s)
> === RUN   TestRulesTranslate
> --- PASS: TestRulesTranslate (0.00s)
> === RUN   TestRulesTranslate_GH5493
> --- PASS: TestRulesTranslate_GH5493 (0.00s)
> === RUN   TestPrecedence
> === RUN   TestPrecedence/Deny_Over_Write
> === RUN   TestPrecedence/Deny_Over_List
> === RUN   TestPrecedence/Deny_Over_Read
> === RUN   TestPrecedence/Deny_Over_Unknown
> === RUN   TestPrecedence/Write_Over_List
> === RUN   TestPrecedence/Write_Over_Read
> === RUN   TestPrecedence/Write_Over_Unknown
> === RUN   TestPrecedence/List_Over_Read
> === RUN   TestPrecedence/List_Over_Unknown
> === RUN   TestPrecedence/Read_Over_Unknown
> === RUN   TestPrecedence/Write_Over_Deny
> === RUN   TestPrecedence/List_Over_Deny
> === RUN   TestPrecedence/Read_Over_Deny
> === RUN   TestPrecedence/Deny_Over_Unknown#01
> === RUN   TestPrecedence/List_Over_Write
> === RUN   TestPrecedence/Read_Over_Write
> === RUN   TestPrecedence/Unknown_Over_Write
> === RUN   TestPrecedence/Read_Over_List
> === RUN   TestPrecedence/Unknown_Over_List
> === RUN   TestPrecedence/Unknown_Over_Read
> --- PASS: TestPrecedence (0.00s)
>     --- PASS: TestPrecedence/Deny_Over_Write (0.00s)
>     --- PASS: TestPrecedence/Deny_Over_List (0.00s)
>     --- PASS: TestPrecedence/Deny_Over_Read (0.00s)
>     --- PASS: TestPrecedence/Deny_Over_Unknown (0.00s)
>     --- PASS: TestPrecedence/Write_Over_List (0.00s)
>     --- PASS: TestPrecedence/Write_Over_Read (0.00s)
>     --- PASS: TestPrecedence/Write_Over_Unknown (0.00s)
>     --- PASS: TestPrecedence/List_Over_Read (0.00s)
>     --- PASS: TestPrecedence/List_Over_Unknown (0.00s)
>     --- PASS: TestPrecedence/Read_Over_Unknown (0.00s)
>     --- PASS: TestPrecedence/Write_Over_Deny (0.00s)
>     --- PASS: TestPrecedence/List_Over_Deny (0.00s)
>     --- PASS: TestPrecedence/Read_Over_Deny (0.00s)
>     --- PASS: TestPrecedence/Deny_Over_Unknown#01 (0.00s)
>     --- PASS: TestPrecedence/List_Over_Write (0.00s)
>     --- PASS: TestPrecedence/Read_Over_Write (0.00s)
>     --- PASS: TestPrecedence/Unknown_Over_Write (0.00s)
>     --- PASS: TestPrecedence/Read_Over_List (0.00s)
>     --- PASS: TestPrecedence/Unknown_Over_List (0.00s)
>     --- PASS: TestPrecedence/Unknown_Over_Read (0.00s)
> === RUN   TestStaticAuthorizer
> === PAUSE TestStaticAuthorizer
> === CONT  TestACL_Enforce
> === CONT  TestAllAllowed
> === RUN   TestACL_Enforce/acl/read/Deny
> === RUN   TestAllAllowed/prefix-allow-other-read-prefix
> === CONT  TestPolicyAuthorizer
> === RUN   TestAllAllowed/prefix-allow-other-deny-prefix
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_all_default
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_all_default
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_any_default
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_any_default
> === RUN   TestPolicyAuthorizer/Defaults
> === PAUSE TestPolicyAuthorizer/Defaults
> === RUN   TestAllAllowed/prefix-allow-other-deny-exact
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed
> === CONT  TestStaticAuthorizer
> === RUN   TestStaticAuthorizer/AllowAll
> === PAUSE TestStaticAuthorizer/AllowAll
> === RUN   TestStaticAuthorizer/DenyAll
> === PAUSE TestStaticAuthorizer/DenyAll
> === RUN   TestStaticAuthorizer/ManageAll
> === PAUSE TestStaticAuthorizer/ManageAll
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed
> === CONT  TestAnyAllowed
> === CONT  TestChainedAuthorizer
> === RUN   TestChainedAuthorizer/No_Authorizers
> === PAUSE TestChainedAuthorizer/No_Authorizers
> === RUN   TestChainedAuthorizer/Authorizer_Defaults
> === PAUSE TestChainedAuthorizer/Authorizer_Defaults
> === RUN   TestChainedAuthorizer/Authorizer_No_Defaults
> === PAUSE TestChainedAuthorizer/Authorizer_No_Defaults
> === RUN   TestChainedAuthorizer/First_Found
> === PAUSE TestChainedAuthorizer/First_Found
> === RUN   TestAnyAllowed/prefix-list-allowed
> === CONT  TestStaticAuthorizer/AllowAll
> === RUN   TestAllAllowed/no-rules-default
> === RUN   TestAllAllowed/prefix-write-allowed
> === RUN   TestACL_Enforce/acl/read/Allow
> === RUN   TestAllAllowed/prefix-allow-other-write-prefix
> === RUN   TestACL_Enforce/acl/write/Deny
> === RUN   TestACL_Enforce/acl/write/Allow
> === RUN   TestAllAllowed/prefix-allow-other-list-prefix
> === RUN   TestAllAllowed/prefix-allow-other-list-exact
> === RUN   TestACL_Enforce/acl/list/Deny
> === RUN   TestAllAllowed/prefix-allow-other-read-exact
> === RUN   TestACL_Enforce/operator/read/Deny
> === RUN   TestAllAllowed/prefix-list-allowed
> === RUN   TestAllAllowed/prefix-read-allowed
> === RUN   TestAllAllowed/prefix-deny
> === RUN   TestACL_Enforce/operator/read/Allow
> === RUN   TestAllAllowed/prefix-allow-other-write-exact
> --- PASS: TestAllAllowed (0.02s)
>     --- PASS: TestAllAllowed/prefix-allow-other-read-prefix (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-deny-prefix (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-deny-exact (0.00s)
>     --- PASS: TestAllAllowed/no-rules-default (0.00s)
>     --- PASS: TestAllAllowed/prefix-write-allowed (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-write-prefix (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-list-prefix (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-list-exact (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-read-exact (0.00s)
>     --- PASS: TestAllAllowed/prefix-list-allowed (0.00s)
>     --- PASS: TestAllAllowed/prefix-read-allowed (0.00s)
>     --- PASS: TestAllAllowed/prefix-deny (0.00s)
>     --- PASS: TestAllAllowed/prefix-allow-other-write-exact (0.00s)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_all_default
> === RUN   TestACL_Enforce/operator/write/Deny
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AnyAllowed.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AnyAllowed.Prefix(*)
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AllDefault.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AllDefault.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AnyAllowed.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AnyAllowed.Prefix(*)
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AllDenied.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AllDenied.Prefix(*)
> === RUN   TestACL_Enforce/operator/write/Allow
> === CONT  TestStaticAuthorizer/DenyAll
> === RUN   TestACL_Enforce/operator/list/Deny
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied
> === RUN   TestACL_Enforce/keyring/read/Deny
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AnyAllowed.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AnyAllowed.Prefix(*)
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AllDenied.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AllDenied.Prefix(*)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)#01
> === RUN   TestACL_Enforce/keyring/read/Allow
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)#01
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)#01
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadAllowed.Prefix(foo)
> === RUN   TestACL_Enforce/keyring/write/Deny
> === RUN   TestAnyAllowed/prefix-deny-other-write-exact
> === RUN   TestAnyAllowed/prefix-deny-other-list-exact
> === RUN   TestACL_Enforce/keyring/write/Allow
> === RUN   TestAnyAllowed/prefix-deny-other-deny-prefix
> === RUN   TestAnyAllowed/prefix-deny-other-deny-exact
> === RUN   TestAnyAllowed/prefix-deny-other-read-exact
> === RUN   TestAnyAllowed/no-rules-default
> === RUN   TestACL_Enforce/keyring/list/Deny
> === RUN   TestAnyAllowed/prefix-write-allowed
> === RUN   TestACL_Enforce/agent/foo/read/Deny
> === RUN   TestAnyAllowed/prefix-read-allowed
> === RUN   TestAnyAllowed/prefix-deny
> === RUN   TestAnyAllowed/prefix-deny-other-write-prefix
> === RUN   TestAnyAllowed/prefix-deny-other-list-prefix
> === RUN   TestACL_Enforce/agent/foo/read/Allow
> === RUN   TestAnyAllowed/prefix-deny-other-read-prefix
> --- PASS: TestAnyAllowed (0.02s)
>     --- PASS: TestAnyAllowed/prefix-list-allowed (0.02s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-write-exact (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-list-exact (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-deny-prefix (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-deny-exact (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-read-exact (0.00s)
>     --- PASS: TestAnyAllowed/no-rules-default (0.00s)
>     --- PASS: TestAnyAllowed/prefix-write-allowed (0.00s)
>     --- PASS: TestAnyAllowed/prefix-read-allowed (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-write-prefix (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-list-prefix (0.00s)
>     --- PASS: TestAnyAllowed/prefix-deny-other-read-prefix (0.00s)
> === CONT  TestPolicyAuthorizer/Defaults
> === RUN   TestPolicyAuthorizer/Defaults/DefaultACLRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultACLRead.Prefix(foo)
> === RUN   TestACL_Enforce/agent/foo/write/Deny
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteDenied.Prefix(football)
> === RUN   TestACL_Enforce/agent/foo/write/Allow
> === RUN   TestPolicyAuthorizer/Defaults/DefaultACLWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultACLWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultAgentRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultAgentRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultAgentWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultAgentWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultEventRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultEventRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultEventWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultEventWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultIntentionDefaultAllow.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultIntentionDefaultAllow.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultIntentionRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultIntentionRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultIntentionWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultIntentionWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultKeyRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultKeyRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteDenied.Prefix(football)
> === CONT  TestStaticAuthorizer/ManageAll
> === RUN   TestPolicyAuthorizer/Defaults/DefaultKeyList.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultKeyList.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultKeyringRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultKeyringRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultKeyringWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultKeyringWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultKeyWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultKeyWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultKeyWritePrefix.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultKeyWritePrefix.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultNodeRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultNodeRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultNodeWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultNodeWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultOperatorRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultOperatorRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_any_default
> === RUN   TestPolicyAuthorizer/Defaults/DefaultOperatorWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultOperatorWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultPreparedQueryRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultPreparedQueryRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultPreparedQueryWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultPreparedQueryWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AnyDefault.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AnyDefault.Prefix(*)
> --- PASS: TestStaticAuthorizer (0.00s)
>     --- PASS: TestStaticAuthorizer/AllowAll (0.02s)
>     --- PASS: TestStaticAuthorizer/DenyAll (0.00s)
>     --- PASS: TestStaticAuthorizer/ManageAll (0.00s)
> === RUN   TestACL_Enforce/agent/foo/list/Deny
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AllDenied.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AllDenied.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed
> === RUN   TestACL_Enforce/event/foo/read/Deny
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AnyAllowed.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AnyAllowed.Prefix(*)
> === RUN   TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AllAllowed.Prefix(*)
> === PAUSE TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AllAllowed.Prefix(*)
> === CONT  TestChainedAuthorizer/No_Authorizers
> === RUN   TestACL_Enforce/event/foo/read/Allow
> === CONT  TestChainedAuthorizer/First_Found
> === RUN   TestACL_Enforce/event/foo/write/Deny
> === CONT  TestChainedAuthorizer/Authorizer_No_Defaults
> === RUN   TestACL_Enforce/event/foo/write/Allow
> === CONT  TestChainedAuthorizer/Authorizer_Defaults
> === RUN   TestACL_Enforce/event/foo/list/Deny
> === RUN   TestACL_Enforce/intention/foo/read/Deny
> --- PASS: TestChainedAuthorizer (0.00s)
>     --- PASS: TestChainedAuthorizer/No_Authorizers (0.00s)
>     --- PASS: TestChainedAuthorizer/First_Found (0.00s)
>     --- PASS: TestChainedAuthorizer/Authorizer_No_Defaults (0.00s)
>     --- PASS: TestChainedAuthorizer/Authorizer_Defaults (0.00s)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AnyAllowed.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AnyAllowed.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AllDefault.Prefix(*)
> === RUN   TestACL_Enforce/intention/foo/read/Allow
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AnyAllowed.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AllDenied.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AllDenied.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AnyDefault.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AnyAllowed.Prefix(*)
> === RUN   TestACL_Enforce/intention/foo/write/Deny
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AllDenied.Prefix(*)
> === CONT  TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AllAllowed.Prefix(*)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultServiceRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultServiceRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultServiceWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultServiceWrite.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultSessionRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultSessionRead.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Defaults/DefaultSessionWrite.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultSessionWrite.Prefix(foo)
> === RUN   TestACL_Enforce/intention/foo/write/Allow
> === RUN   TestPolicyAuthorizer/Defaults/DefaultSnapshot.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Defaults/DefaultSnapshot.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultACLRead.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL_Enforce/intention/foo/list/Deny
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(for)
> === RUN   TestACL_Enforce/node/foo/read/Deny
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadDenied.Prefix(football)
> === RUN   TestACL_Enforce/node/foo/read/Allow
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(fo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(fo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(for)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(for)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteAllowed.Prefix(foo)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteAllowed.Prefix(foo)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot)
> === RUN   TestACL_Enforce/node/foo/write/Deny
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot2)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot2)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(food)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(food)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadDenied.Prefix(football)
> === RUN   TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteDenied.Prefix(football)
> === PAUSE TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultSnapshot.Prefix(foo)
> === RUN   TestACL_Enforce/node/foo/write/Allow
> === CONT  TestPolicyAuthorizer/Defaults/DefaultSessionWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultSessionRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultServiceWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultServiceRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultPreparedQueryWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultPreparedQueryRead.Prefix(foo)
> === RUN   TestACL_Enforce/node/foo/list/Deny
> === CONT  TestPolicyAuthorizer/Defaults/DefaultOperatorWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultOperatorRead.Prefix(foo)
> === RUN   TestACL_Enforce/query/foo/read/Deny
> === CONT  TestPolicyAuthorizer/Defaults/DefaultNodeWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultNodeRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultKeyWritePrefix.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultKeyWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultKeyringWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultKeyringRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultKeyList.Prefix(foo)
> === RUN   TestACL_Enforce/query/foo/read/Allow
> === CONT  TestPolicyAuthorizer/Defaults/DefaultKeyRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultIntentionWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultIntentionRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultIntentionDefaultAllow.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultEventWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultEventRead.Prefix(foo)
> === RUN   TestACL_Enforce/query/foo/write/Deny
> === CONT  TestPolicyAuthorizer/Defaults/DefaultAgentWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultAgentRead.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Defaults/DefaultACLWrite.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadDenied.Prefix(football)
> === RUN   TestACL_Enforce/query/foo/write/Allow
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteAllowed.Prefix(foo)
> === RUN   TestACL_Enforce/query/foo/list/Deny
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(for)
> === RUN   TestACL_Enforce/service/foo/read/Deny
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(food)
> === RUN   TestACL_Enforce/service/foo/read/Allow
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(for)
> === RUN   TestACL_Enforce/service/foo/write/Deny
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(food)
> === RUN   TestACL_Enforce/service/foo/write/Allow
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(for)
> === RUN   TestACL_Enforce/session/foo/list/Deny
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadDenied.Prefix(football)
> === RUN   TestACL_Enforce/session/foo/read/Deny
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot)
> === RUN   TestACL_Enforce/session/foo/read/Allow
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(fo)
> === RUN   TestACL_Enforce/session/foo/write/Deny
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)#01
> === RUN   TestACL_Enforce/session/foo/write/Allow
> === RUN   TestACL_Enforce/session/foo/list/Deny#01
> === RUN   TestACL_Enforce/key/foo/read/Deny
> === RUN   TestACL_Enforce/key/foo/read/Allow
> === RUN   TestACL_Enforce/key/foo/write/Deny
> === RUN   TestACL_Enforce/key/foo/write/Allow
> === RUN   TestACL_Enforce/key/foo/list/Deny
> === RUN   TestACL_Enforce/key/foo/list/Allow
> === RUN   TestACL_Enforce/key/foo/deny/Deny
> === RUN   TestACL_Enforce/not-a-real-resource/read/Deny
> --- PASS: TestACL_Enforce (0.03s)
>     --- PASS: TestACL_Enforce/acl/read/Deny (0.01s)
>         authorizer_test.go:618: PASS:	ACLRead(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/acl/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	ACLRead(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/acl/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	ACLWrite(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/acl/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	ACLWrite(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/acl/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/operator/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	OperatorRead(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/operator/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	OperatorRead(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/operator/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	OperatorWrite(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/operator/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	OperatorWrite(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/operator/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/keyring/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	KeyringRead(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/keyring/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	KeyringRead(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/keyring/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	KeyringWrite(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/keyring/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	KeyringWrite(*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/keyring/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/agent/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	AgentRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/agent/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	AgentRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/agent/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	AgentWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/agent/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	AgentWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/agent/foo/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/event/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	EventRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/event/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	EventRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/event/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	EventWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/event/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	EventWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/event/foo/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/intention/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	IntentionRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/intention/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	IntentionRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/intention/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	IntentionWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/intention/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	IntentionWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/intention/foo/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/node/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	NodeRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/node/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	NodeRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/node/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	NodeWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/node/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	NodeWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/node/foo/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/query/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	PreparedQueryRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/query/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	PreparedQueryRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/query/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	PreparedQueryWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/query/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	PreparedQueryWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/query/foo/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/service/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	ServiceRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/service/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	ServiceRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/service/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	ServiceWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/service/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	ServiceWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/session/foo/list/Deny (0.00s)
>     --- PASS: TestACL_Enforce/session/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	SessionRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/session/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	SessionRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/session/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	SessionWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/session/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	SessionWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/session/foo/list/Deny#01 (0.00s)
>     --- PASS: TestACL_Enforce/key/foo/read/Deny (0.00s)
>         authorizer_test.go:618: PASS:	KeyRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/key/foo/read/Allow (0.00s)
>         authorizer_test.go:618: PASS:	KeyRead(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/key/foo/write/Deny (0.00s)
>         authorizer_test.go:618: PASS:	KeyWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/key/foo/write/Allow (0.00s)
>         authorizer_test.go:618: PASS:	KeyWrite(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/key/foo/list/Deny (0.00s)
>         authorizer_test.go:618: PASS:	KeyList(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/key/foo/list/Allow (0.00s)
>         authorizer_test.go:618: PASS:	KeyList(string,*acl.AuthorizerContext)
>     --- PASS: TestACL_Enforce/key/foo/deny/Deny (0.00s)
>     --- PASS: TestACL_Enforce/not-a-real-resource/read/Deny (0.00s)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadDenied.Prefix(football)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot2)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadAllowed.Prefix(foo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(for)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(fo)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)#01
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(food)
> === CONT  TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot2)
> --- PASS: TestPolicyAuthorizer (0.00s)
>     --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_all_default (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AnyAllowed.Prefix(*) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_all_default/AllDefault.Prefix(*) (0.00s)
>     --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AnyAllowed.Prefix(*) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_prefix_allowed/AllDenied.Prefix(*) (0.00s)
>     --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AnyAllowed.Prefix(*) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_prefix_denied/AllDenied.Prefix(*) (0.00s)
>     --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_any_default (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AnyDefault.Prefix(*) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_any_default/AllDenied.Prefix(*) (0.00s)
>     --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AnyAllowed.Prefix(*) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Intention_Wildcards_-_all_allowed/AllAllowed.Prefix(*) (0.00s)
>     --- PASS: TestPolicyAuthorizer/Defaults (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultACLRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultSnapshot.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultSessionWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultSessionRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultServiceWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultServiceRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultPreparedQueryWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultPreparedQueryRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultOperatorWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultOperatorRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultNodeWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultNodeRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultKeyWritePrefix.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultKeyWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultKeyringWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultKeyringRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultKeyList.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultKeyRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultIntentionWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultIntentionRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultIntentionDefaultAllow.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultEventWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultEventRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultAgentWrite.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultAgentRead.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Defaults/DefaultACLWrite.Prefix(foo) (0.00s)
>     --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/PreparedQueryReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/EventReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/IntentionReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(for)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/ServiceReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/KeyReadPrefixAllowed.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadDenied.Prefix(football) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot2) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(foot) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWriteAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadAllowed.Prefix(foo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentReadPrefixAllowed.Prefix(for) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/AgentWritePrefixDenied.Prefix(fo) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/NodeReadPrefixAllowed.Prefix(foot)#01 (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionReadPrefixAllowed.Prefix(food) (0.00s)
>         --- PASS: TestPolicyAuthorizer/Prefer_Exact_Matches/SessionWritePrefixDenied.Prefix(foot2) (0.00s)
> PASS
> ok  	github.com/hashicorp/consul/acl	0.141s
> === RUN   TestACL_Legacy_Disabled_Response
> === PAUSE TestACL_Legacy_Disabled_Response
> === RUN   TestACL_Legacy_Update
> === PAUSE TestACL_Legacy_Update
> === RUN   TestACL_Legacy_UpdateUpsert
> === PAUSE TestACL_Legacy_UpdateUpsert
> === RUN   TestACL_Legacy_Destroy
> === PAUSE TestACL_Legacy_Destroy
> === RUN   TestACL_Legacy_Clone
> === PAUSE TestACL_Legacy_Clone
> === RUN   TestACL_Legacy_Get
> === PAUSE TestACL_Legacy_Get
> === RUN   TestACL_Legacy_List
> --- SKIP: TestACL_Legacy_List (0.00s)
>     acl_endpoint_legacy_test.go:253: DM-skipped
> === RUN   TestACLReplicationStatus
> === PAUSE TestACLReplicationStatus
> === RUN   TestACL_Disabled_Response
> === PAUSE TestACL_Disabled_Response
> === RUN   TestACL_Bootstrap
> === PAUSE TestACL_Bootstrap
> === RUN   TestACL_HTTP
> === PAUSE TestACL_HTTP
> === RUN   TestACL_LoginProcedure_HTTP
> === PAUSE TestACL_LoginProcedure_HTTP
> === RUN   TestACL_Authorize
> === PAUSE TestACL_Authorize
> === RUN   TestACL_Version8
> === PAUSE TestACL_Version8
> === RUN   TestACL_AgentMasterToken
> === PAUSE TestACL_AgentMasterToken
> === RUN   TestACL_RootAuthorizersDenied
> === PAUSE TestACL_RootAuthorizersDenied
> === RUN   TestACL_vetServiceRegister
> === PAUSE TestACL_vetServiceRegister
> === RUN   TestACL_vetServiceUpdate
> === PAUSE TestACL_vetServiceUpdate
> === RUN   TestACL_vetCheckRegister
> === PAUSE TestACL_vetCheckRegister
> === RUN   TestACL_vetCheckUpdate
> === PAUSE TestACL_vetCheckUpdate
> === RUN   TestACL_filterMembers
> === PAUSE TestACL_filterMembers
> === RUN   TestACL_filterServices
> === PAUSE TestACL_filterServices
> === RUN   TestACL_filterChecks
> === PAUSE TestACL_filterChecks
> === RUN   TestAgent_Services
> === PAUSE TestAgent_Services
> === RUN   TestAgent_ServicesFiltered
> === PAUSE TestAgent_ServicesFiltered
> === RUN   TestAgent_Services_ExternalConnectProxy
> === PAUSE TestAgent_Services_ExternalConnectProxy
> === RUN   TestAgent_Services_Sidecar
> === PAUSE TestAgent_Services_Sidecar
> === RUN   TestAgent_Services_MeshGateway
> === PAUSE TestAgent_Services_MeshGateway
> === RUN   TestAgent_Services_ACLFilter
> === PAUSE TestAgent_Services_ACLFilter
> === RUN   TestAgent_Service
> --- SKIP: TestAgent_Service (0.00s)
>     agent_endpoint_test.go:276: DM-skipped
> === RUN   TestAgent_Checks
> === PAUSE TestAgent_Checks
> === RUN   TestAgent_ChecksWithFilter
> === PAUSE TestAgent_ChecksWithFilter
> === RUN   TestAgent_HealthServiceByID
> === PAUSE TestAgent_HealthServiceByID
> === RUN   TestAgent_HealthServiceByName
> === PAUSE TestAgent_HealthServiceByName
> === RUN   TestAgent_HealthServicesACLEnforcement
> === PAUSE TestAgent_HealthServicesACLEnforcement
> === RUN   TestAgent_Checks_ACLFilter
> === PAUSE TestAgent_Checks_ACLFilter
> === RUN   TestAgent_Self
> === PAUSE TestAgent_Self
> === RUN   TestAgent_Self_ACLDeny
> === PAUSE TestAgent_Self_ACLDeny
> === RUN   TestAgent_Metrics_ACLDeny
> === PAUSE TestAgent_Metrics_ACLDeny
> === RUN   TestAgent_Reload
> === PAUSE TestAgent_Reload
> === RUN   TestAgent_Reload_ACLDeny
> === PAUSE TestAgent_Reload_ACLDeny
> === RUN   TestAgent_Members
> === PAUSE TestAgent_Members
> === RUN   TestAgent_Members_WAN
> === PAUSE TestAgent_Members_WAN
> === RUN   TestAgent_Members_ACLFilter
> === PAUSE TestAgent_Members_ACLFilter
> === RUN   TestAgent_Join
> === PAUSE TestAgent_Join
> === RUN   TestAgent_Join_WAN
> === PAUSE TestAgent_Join_WAN
> === RUN   TestAgent_Join_ACLDeny
> === PAUSE TestAgent_Join_ACLDeny
> === RUN   TestAgent_JoinLANNotify
> === PAUSE TestAgent_JoinLANNotify
> === RUN   TestAgent_Leave
> --- SKIP: TestAgent_Leave (0.00s)
>     agent_endpoint_test.go:1575: DM-skipped
> === RUN   TestAgent_Leave_ACLDeny
> === PAUSE TestAgent_Leave_ACLDeny
> === RUN   TestAgent_ForceLeave
> --- SKIP: TestAgent_ForceLeave (0.00s)
>     agent_endpoint_test.go:1643: DM-skipped
> === RUN   TestAgent_ForceLeave_ACLDeny
> === PAUSE TestAgent_ForceLeave_ACLDeny
> === RUN   TestAgent_ForceLeavePrune
> === PAUSE TestAgent_ForceLeavePrune
> === RUN   TestAgent_RegisterCheck
> === PAUSE TestAgent_RegisterCheck
> === RUN   TestAgent_RegisterCheck_Scripts
> --- SKIP: TestAgent_RegisterCheck_Scripts (0.00s)
>     agent_endpoint_test.go:1822: DM-skipped
> === RUN   TestAgent_RegisterCheckScriptsExecDisable
> === PAUSE TestAgent_RegisterCheckScriptsExecDisable
> === RUN   TestAgent_RegisterCheckScriptsExecRemoteDisable
> === PAUSE TestAgent_RegisterCheckScriptsExecRemoteDisable
> === RUN   TestAgent_RegisterCheck_Passing
> === PAUSE TestAgent_RegisterCheck_Passing
> === RUN   TestAgent_RegisterCheck_BadStatus
> === PAUSE TestAgent_RegisterCheck_BadStatus
> === RUN   TestAgent_RegisterCheck_ACLDeny
> === PAUSE TestAgent_RegisterCheck_ACLDeny
> === RUN   TestAgent_DeregisterCheck
> === PAUSE TestAgent_DeregisterCheck
> === RUN   TestAgent_DeregisterCheckACLDeny
> === PAUSE TestAgent_DeregisterCheckACLDeny
> === RUN   TestAgent_PassCheck
> === PAUSE TestAgent_PassCheck
> === RUN   TestAgent_PassCheck_ACLDeny
> === PAUSE TestAgent_PassCheck_ACLDeny
> === RUN   TestAgent_WarnCheck
> === PAUSE TestAgent_WarnCheck
> === RUN   TestAgent_WarnCheck_ACLDeny
> === PAUSE TestAgent_WarnCheck_ACLDeny
> === RUN   TestAgent_FailCheck
> === PAUSE TestAgent_FailCheck
> === RUN   TestAgent_FailCheck_ACLDeny
> === PAUSE TestAgent_FailCheck_ACLDeny
> === RUN   TestAgent_UpdateCheck
> --- SKIP: TestAgent_UpdateCheck (0.00s)
>     agent_endpoint_test.go:2367: DM-skipped
> === RUN   TestAgent_UpdateCheck_ACLDeny
> === PAUSE TestAgent_UpdateCheck_ACLDeny
> === RUN   TestAgent_RegisterService
> === RUN   TestAgent_RegisterService/normal
> === PAUSE TestAgent_RegisterService/normal
> === RUN   TestAgent_RegisterService/service_manager
> === PAUSE TestAgent_RegisterService/service_manager
> === CONT  TestAgent_RegisterService/normal
> [INFO] freeport: blockSize 1500 too big for system limit 1024. Adjusting...
> === CONT  TestAgent_RegisterService/service_manager
> [INFO] freeport: detected ephemeral port range of [32768, 60999]
> [INFO] freeport: reducing max blocks from 30 to 22 to avoid the ephemeral port range
> --- PASS: TestAgent_RegisterService (0.00s)
>     --- PASS: TestAgent_RegisterService/service_manager (0.18s)
>         writer.go:29: 2020-02-23T02:45:59.679Z [WARN]  TestAgent_RegisterService/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:45:59.679Z [DEBUG] TestAgent_RegisterService/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:45:59.680Z [DEBUG] TestAgent_RegisterService/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:45:59.696Z [INFO]  TestAgent_RegisterService/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9f1a8051-d26c-b6d2-d042-de46de4109dd Address:127.0.0.1:16138}]"
>         writer.go:29: 2020-02-23T02:45:59.697Z [INFO]  TestAgent_RegisterService/service_manager.server.serf.wan: serf: EventMemberJoin: Node-9f1a8051-d26c-b6d2-d042-de46de4109dd.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:45:59.697Z [INFO]  TestAgent_RegisterService/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16138 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:45:59.698Z [INFO]  TestAgent_RegisterService/service_manager.server.serf.lan: serf: EventMemberJoin: Node-9f1a8051-d26c-b6d2-d042-de46de4109dd 127.0.0.1
>         writer.go:29: 2020-02-23T02:45:59.723Z [INFO]  TestAgent_RegisterService/service_manager: Started DNS server: address=127.0.0.1:16133 network=udp
>         writer.go:29: 2020-02-23T02:45:59.724Z [INFO]  TestAgent_RegisterService/service_manager.server: Adding LAN server: server="Node-9f1a8051-d26c-b6d2-d042-de46de4109dd (Addr: tcp/127.0.0.1:16138) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:45:59.725Z [INFO]  TestAgent_RegisterService/service_manager.server: Handled event for server in area: event=member-join server=Node-9f1a8051-d26c-b6d2-d042-de46de4109dd.dc1 area=wan
>         writer.go:29: 2020-02-23T02:45:59.725Z [INFO]  TestAgent_RegisterService/service_manager: Started DNS server: address=127.0.0.1:16133 network=tcp
>         writer.go:29: 2020-02-23T02:45:59.726Z [INFO]  TestAgent_RegisterService/service_manager: Started HTTP server: address=127.0.0.1:16134 network=tcp
>         writer.go:29: 2020-02-23T02:45:59.726Z [INFO]  TestAgent_RegisterService/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:45:59.756Z [WARN]  TestAgent_RegisterService/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:45:59.756Z [INFO]  TestAgent_RegisterService/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16138 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:45:59.759Z [DEBUG] TestAgent_RegisterService/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:45:59.759Z [DEBUG] TestAgent_RegisterService/service_manager.server.raft: vote granted: from=9f1a8051-d26c-b6d2-d042-de46de4109dd term=2 tally=1
>         writer.go:29: 2020-02-23T02:45:59.759Z [INFO]  TestAgent_RegisterService/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:45:59.759Z [INFO]  TestAgent_RegisterService/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16138 [Leader]"
>         writer.go:29: 2020-02-23T02:45:59.759Z [INFO]  TestAgent_RegisterService/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:45:59.759Z [INFO]  TestAgent_RegisterService/service_manager.server: New leader elected: payload=Node-9f1a8051-d26c-b6d2-d042-de46de4109dd
>         writer.go:29: 2020-02-23T02:45:59.766Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:45:59.796Z [INFO]  TestAgent_RegisterService/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:45:59.796Z [DEBUG] TestAgent_RegisterService/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:45:59.798Z [INFO]  TestAgent_RegisterService/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:45:59.798Z [INFO]  TestAgent_RegisterService/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:45:59.798Z [DEBUG] TestAgent_RegisterService/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-9f1a8051-d26c-b6d2-d042-de46de4109dd
>         writer.go:29: 2020-02-23T02:45:59.798Z [INFO]  TestAgent_RegisterService/service_manager.server: member joined, marking health alive: member=Node-9f1a8051-d26c-b6d2-d042-de46de4109dd
>         writer.go:29: 2020-02-23T02:45:59.821Z [DEBUG] TestAgent_RegisterService/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:45:59.824Z [INFO]  TestAgent_RegisterService/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:45:59.824Z [DEBUG] TestAgent_RegisterService/service_manager: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:45:59.824Z [DEBUG] TestAgent_RegisterService/service_manager: Check in sync: check=service:test:2
>         writer.go:29: 2020-02-23T02:45:59.824Z [DEBUG] TestAgent_RegisterService/service_manager: Check in sync: check=service:test:3
>         writer.go:29: 2020-02-23T02:45:59.824Z [INFO]  TestAgent_RegisterService/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:45:59.824Z [INFO]  TestAgent_RegisterService/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:45:59.824Z [DEBUG] TestAgent_RegisterService/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:45:59.824Z [WARN]  TestAgent_RegisterService/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:45:59.824Z [DEBUG] TestAgent_RegisterService/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:45:59.826Z [WARN]  TestAgent_RegisterService/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:45:59.827Z [INFO]  TestAgent_RegisterService/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:45:59.827Z [INFO]  TestAgent_RegisterService/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:45:59.827Z [INFO]  TestAgent_RegisterService/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:45:59.827Z [INFO]  TestAgent_RegisterService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16133 network=tcp
>         writer.go:29: 2020-02-23T02:45:59.827Z [INFO]  TestAgent_RegisterService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16133 network=udp
>         writer.go:29: 2020-02-23T02:45:59.828Z [INFO]  TestAgent_RegisterService/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16134 network=tcp
>         writer.go:29: 2020-02-23T02:45:59.828Z [INFO]  TestAgent_RegisterService/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:45:59.828Z [INFO]  TestAgent_RegisterService/service_manager: Endpoints down
>     --- PASS: TestAgent_RegisterService/normal (0.39s)
>         writer.go:29: 2020-02-23T02:45:59.675Z [WARN]  TestAgent_RegisterService/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:45:59.675Z [DEBUG] TestAgent_RegisterService/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:45:59.676Z [DEBUG] TestAgent_RegisterService/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:45:59.728Z [INFO]  TestAgent_RegisterService/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:210dc8e9-d533-6d13-8110-47301eb10b9d Address:127.0.0.1:16132}]"
>         writer.go:29: 2020-02-23T02:45:59.728Z [INFO]  TestAgent_RegisterService/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16132 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:45:59.729Z [INFO]  TestAgent_RegisterService/normal.server.serf.wan: serf: EventMemberJoin: Node-210dc8e9-d533-6d13-8110-47301eb10b9d.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:45:59.729Z [INFO]  TestAgent_RegisterService/normal.server.serf.lan: serf: EventMemberJoin: Node-210dc8e9-d533-6d13-8110-47301eb10b9d 127.0.0.1
>         writer.go:29: 2020-02-23T02:45:59.730Z [INFO]  TestAgent_RegisterService/normal.server: Adding LAN server: server="Node-210dc8e9-d533-6d13-8110-47301eb10b9d (Addr: tcp/127.0.0.1:16132) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:45:59.730Z [INFO]  TestAgent_RegisterService/normal.server: Handled event for server in area: event=member-join server=Node-210dc8e9-d533-6d13-8110-47301eb10b9d.dc1 area=wan
>         writer.go:29: 2020-02-23T02:45:59.731Z [INFO]  TestAgent_RegisterService/normal: Started DNS server: address=127.0.0.1:16127 network=udp
>         writer.go:29: 2020-02-23T02:45:59.731Z [INFO]  TestAgent_RegisterService/normal: Started DNS server: address=127.0.0.1:16127 network=tcp
>         writer.go:29: 2020-02-23T02:45:59.731Z [INFO]  TestAgent_RegisterService/normal: Started HTTP server: address=127.0.0.1:16128 network=tcp
>         writer.go:29: 2020-02-23T02:45:59.731Z [INFO]  TestAgent_RegisterService/normal: started state syncer
>         writer.go:29: 2020-02-23T02:45:59.764Z [WARN]  TestAgent_RegisterService/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:45:59.764Z [INFO]  TestAgent_RegisterService/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16132 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:45:59.772Z [DEBUG] TestAgent_RegisterService/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:45:59.772Z [DEBUG] TestAgent_RegisterService/normal.server.raft: vote granted: from=210dc8e9-d533-6d13-8110-47301eb10b9d term=2 tally=1
>         writer.go:29: 2020-02-23T02:45:59.772Z [INFO]  TestAgent_RegisterService/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:45:59.772Z [INFO]  TestAgent_RegisterService/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16132 [Leader]"
>         writer.go:29: 2020-02-23T02:45:59.772Z [INFO]  TestAgent_RegisterService/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:45:59.773Z [INFO]  TestAgent_RegisterService/normal.server: New leader elected: payload=Node-210dc8e9-d533-6d13-8110-47301eb10b9d
>         writer.go:29: 2020-02-23T02:45:59.779Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:45:59.798Z [INFO]  TestAgent_RegisterService/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:45:59.798Z [INFO]  TestAgent_RegisterService/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:45:59.798Z [DEBUG] TestAgent_RegisterService/normal.server: Skipping self join check for node since the cluster is too small: node=Node-210dc8e9-d533-6d13-8110-47301eb10b9d
>         writer.go:29: 2020-02-23T02:45:59.798Z [INFO]  TestAgent_RegisterService/normal.server: member joined, marking health alive: member=Node-210dc8e9-d533-6d13-8110-47301eb10b9d
>         writer.go:29: 2020-02-23T02:45:59.901Z [DEBUG] TestAgent_RegisterService/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:45:59.904Z [INFO]  TestAgent_RegisterService/normal: Synced node info
>         writer.go:29: 2020-02-23T02:45:59.904Z [DEBUG] TestAgent_RegisterService/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.014Z [DEBUG] TestAgent_RegisterService/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.022Z [INFO]  TestAgent_RegisterService/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.022Z [DEBUG] TestAgent_RegisterService/normal: Check in sync: check=service:test:2
>         writer.go:29: 2020-02-23T02:46:00.022Z [DEBUG] TestAgent_RegisterService/normal: Check in sync: check=service:test:3
>         writer.go:29: 2020-02-23T02:46:00.023Z [DEBUG] TestAgent_RegisterService/normal: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:00.023Z [INFO]  TestAgent_RegisterService/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:00.023Z [INFO]  TestAgent_RegisterService/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:00.023Z [DEBUG] TestAgent_RegisterService/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.023Z [WARN]  TestAgent_RegisterService/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.023Z [DEBUG] TestAgent_RegisterService/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.025Z [WARN]  TestAgent_RegisterService/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: Stopping server: protocol=DNS address=127.0.0.1:16127 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: Stopping server: protocol=DNS address=127.0.0.1:16127 network=udp
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: Stopping server: protocol=HTTP address=127.0.0.1:16128 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:00.026Z [INFO]  TestAgent_RegisterService/normal: Endpoints down
> === RUN   TestAgent_RegisterService_ReRegister
> === RUN   TestAgent_RegisterService_ReRegister/normal
> === PAUSE TestAgent_RegisterService_ReRegister/normal
> === RUN   TestAgent_RegisterService_ReRegister/service_manager
> === PAUSE TestAgent_RegisterService_ReRegister/service_manager
> === CONT  TestAgent_RegisterService_ReRegister/normal
> === CONT  TestAgent_RegisterService_ReRegister/service_manager
> --- PASS: TestAgent_RegisterService_ReRegister (0.00s)
>     --- PASS: TestAgent_RegisterService_ReRegister/normal (0.23s)
>         writer.go:29: 2020-02-23T02:46:00.036Z [WARN]  TestAgent_RegisterService_ReRegister/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:00.037Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:00.038Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:00.048Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:01128b1a-90aa-bca3-fb84-d95696e6540e Address:127.0.0.1:16144}]"
>         writer.go:29: 2020-02-23T02:46:00.048Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16144 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:00.051Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.serf.wan: serf: EventMemberJoin: Node-01128b1a-90aa-bca3-fb84-d95696e6540e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.serf.lan: serf: EventMemberJoin: Node-01128b1a-90aa-bca3-fb84-d95696e6540e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Started DNS server: address=127.0.0.1:16139 network=udp
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server: Adding LAN server: server="Node-01128b1a-90aa-bca3-fb84-d95696e6540e (Addr: tcp/127.0.0.1:16144) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server: Handled event for server in area: event=member-join server=Node-01128b1a-90aa-bca3-fb84-d95696e6540e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Started DNS server: address=127.0.0.1:16139 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Started HTTP server: address=127.0.0.1:16140 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.052Z [INFO]  TestAgent_RegisterService_ReRegister/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:00.086Z [WARN]  TestAgent_RegisterService_ReRegister/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:00.086Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16144 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:00.089Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:00.089Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.server.raft: vote granted: from=01128b1a-90aa-bca3-fb84-d95696e6540e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:00.089Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:00.089Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16144 [Leader]"
>         writer.go:29: 2020-02-23T02:46:00.090Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:00.093Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server: New leader elected: payload=Node-01128b1a-90aa-bca3-fb84-d95696e6540e
>         writer.go:29: 2020-02-23T02:46:00.097Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:00.104Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:00.104Z [INFO]  TestAgent_RegisterService_ReRegister/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.104Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.server: Skipping self join check for node since the cluster is too small: node=Node-01128b1a-90aa-bca3-fb84-d95696e6540e
>         writer.go:29: 2020-02-23T02:46:00.104Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server: member joined, marking health alive: member=Node-01128b1a-90aa-bca3-fb84-d95696e6540e
>         writer.go:29: 2020-02-23T02:46:00.224Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:00.227Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:00.227Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.245Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.248Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.248Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Check in sync: check=check_1
>         writer.go:29: 2020-02-23T02:46:00.248Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Check in sync: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.253Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.256Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.256Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Check in sync: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.256Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Check in sync: check=check_3
>         writer.go:29: 2020-02-23T02:46:00.256Z [DEBUG] TestAgent_RegisterService_ReRegister/normal: Check in sync: check=check_1
>         writer.go:29: 2020-02-23T02:46:00.256Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:00.256Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:00.256Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.256Z [WARN]  TestAgent_RegisterService_ReRegister/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.256Z [DEBUG] TestAgent_RegisterService_ReRegister/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.257Z [WARN]  TestAgent_RegisterService_ReRegister/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Stopping server: protocol=DNS address=127.0.0.1:16139 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Stopping server: protocol=DNS address=127.0.0.1:16139 network=udp
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Stopping server: protocol=HTTP address=127.0.0.1:16140 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:00.259Z [INFO]  TestAgent_RegisterService_ReRegister/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_ReRegister/service_manager (0.34s)
>         writer.go:29: 2020-02-23T02:46:00.056Z [WARN]  TestAgent_RegisterService_ReRegister/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:00.056Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:00.057Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:00.073Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7 Address:127.0.0.1:16150}]"
>         writer.go:29: 2020-02-23T02:46:00.073Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.serf.wan: serf: EventMemberJoin: Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.073Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.serf.lan: serf: EventMemberJoin: Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Started DNS server: address=127.0.0.1:16145 network=udp
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16150 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server: Adding LAN server: server="Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7 (Addr: tcp/127.0.0.1:16150) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server: Handled event for server in area: event=member-join server=Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Started DNS server: address=127.0.0.1:16145 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Started HTTP server: address=127.0.0.1:16146 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.074Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:00.119Z [WARN]  TestAgent_RegisterService_ReRegister/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:00.119Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16150 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:00.123Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:00.123Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.server.raft: vote granted: from=1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:00.123Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:00.123Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16150 [Leader]"
>         writer.go:29: 2020-02-23T02:46:00.123Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:00.123Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server: New leader elected: payload=Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7
>         writer.go:29: 2020-02-23T02:46:00.130Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:00.138Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:00.138Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.138Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7
>         writer.go:29: 2020-02-23T02:46:00.138Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server: member joined, marking health alive: member=Node-1c0ec2f6-4ecb-83bf-b338-e44bbb22f5e7
>         writer.go:29: 2020-02-23T02:46:00.251Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:00.253Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:00.351Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.367Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.367Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Check in sync: check=check_1
>         writer.go:29: 2020-02-23T02:46:00.367Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Check in sync: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.377Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.380Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.380Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Check in sync: check=check_3
>         writer.go:29: 2020-02-23T02:46:00.380Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Check in sync: check=check_1
>         writer.go:29: 2020-02-23T02:46:00.380Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager: Check in sync: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.380Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:00.380Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:00.380Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.380Z [WARN]  TestAgent_RegisterService_ReRegister/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.380Z [DEBUG] TestAgent_RegisterService_ReRegister/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.382Z [WARN]  TestAgent_RegisterService_ReRegister/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16145 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16145 network=udp
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16146 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:00.384Z [INFO]  TestAgent_RegisterService_ReRegister/service_manager: Endpoints down
> === RUN   TestAgent_RegisterService_ReRegister_ReplaceExistingChecks
> === RUN   TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal
> === PAUSE TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal
> === RUN   TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager
> === PAUSE TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager
> === CONT  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal
> === CONT  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager
> --- PASS: TestAgent_RegisterService_ReRegister_ReplaceExistingChecks (0.00s)
>     --- PASS: TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal (0.16s)
>         writer.go:29: 2020-02-23T02:46:00.391Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:00.391Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:00.392Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:00.416Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9fcfc267-4f21-e83e-be84-1bfd0e012b52 Address:127.0.0.1:16156}]"
>         writer.go:29: 2020-02-23T02:46:00.417Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.serf.wan: serf: EventMemberJoin: Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.417Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.serf.lan: serf: EventMemberJoin: Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.419Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16156 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:00.419Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Started DNS server: address=127.0.0.1:16151 network=udp
>         writer.go:29: 2020-02-23T02:46:00.419Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: Handled event for server in area: event=member-join server=Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.420Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: Adding LAN server: server="Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52 (Addr: tcp/127.0.0.1:16156) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:00.420Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Started DNS server: address=127.0.0.1:16151 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.421Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Started HTTP server: address=127.0.0.1:16152 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.421Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:00.462Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:00.462Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16156 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:00.467Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:00.467Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: vote granted: from=9fcfc267-4f21-e83e-be84-1bfd0e012b52 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:00.467Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:00.467Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16156 [Leader]"
>         writer.go:29: 2020-02-23T02:46:00.468Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:00.468Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: New leader elected: payload=Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52
>         writer.go:29: 2020-02-23T02:46:00.471Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:00.471Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.479Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:00.485Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:00.485Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.485Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: Skipping self join check for node since the cluster is too small: node=Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52
>         writer.go:29: 2020-02-23T02:46:00.485Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: member joined, marking health alive: member=Node-9fcfc267-4f21-e83e-be84-1bfd0e012b52
>         writer.go:29: 2020-02-23T02:46:00.519Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.522Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.522Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:00.522Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Check in sync: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.529Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: removed check: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.529Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.531Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.531Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:00.536Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Deregistered check: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.536Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Check in sync: check=check_3
>         writer.go:29: 2020-02-23T02:46:00.536Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:00.536Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:00.536Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.536Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.536Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.538Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Stopping server: protocol=DNS address=127.0.0.1:16151 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Stopping server: protocol=DNS address=127.0.0.1:16151 network=udp
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Stopping server: protocol=HTTP address=127.0.0.1:16152 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:00.541Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager (0.51s)
>         writer.go:29: 2020-02-23T02:46:00.422Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:00.422Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:00.423Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:00.434Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4d44c883-4a58-6573-ffad-61145f6e166f Address:127.0.0.1:16162}]"
>         writer.go:29: 2020-02-23T02:46:00.434Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16162 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:00.434Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.serf.wan: serf: EventMemberJoin: Node-4d44c883-4a58-6573-ffad-61145f6e166f.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.434Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.serf.lan: serf: EventMemberJoin: Node-4d44c883-4a58-6573-ffad-61145f6e166f 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.435Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Started DNS server: address=127.0.0.1:16157 network=udp
>         writer.go:29: 2020-02-23T02:46:00.435Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: Adding LAN server: server="Node-4d44c883-4a58-6573-ffad-61145f6e166f (Addr: tcp/127.0.0.1:16162) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:00.435Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: Handled event for server in area: event=member-join server=Node-4d44c883-4a58-6573-ffad-61145f6e166f.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.435Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Started DNS server: address=127.0.0.1:16157 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.435Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Started HTTP server: address=127.0.0.1:16158 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.435Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:00.500Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:00.500Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16162 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:00.503Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:00.503Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: vote granted: from=4d44c883-4a58-6573-ffad-61145f6e166f term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:00.503Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:00.503Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16162 [Leader]"
>         writer.go:29: 2020-02-23T02:46:00.505Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:00.506Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: New leader elected: payload=Node-4d44c883-4a58-6573-ffad-61145f6e166f
>         writer.go:29: 2020-02-23T02:46:00.511Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:00.520Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:00.520Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.520Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-4d44c883-4a58-6573-ffad-61145f6e166f
>         writer.go:29: 2020-02-23T02:46:00.520Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: member joined, marking health alive: member=Node-4d44c883-4a58-6573-ffad-61145f6e166f
>         writer.go:29: 2020-02-23T02:46:00.662Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:00.664Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:00.664Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.876Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.879Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.879Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Check in sync: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.879Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:00.885Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: removed check: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.885Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:00.888Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:00.888Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:00.890Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Deregistered check: check=check_2
>         writer.go:29: 2020-02-23T02:46:00.890Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Check in sync: check=check_3
>         writer.go:29: 2020-02-23T02:46:00.890Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:00.890Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:00.890Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.890Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.890Z [DEBUG] TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:00.893Z [WARN]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16157 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16157 network=udp
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16158 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.895Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:00.896Z [INFO]  TestAgent_RegisterService_ReRegister_ReplaceExistingChecks/service_manager: Endpoints down
> === RUN   TestAgent_RegisterService_TranslateKeys
> === RUN   TestAgent_RegisterService_TranslateKeys/normal
> === PAUSE TestAgent_RegisterService_TranslateKeys/normal
> === RUN   TestAgent_RegisterService_TranslateKeys/service_manager
> === PAUSE TestAgent_RegisterService_TranslateKeys/service_manager
> === CONT  TestAgent_RegisterService_TranslateKeys/normal
> === CONT  TestAgent_RegisterService_TranslateKeys/service_manager
> === RUN   TestAgent_RegisterService_TranslateKeys/normal/no_token
> === RUN   TestAgent_RegisterService_TranslateKeys/normal/root_token
> === RUN   TestAgent_RegisterService_TranslateKeys/service_manager/no_token
> === RUN   TestAgent_RegisterService_TranslateKeys/service_manager/root_token
> --- PASS: TestAgent_RegisterService_TranslateKeys (0.00s)
>     --- PASS: TestAgent_RegisterService_TranslateKeys/normal (0.20s)
>         writer.go:29: 2020-02-23T02:46:00.922Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:00.922Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:00.922Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:00.923Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:00.932Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:31dd88a2-057c-c151-911f-edf67766f5d7 Address:127.0.0.1:16168}]"
>         writer.go:29: 2020-02-23T02:46:00.932Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.wan: serf: EventMemberJoin: Node-31dd88a2-057c-c151-911f-edf67766f5d7.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.932Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.lan: serf: EventMemberJoin: Node-31dd88a2-057c-c151-911f-edf67766f5d7 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Started DNS server: address=127.0.0.1:16163 network=udp
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16168 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Adding LAN server: server="Node-31dd88a2-057c-c151-911f-edf67766f5d7 (Addr: tcp/127.0.0.1:16168) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Handled event for server in area: event=member-join server=Node-31dd88a2-057c-c151-911f-edf67766f5d7.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Started DNS server: address=127.0.0.1:16163 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Started HTTP server: address=127.0.0.1:16164 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.933Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:00.978Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:00.978Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16168 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:00.981Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:00.981Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.server.raft: vote granted: from=31dd88a2-057c-c151-911f-edf67766f5d7 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:00.981Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:00.981Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16168 [Leader]"
>         writer.go:29: 2020-02-23T02:46:00.981Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:00.982Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: New leader elected: payload=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:00.983Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:00.984Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:00.988Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:00.988Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:00.988Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:00.988Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:00.993Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:00.993Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:00.995Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:00.995Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:00.995Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:00.995Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.lan: serf: EventMemberUpdate: Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:00.995Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.wan: serf: EventMemberUpdate: Node-31dd88a2-057c-c151-911f-edf67766f5d7.dc1
>         writer.go:29: 2020-02-23T02:46:00.996Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:00.996Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.server: transitioning out of legacy ACL mode
>         writer.go:29: 2020-02-23T02:46:00.996Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Handled event for server in area: event=member-update server=Node-31dd88a2-057c-c151-911f-edf67766f5d7.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.996Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.lan: serf: EventMemberUpdate: Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:00.996Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.wan: serf: EventMemberUpdate: Node-31dd88a2-057c-c151-911f-edf67766f5d7.dc1
>         writer.go:29: 2020-02-23T02:46:00.996Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: Handled event for server in area: event=member-update server=Node-31dd88a2-057c-c151-911f-edf67766f5d7.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.001Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.007Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.008Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.008Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.server: Skipping self join check for node since the cluster is too small: node=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:01.008Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: member joined, marking health alive: member=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:01.015Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.server: Skipping self join check for node since the cluster is too small: node=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:01.015Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.server: Skipping self join check for node since the cluster is too small: node=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:01.077Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.acl: dropping node from result due to ACLs: node=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         writer.go:29: 2020-02-23T02:46:01.077Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.acl: dropping node from result due to ACLs: node=Node-31dd88a2-057c-c151-911f-edf67766f5d7
>         --- PASS: TestAgent_RegisterService_TranslateKeys/normal/no_token (0.00s)
>         writer.go:29: 2020-02-23T02:46:01.089Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.092Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal: Check in sync: check=service:test:2
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal: Check in sync: check=service:test:3
>         --- PASS: TestAgent_RegisterService_TranslateKeys/normal/root_token (0.01s)
>         writer.go:29: 2020-02-23T02:46:01.092Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.092Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.092Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.092Z [ERROR] TestAgent_RegisterService_TranslateKeys/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.095Z [WARN]  TestAgent_RegisterService_TranslateKeys/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.096Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Stopping server: protocol=DNS address=127.0.0.1:16163 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Stopping server: protocol=DNS address=127.0.0.1:16163 network=udp
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Stopping server: protocol=HTTP address=127.0.0.1:16164 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.097Z [INFO]  TestAgent_RegisterService_TranslateKeys/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_TranslateKeys/service_manager (0.21s)
>         writer.go:29: 2020-02-23T02:46:00.923Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:00.923Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:00.923Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:00.924Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:00.939Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:eddb7251-67cf-a244-aa13-a7741e726632 Address:127.0.0.1:16174}]"
>         writer.go:29: 2020-02-23T02:46:00.939Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16174 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:00.940Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.wan: serf: EventMemberJoin: Node-eddb7251-67cf-a244-aa13-a7741e726632.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.940Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.lan: serf: EventMemberJoin: Node-eddb7251-67cf-a244-aa13-a7741e726632 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:00.940Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Handled event for server in area: event=member-join server=Node-eddb7251-67cf-a244-aa13-a7741e726632.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.940Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Adding LAN server: server="Node-eddb7251-67cf-a244-aa13-a7741e726632 (Addr: tcp/127.0.0.1:16174) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:00.940Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Started DNS server: address=127.0.0.1:16169 network=udp
>         writer.go:29: 2020-02-23T02:46:00.940Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Started DNS server: address=127.0.0.1:16169 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.941Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Started HTTP server: address=127.0.0.1:16170 network=tcp
>         writer.go:29: 2020-02-23T02:46:00.941Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:00.980Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:00.980Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16174 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:00.983Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:00.983Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: vote granted: from=eddb7251-67cf-a244-aa13-a7741e726632 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:00.983Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:00.983Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16174 [Leader]"
>         writer.go:29: 2020-02-23T02:46:00.983Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:00.983Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: New leader elected: payload=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:00.986Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:00.987Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:00.987Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:00.989Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:00.990Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:00.990Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:00.994Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:00.994Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:00.994Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:00.994Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.lan: serf: EventMemberUpdate: Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:00.994Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.wan: serf: EventMemberUpdate: Node-eddb7251-67cf-a244-aa13-a7741e726632.dc1
>         writer.go:29: 2020-02-23T02:46:00.994Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Handled event for server in area: event=member-update server=Node-eddb7251-67cf-a244-aa13-a7741e726632.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.997Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:00.997Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.server: transitioning out of legacy ACL mode
>         writer.go:29: 2020-02-23T02:46:00.997Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.lan: serf: EventMemberUpdate: Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:00.997Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.wan: serf: EventMemberUpdate: Node-eddb7251-67cf-a244-aa13-a7741e726632.dc1
>         writer.go:29: 2020-02-23T02:46:00.997Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: Handled event for server in area: event=member-update server=Node-eddb7251-67cf-a244-aa13-a7741e726632.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:00.999Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.006Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.006Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.007Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:01.007Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: member joined, marking health alive: member=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:01.008Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:01.009Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.acl: dropping node from result due to ACLs: node=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         writer.go:29: 2020-02-23T02:46:01.092Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.acl: dropping node from result due to ACLs: node=Node-eddb7251-67cf-a244-aa13-a7741e726632
>         --- PASS: TestAgent_RegisterService_TranslateKeys/service_manager/no_token (0.00s)
>         writer.go:29: 2020-02-23T02:46:01.102Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.105Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager: Check in sync: check=service:test:2
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager: Check in sync: check=service:test:3
>         --- PASS: TestAgent_RegisterService_TranslateKeys/service_manager/root_token (0.01s)
>         writer.go:29: 2020-02-23T02:46:01.105Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.105Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.105Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.105Z [ERROR] TestAgent_RegisterService_TranslateKeys/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.105Z [DEBUG] TestAgent_RegisterService_TranslateKeys/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.107Z [WARN]  TestAgent_RegisterService_TranslateKeys/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16169 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16169 network=udp
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16170 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.109Z [INFO]  TestAgent_RegisterService_TranslateKeys/service_manager: Endpoints down
> === RUN   TestAgent_RegisterService_ACLDeny
> === RUN   TestAgent_RegisterService_ACLDeny/normal
> === PAUSE TestAgent_RegisterService_ACLDeny/normal
> === RUN   TestAgent_RegisterService_ACLDeny/service_manager
> === PAUSE TestAgent_RegisterService_ACLDeny/service_manager
> === CONT  TestAgent_RegisterService_ACLDeny/normal
> === CONT  TestAgent_RegisterService_ACLDeny/service_manager
> === RUN   TestAgent_RegisterService_ACLDeny/service_manager/no_token
> === RUN   TestAgent_RegisterService_ACLDeny/service_manager/root_token
> === RUN   TestAgent_RegisterService_ACLDeny/normal/no_token
> === RUN   TestAgent_RegisterService_ACLDeny/normal/root_token
> --- PASS: TestAgent_RegisterService_ACLDeny (0.00s)
>     --- PASS: TestAgent_RegisterService_ACLDeny/service_manager (0.15s)
>         writer.go:29: 2020-02-23T02:46:01.124Z [WARN]  TestAgent_RegisterService_ACLDeny/service_manager: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:01.124Z [WARN]  TestAgent_RegisterService_ACLDeny/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:01.125Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:01.126Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:01.142Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc067964-60e9-3a38-66d8-9260a666bc2e Address:127.0.0.1:16186}]"
>         writer.go:29: 2020-02-23T02:46:01.143Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.serf.wan: serf: EventMemberJoin: Node-bc067964-60e9-3a38-66d8-9260a666bc2e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.143Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.serf.lan: serf: EventMemberJoin: Node-bc067964-60e9-3a38-66d8-9260a666bc2e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.143Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Started DNS server: address=127.0.0.1:16181 network=udp
>         writer.go:29: 2020-02-23T02:46:01.143Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16186 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:01.144Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: Adding LAN server: server="Node-bc067964-60e9-3a38-66d8-9260a666bc2e (Addr: tcp/127.0.0.1:16186) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:01.144Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: Handled event for server in area: event=member-join server=Node-bc067964-60e9-3a38-66d8-9260a666bc2e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.144Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Started DNS server: address=127.0.0.1:16181 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.144Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Started HTTP server: address=127.0.0.1:16182 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.144Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:01.191Z [WARN]  TestAgent_RegisterService_ACLDeny/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:01.191Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16186 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:01.194Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:01.194Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.server.raft: vote granted: from=bc067964-60e9-3a38-66d8-9260a666bc2e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:01.194Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:01.194Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16186 [Leader]"
>         writer.go:29: 2020-02-23T02:46:01.194Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:01.194Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: New leader elected: payload=Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         writer.go:29: 2020-02-23T02:46:01.199Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:01.200Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:01.200Z [WARN]  TestAgent_RegisterService_ACLDeny/service_manager.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:01.212Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:01.218Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:01.218Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.218Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.218Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.serf.lan: serf: EventMemberUpdate: Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         writer.go:29: 2020-02-23T02:46:01.218Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.serf.wan: serf: EventMemberUpdate: Node-bc067964-60e9-3a38-66d8-9260a666bc2e.dc1
>         writer.go:29: 2020-02-23T02:46:01.218Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: Handled event for server in area: event=member-update server=Node-bc067964-60e9-3a38-66d8-9260a666bc2e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.223Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.230Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.230Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.230Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         writer.go:29: 2020-02-23T02:46:01.230Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: member joined, marking health alive: member=Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         writer.go:29: 2020-02-23T02:46:01.232Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         writer.go:29: 2020-02-23T02:46:01.239Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.acl: dropping node from result due to ACLs: node=Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         writer.go:29: 2020-02-23T02:46:01.239Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.acl: dropping node from result due to ACLs: node=Node-bc067964-60e9-3a38-66d8-9260a666bc2e
>         --- PASS: TestAgent_RegisterService_ACLDeny/service_manager/no_token (0.00s)
>         writer.go:29: 2020-02-23T02:46:01.255Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.258Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager: Check in sync: check=service:test:2
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager: Check in sync: check=service:test:3
>         --- PASS: TestAgent_RegisterService_ACLDeny/service_manager/root_token (0.02s)
>         writer.go:29: 2020-02-23T02:46:01.258Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.258Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.258Z [WARN]  TestAgent_RegisterService_ACLDeny/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.258Z [ERROR] TestAgent_RegisterService_ACLDeny/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.258Z [DEBUG] TestAgent_RegisterService_ACLDeny/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.260Z [WARN]  TestAgent_RegisterService_ACLDeny/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16181 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16181 network=udp
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16182 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.262Z [INFO]  TestAgent_RegisterService_ACLDeny/service_manager: Endpoints down
>     --- PASS: TestAgent_RegisterService_ACLDeny/normal (0.28s)
>         writer.go:29: 2020-02-23T02:46:01.132Z [WARN]  TestAgent_RegisterService_ACLDeny/normal: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:01.132Z [WARN]  TestAgent_RegisterService_ACLDeny/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:01.132Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:01.133Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:01.153Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8049af04-277d-b0ea-8876-5d145ac27757 Address:127.0.0.1:16180}]"
>         writer.go:29: 2020-02-23T02:46:01.153Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.serf.wan: serf: EventMemberJoin: Node-8049af04-277d-b0ea-8876-5d145ac27757.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.153Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.serf.lan: serf: EventMemberJoin: Node-8049af04-277d-b0ea-8876-5d145ac27757 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Started DNS server: address=127.0.0.1:16175 network=udp
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16180 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Adding LAN server: server="Node-8049af04-277d-b0ea-8876-5d145ac27757 (Addr: tcp/127.0.0.1:16180) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Handled event for server in area: event=member-join server=Node-8049af04-277d-b0ea-8876-5d145ac27757.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Started DNS server: address=127.0.0.1:16175 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Started HTTP server: address=127.0.0.1:16176 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.154Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:01.193Z [WARN]  TestAgent_RegisterService_ACLDeny/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:01.193Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16180 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:01.197Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:01.197Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.server.raft: vote granted: from=8049af04-277d-b0ea-8876-5d145ac27757 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:01.197Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:01.197Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16180 [Leader]"
>         writer.go:29: 2020-02-23T02:46:01.198Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:01.199Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: New leader elected: payload=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.199Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:01.201Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:01.201Z [WARN]  TestAgent_RegisterService_ACLDeny/normal.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:01.204Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:01.204Z [WARN]  TestAgent_RegisterService_ACLDeny/normal.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:01.212Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:01.217Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.221Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.server: transitioning out of legacy ACL mode
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.serf.lan: serf: EventMemberUpdate: Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.serf.wan: serf: EventMemberUpdate: Node-8049af04-277d-b0ea-8876-5d145ac27757.dc1
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Handled event for server in area: event=member-update server=Node-8049af04-277d-b0ea-8876-5d145ac27757.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.serf.lan: serf: EventMemberUpdate: Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.serf.wan: serf: EventMemberUpdate: Node-8049af04-277d-b0ea-8876-5d145ac27757.dc1
>         writer.go:29: 2020-02-23T02:46:01.221Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: Handled event for server in area: event=member-update server=Node-8049af04-277d-b0ea-8876-5d145ac27757.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.225Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.233Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.233Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.233Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.server: Skipping self join check for node since the cluster is too small: node=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.233Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: member joined, marking health alive: member=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.234Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.server: Skipping self join check for node since the cluster is too small: node=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.234Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.server: Skipping self join check for node since the cluster is too small: node=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.372Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.acl: dropping node from result due to ACLs: node=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         writer.go:29: 2020-02-23T02:46:01.372Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.acl: dropping node from result due to ACLs: node=Node-8049af04-277d-b0ea-8876-5d145ac27757
>         --- PASS: TestAgent_RegisterService_ACLDeny/normal/no_token (0.00s)
>         writer.go:29: 2020-02-23T02:46:01.383Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.386Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Synced service: service=test
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal: Check in sync: check=service:test:1
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal: Check in sync: check=service:test:2
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal: Check in sync: check=service:test:3
>         --- PASS: TestAgent_RegisterService_ACLDeny/normal/root_token (0.01s)
>         writer.go:29: 2020-02-23T02:46:01.386Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.386Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.386Z [WARN]  TestAgent_RegisterService_ACLDeny/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.386Z [ERROR] TestAgent_RegisterService_ACLDeny/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:01.386Z [DEBUG] TestAgent_RegisterService_ACLDeny/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.388Z [WARN]  TestAgent_RegisterService_ACLDeny/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Stopping server: protocol=DNS address=127.0.0.1:16175 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Stopping server: protocol=DNS address=127.0.0.1:16175 network=udp
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Stopping server: protocol=HTTP address=127.0.0.1:16176 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.389Z [INFO]  TestAgent_RegisterService_ACLDeny/normal: Endpoints down
> === RUN   TestAgent_RegisterService_InvalidAddress
> === RUN   TestAgent_RegisterService_InvalidAddress/normal
> === PAUSE TestAgent_RegisterService_InvalidAddress/normal
> === RUN   TestAgent_RegisterService_InvalidAddress/service_manager
> === PAUSE TestAgent_RegisterService_InvalidAddress/service_manager
> === CONT  TestAgent_RegisterService_InvalidAddress/normal
> === CONT  TestAgent_RegisterService_InvalidAddress/service_manager
> === RUN   TestAgent_RegisterService_InvalidAddress/service_manager/addr_0.0.0.0
> === RUN   TestAgent_RegisterService_InvalidAddress/service_manager/addr_::
> === RUN   TestAgent_RegisterService_InvalidAddress/service_manager/addr_[::]
> --- PASS: TestAgent_RegisterService_InvalidAddress (0.00s)
>     --- PASS: TestAgent_RegisterService_InvalidAddress/normal (0.28s)
>         writer.go:29: 2020-02-23T02:46:01.417Z [WARN]  TestAgent_RegisterService_InvalidAddress/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:01.417Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:01.417Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:01.432Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6e5efc32-1016-d40c-00af-38ee2ca8a813 Address:127.0.0.1:16192}]"
>         writer.go:29: 2020-02-23T02:46:01.433Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.serf.wan: serf: EventMemberJoin: Node-6e5efc32-1016-d40c-00af-38ee2ca8a813.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.433Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.serf.lan: serf: EventMemberJoin: Node-6e5efc32-1016-d40c-00af-38ee2ca8a813 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16192 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server: Adding LAN server: server="Node-6e5efc32-1016-d40c-00af-38ee2ca8a813 (Addr: tcp/127.0.0.1:16192) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server: Handled event for server in area: event=member-join server=Node-6e5efc32-1016-d40c-00af-38ee2ca8a813.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Started DNS server: address=127.0.0.1:16187 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Started DNS server: address=127.0.0.1:16187 network=udp
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Started HTTP server: address=127.0.0.1:16188 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.434Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:01.491Z [WARN]  TestAgent_RegisterService_InvalidAddress/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:01.491Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16192 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:01.496Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:01.496Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.server.raft: vote granted: from=6e5efc32-1016-d40c-00af-38ee2ca8a813 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:01.496Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:01.496Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16192 [Leader]"
>         writer.go:29: 2020-02-23T02:46:01.496Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:01.496Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server: New leader elected: payload=Node-6e5efc32-1016-d40c-00af-38ee2ca8a813
>         writer.go:29: 2020-02-23T02:46:01.508Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.516Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.516Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.516Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.server: Skipping self join check for node since the cluster is too small: node=Node-6e5efc32-1016-d40c-00af-38ee2ca8a813
>         writer.go:29: 2020-02-23T02:46:01.516Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server: member joined, marking health alive: member=Node-6e5efc32-1016-d40c-00af-38ee2ca8a813
>         writer.go:29: 2020-02-23T02:46:01.651Z [ERROR] TestAgent_RegisterService_InvalidAddress/normal.proxycfg: watch error: id=service-http-checks: error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>         writer.go:29: 2020-02-23T02:46:01.655Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.659Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Synced service: service=connect-proxy
>         writer.go:29: 2020-02-23T02:46:01.659Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.659Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.659Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.659Z [WARN]  TestAgent_RegisterService_InvalidAddress/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.659Z [ERROR] TestAgent_RegisterService_InvalidAddress/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:01.659Z [DEBUG] TestAgent_RegisterService_InvalidAddress/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.661Z [WARN]  TestAgent_RegisterService_InvalidAddress/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.663Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.663Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:01.663Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.663Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Stopping server: protocol=DNS address=127.0.0.1:16187 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.663Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Stopping server: protocol=DNS address=127.0.0.1:16187 network=udp
>         writer.go:29: 2020-02-23T02:46:01.663Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Stopping server: protocol=HTTP address=127.0.0.1:16188 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.667Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.667Z [INFO]  TestAgent_RegisterService_InvalidAddress/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_InvalidAddress/service_manager (0.37s)
>         writer.go:29: 2020-02-23T02:46:01.430Z [WARN]  TestAgent_RegisterService_InvalidAddress/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:01.430Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:01.431Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:01.444Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b772ffeb-ade1-3ef2-a22b-d2658dde9666 Address:127.0.0.1:16198}]"
>         writer.go:29: 2020-02-23T02:46:01.444Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16198 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:01.444Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.serf.wan: serf: EventMemberJoin: Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.445Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.serf.lan: serf: EventMemberJoin: Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.445Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server: Handled event for server in area: event=member-join server=Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.445Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server: Adding LAN server: server="Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666 (Addr: tcp/127.0.0.1:16198) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:01.445Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Started DNS server: address=127.0.0.1:16193 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.445Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Started DNS server: address=127.0.0.1:16193 network=udp
>         writer.go:29: 2020-02-23T02:46:01.446Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Started HTTP server: address=127.0.0.1:16194 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.446Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:01.482Z [WARN]  TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:01.482Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16198 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:01.485Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:01.485Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: vote granted: from=b772ffeb-ade1-3ef2-a22b-d2658dde9666 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:01.485Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:01.485Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16198 [Leader]"
>         writer.go:29: 2020-02-23T02:46:01.485Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:01.485Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server: New leader elected: payload=Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666
>         writer.go:29: 2020-02-23T02:46:01.493Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.500Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.500Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.500Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666
>         writer.go:29: 2020-02-23T02:46:01.500Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server: member joined, marking health alive: member=Node-b772ffeb-ade1-3ef2-a22b-d2658dde9666
>         writer.go:29: 2020-02-23T02:46:01.671Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:01.674Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.674Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager: Node info in sync
>         --- PASS: TestAgent_RegisterService_InvalidAddress/service_manager/addr_0.0.0.0 (0.00s)
>         --- PASS: TestAgent_RegisterService_InvalidAddress/service_manager/addr_:: (0.00s)
>         --- PASS: TestAgent_RegisterService_InvalidAddress/service_manager/addr_[::] (0.00s)
>         writer.go:29: 2020-02-23T02:46:01.752Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.752Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.752Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.752Z [WARN]  TestAgent_RegisterService_InvalidAddress/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.752Z [DEBUG] TestAgent_RegisterService_InvalidAddress/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.754Z [WARN]  TestAgent_RegisterService_InvalidAddress/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16193 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16193 network=udp
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16194 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.756Z [INFO]  TestAgent_RegisterService_InvalidAddress/service_manager: Endpoints down
> === RUN   TestAgent_RegisterService_UnmanagedConnectProxy
> === RUN   TestAgent_RegisterService_UnmanagedConnectProxy/normal
> === PAUSE TestAgent_RegisterService_UnmanagedConnectProxy/normal
> === RUN   TestAgent_RegisterService_UnmanagedConnectProxy/service_manager
> === PAUSE TestAgent_RegisterService_UnmanagedConnectProxy/service_manager
> === CONT  TestAgent_RegisterService_UnmanagedConnectProxy/normal
> === CONT  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager
> --- PASS: TestAgent_RegisterService_UnmanagedConnectProxy (0.00s)
>     --- PASS: TestAgent_RegisterService_UnmanagedConnectProxy/service_manager (0.19s)
>         writer.go:29: 2020-02-23T02:46:01.772Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:01.772Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:01.773Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:01.798Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fa6d0112-b935-e71f-eab7-d83d7ae65b8a Address:127.0.0.1:16210}]"
>         writer.go:29: 2020-02-23T02:46:01.798Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16210 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:01.799Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.serf.wan: serf: EventMemberJoin: Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.799Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.serf.lan: serf: EventMemberJoin: Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.799Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Started DNS server: address=127.0.0.1:16205 network=udp
>         writer.go:29: 2020-02-23T02:46:01.800Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: Adding LAN server: server="Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a (Addr: tcp/127.0.0.1:16210) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:01.800Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: Handled event for server in area: event=member-join server=Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.800Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Started DNS server: address=127.0.0.1:16205 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.800Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Started HTTP server: address=127.0.0.1:16206 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.800Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:01.860Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:01.860Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16210 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:01.864Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:01.864Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: vote granted: from=fa6d0112-b935-e71f-eab7-d83d7ae65b8a term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16210 [Leader]"
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: New leader elected: payload=Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a
>         writer.go:29: 2020-02-23T02:46:01.875Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.884Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.884Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.884Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a
>         writer.go:29: 2020-02-23T02:46:01.884Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: member joined, marking health alive: member=Node-fa6d0112-b935-e71f-eab7-d83d7ae65b8a
>         writer.go:29: 2020-02-23T02:46:01.933Z [ERROR] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.proxycfg: watch error: id=service-http-checks: error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>         writer.go:29: 2020-02-23T02:46:01.937Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:01.941Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Synced service: service=connect-proxy
>         writer.go:29: 2020-02-23T02:46:01.941Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:01.941Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:01.941Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.941Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.941Z [ERROR] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:01.941Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.943Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16205 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16205 network=udp
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16206 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:01.945Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/service_manager: Endpoints down
>     --- PASS: TestAgent_RegisterService_UnmanagedConnectProxy/normal (0.32s)
>         writer.go:29: 2020-02-23T02:46:01.772Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:01.772Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:01.773Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:01.796Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060 Address:127.0.0.1:16204}]"
>         writer.go:29: 2020-02-23T02:46:01.796Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16204 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:01.797Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.serf.wan: serf: EventMemberJoin: Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.797Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.serf.lan: serf: EventMemberJoin: Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:01.797Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: Adding LAN server: server="Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060 (Addr: tcp/127.0.0.1:16204) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:01.797Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Started DNS server: address=127.0.0.1:16199 network=udp
>         writer.go:29: 2020-02-23T02:46:01.798Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: Handled event for server in area: event=member-join server=Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:01.798Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Started DNS server: address=127.0.0.1:16199 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.798Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Started HTTP server: address=127.0.0.1:16200 network=tcp
>         writer.go:29: 2020-02-23T02:46:01.798Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:01.861Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:01.861Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16204 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:01.864Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:01.864Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: vote granted: from=1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16204 [Leader]"
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:01.864Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: New leader elected: payload=Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060
>         writer.go:29: 2020-02-23T02:46:01.873Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:01.882Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:01.882Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:01.882Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: Skipping self join check for node since the cluster is too small: node=Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060
>         writer.go:29: 2020-02-23T02:46:01.882Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: member joined, marking health alive: member=Node-1d60f2e3-f2b1-1cfb-378b-0e9b5f2da060
>         writer.go:29: 2020-02-23T02:46:02.066Z [ERROR] TestAgent_RegisterService_UnmanagedConnectProxy/normal.proxycfg: watch error: id=service-http-checks: error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>         writer.go:29: 2020-02-23T02:46:02.071Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:02.074Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Synced service: service=connect-proxy
>         writer.go:29: 2020-02-23T02:46:02.074Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:02.074Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:02.074Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:02.074Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:02.075Z [ERROR] TestAgent_RegisterService_UnmanagedConnectProxy/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:02.075Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxy/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:02.076Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Stopping server: protocol=DNS address=127.0.0.1:16199 network=tcp
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Stopping server: protocol=DNS address=127.0.0.1:16199 network=udp
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Stopping server: protocol=HTTP address=127.0.0.1:16200 network=tcp
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:02.078Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxy/normal: Endpoints down
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal
> === PAUSE TestAgent_RegisterServiceDeregisterService_Sidecar/normal
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager
> === PAUSE TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager
> === CONT  TestAgent_RegisterServiceDeregisterService_Sidecar/normal
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case
> === CONT  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work
> === RUN   TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it
> --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar (0.00s)
>     --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager (4.56s)
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case (0.13s)
>             writer.go:29: 2020-02-23T02:46:02.109Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.109Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.109Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.119Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:eef6e8e5-9295-10da-c3c7-4bc8a357a465 Address:127.0.0.1:16222}]"
>             writer.go:29: 2020-02-23T02:46:02.119Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.serf.wan: serf: EventMemberJoin: Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.120Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.serf.lan: serf: EventMemberJoin: Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.120Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Started DNS server: address=127.0.0.1:16217 network=udp
>             writer.go:29: 2020-02-23T02:46:02.120Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: entering follower state: follower="Node at 127.0.0.1:16222 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.120Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: Adding LAN server: server="Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465 (Addr: tcp/127.0.0.1:16222) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.120Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: Handled event for server in area: event=member-join server=Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Started DNS server: address=127.0.0.1:16217 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Started HTTP server: address=127.0.0.1:16218 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: started state syncer
>             writer.go:29: 2020-02-23T02:46:02.159Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:02.159Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: entering candidate state: node="Node at 127.0.0.1:16222 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:02.162Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:02.162Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: vote granted: from=eef6e8e5-9295-10da-c3c7-4bc8a357a465 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:02.162Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:02.162Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.raft: entering leader state: leader="Node at 127.0.0.1:16222 [Leader]"
>             writer.go:29: 2020-02-23T02:46:02.162Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:02.162Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: New leader elected: payload=Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465
>             writer.go:29: 2020-02-23T02:46:02.170Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:02.178Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:02.178Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.178Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: Skipping self join check for node since the cluster is too small: node=Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465
>             writer.go:29: 2020-02-23T02:46:02.178Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: member joined, marking health alive: member=Node-eef6e8e5-9295-10da-c3c7-4bc8a357a465
>             writer.go:29: 2020-02-23T02:46:02.201Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Synced node info
>             writer.go:29: 2020-02-23T02:46:02.202Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:02.202Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:02.202Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:02.202Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.202Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.203Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:02.203Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.205Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: consul server down
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: shutdown complete
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Stopping server: protocol=DNS address=127.0.0.1:16217 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Stopping server: protocol=DNS address=127.0.0.1:16217 network=udp
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Stopping server: protocol=HTTP address=127.0.0.1:16218 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:02.207Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/sanity_check_no_sidecar_case: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar (0.44s)
>             writer.go:29: 2020-02-23T02:46:02.236Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.236Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.237Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.374Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6302271f-8cec-f969-6db1-f38c5d71249b Address:127.0.0.1:16228}]"
>             writer.go:29: 2020-02-23T02:46:02.374Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.serf.wan: serf: EventMemberJoin: Node-6302271f-8cec-f969-6db1-f38c5d71249b.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.serf.lan: serf: EventMemberJoin: Node-6302271f-8cec-f969-6db1-f38c5d71249b 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16228 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Started DNS server: address=127.0.0.1:16223 network=udp
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: Handled event for server in area: event=member-join server=Node-6302271f-8cec-f969-6db1-f38c5d71249b.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: Adding LAN server: server="Node-6302271f-8cec-f969-6db1-f38c5d71249b (Addr: tcp/127.0.0.1:16228) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Started DNS server: address=127.0.0.1:16223 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Started HTTP server: address=127.0.0.1:16224 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:02.417Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:02.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16228 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:02.478Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:02.478Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: vote granted: from=6302271f-8cec-f969-6db1-f38c5d71249b term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:02.478Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:02.478Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16228 [Leader]"
>             writer.go:29: 2020-02-23T02:46:02.478Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:02.478Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: New leader elected: payload=Node-6302271f-8cec-f969-6db1-f38c5d71249b
>             writer.go:29: 2020-02-23T02:46:02.508Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:02.516Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:02.516Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.516Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-6302271f-8cec-f969-6db1-f38c5d71249b
>             writer.go:29: 2020-02-23T02:46:02.516Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: member joined, marking health alive: member=Node-6302271f-8cec-f969-6db1-f38c5d71249b
>             writer.go:29: 2020-02-23T02:46:02.629Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: added local registration for service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.630Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:02.632Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:02.635Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.635Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:02.635Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:02.642Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:02.642Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:02.643Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:02.643Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.643Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:02.644Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:02.645Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.646Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:02.646Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:02.646Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.646Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.646Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:02.646Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.648Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16223 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16223 network=udp
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16224 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.649Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:02.650Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/default_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults (0.24s)
>             writer.go:29: 2020-02-23T02:46:02.673Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:02.673Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.673Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.673Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.682Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4d10202f-10d7-ba21-2d7c-c5c61ba9c011 Address:127.0.0.1:16240}]"
>             writer.go:29: 2020-02-23T02:46:02.683Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.serf.wan: serf: EventMemberJoin: Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.683Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.serf.lan: serf: EventMemberJoin: Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.683Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Started DNS server: address=127.0.0.1:16235 network=udp
>             writer.go:29: 2020-02-23T02:46:02.683Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: entering follower state: follower="Node at 127.0.0.1:16240 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.684Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Adding LAN server: server="Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011 (Addr: tcp/127.0.0.1:16240) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.684Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Handled event for server in area: event=member-join server=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.684Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Started DNS server: address=127.0.0.1:16235 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.684Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Started HTTP server: address=127.0.0.1:16236 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.684Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: started state syncer
>             writer.go:29: 2020-02-23T02:46:02.719Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:02.719Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: entering candidate state: node="Node at 127.0.0.1:16240 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:02.749Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:02.749Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: vote granted: from=4d10202f-10d7-ba21-2d7c-c5c61ba9c011 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:02.749Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:02.749Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.raft: entering leader state: leader="Node at 127.0.0.1:16240 [Leader]"
>             writer.go:29: 2020-02-23T02:46:02.749Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:02.749Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: New leader elected: payload=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.751Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:02.752Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:02.752Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:02.755Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:02.759Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:02.759Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:02.759Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:02.759Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.serf.lan: serf: EventMemberUpdate: Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.759Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.serf.wan: serf: EventMemberUpdate: Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011.dc1
>             writer.go:29: 2020-02-23T02:46:02.759Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Handled event for server in area: event=member-update server=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.763Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:02.770Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:02.770Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.770Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Skipping self join check for node since the cluster is too small: node=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.770Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: member joined, marking health alive: member=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.773Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: Skipping self join check for node since the cluster is too small: node=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.acl: dropping node from result due to ACLs: node=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.acl: dropping node from result due to ACLs: node=Node-4d10202f-10d7-ba21-2d7c-c5c61ba9c011
>             writer.go:29: 2020-02-23T02:46:02.882Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: added local registration for service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.883Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Synced node info
>             writer.go:29: 2020-02-23T02:46:02.885Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:02.887Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.887Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:02.887Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:02.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:02.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:02.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:02.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Node info in sync
>             writer.go:29: 2020-02-23T02:46:02.896Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:02.898Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.898Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:02.898Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:02.898Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:02.898Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:02.898Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.898Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.899Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:02.899Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:02.899Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:02.899Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.900Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: consul server down
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: shutdown complete
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Stopping server: protocol=DNS address=127.0.0.1:16235 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Stopping server: protocol=DNS address=127.0.0.1:16235 network=udp
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Stopping server: protocol=HTTP address=127.0.0.1:16236 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:02.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_defaults: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied (0.30s)
>             writer.go:29: 2020-02-23T02:46:02.914Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:02.914Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.914Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.915Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.951Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:63f390c8-c373-982b-c26a-5cc609ec5a71 Address:127.0.0.1:16246}]"
>             writer.go:29: 2020-02-23T02:46:02.951Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: entering follower state: follower="Node at 127.0.0.1:16246 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.952Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.serf.wan: serf: EventMemberJoin: Node-63f390c8-c373-982b-c26a-5cc609ec5a71.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.952Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.serf.lan: serf: EventMemberJoin: Node-63f390c8-c373-982b-c26a-5cc609ec5a71 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Handled event for server in area: event=member-join server=Node-63f390c8-c373-982b-c26a-5cc609ec5a71.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Adding LAN server: server="Node-63f390c8-c373-982b-c26a-5cc609ec5a71 (Addr: tcp/127.0.0.1:16246) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Started DNS server: address=127.0.0.1:16241 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Started DNS server: address=127.0.0.1:16241 network=udp
>             writer.go:29: 2020-02-23T02:46:02.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Started HTTP server: address=127.0.0.1:16242 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.010Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.010Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: entering candidate state: node="Node at 127.0.0.1:16246 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.014Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.014Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: vote granted: from=63f390c8-c373-982b-c26a-5cc609ec5a71 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.014Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.014Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.raft: entering leader state: leader="Node at 127.0.0.1:16246 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.014Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.014Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: New leader elected: payload=Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.017Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.019Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.019Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.022Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.serf.lan: serf: EventMemberUpdate: Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.serf.wan: serf: EventMemberUpdate: Node-63f390c8-c373-982b-c26a-5cc609ec5a71.dc1
>             writer.go:29: 2020-02-23T02:46:03.026Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Handled event for server in area: event=member-update server=Node-63f390c8-c373-982b-c26a-5cc609ec5a71.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.029Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.036Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.036Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.036Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Skipping self join check for node since the cluster is too small: node=Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.036Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: member joined, marking health alive: member=Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.039Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: Skipping self join check for node since the cluster is too small: node=Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.200Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.acl: dropping node from result due to ACLs: node=Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.200Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.acl: dropping node from result due to ACLs: node=Node-63f390c8-c373-982b-c26a-5cc609ec5a71
>             writer.go:29: 2020-02-23T02:46:03.201Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:03.201Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:03.201Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.201Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.201Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.201Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:03.201Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.201Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.201Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.201Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.204Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: consul server down
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: shutdown complete
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Stopping server: protocol=DNS address=127.0.0.1:16241 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Stopping server: protocol=DNS address=127.0.0.1:16241 network=udp
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Stopping server: protocol=HTTP address=127.0.0.1:16242 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:03.206Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_denied: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar (0.17s)
>             writer.go:29: 2020-02-23T02:46:03.214Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:03.214Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:03.214Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:03.214Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:03.227Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:32ff9ae8-499b-2d70-2390-1aafc0dd4943 Address:127.0.0.1:16264}]"
>             writer.go:29: 2020-02-23T02:46:03.227Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16264 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:03.227Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.serf.wan: serf: EventMemberJoin: Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.228Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.serf.lan: serf: EventMemberJoin: Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.228Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Adding LAN server: server="Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943 (Addr: tcp/127.0.0.1:16264) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:03.228Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Handled event for server in area: event=member-join server=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.228Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Started DNS server: address=127.0.0.1:16259 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.228Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Started DNS server: address=127.0.0.1:16259 network=udp
>             writer.go:29: 2020-02-23T02:46:03.229Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Started HTTP server: address=127.0.0.1:16260 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.229Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.284Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.284Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16264 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.288Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.288Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: vote granted: from=32ff9ae8-499b-2d70-2390-1aafc0dd4943 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.288Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.288Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16264 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.288Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.288Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: New leader elected: payload=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.290Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.292Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.292Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.295Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.304Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.304Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.304Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.304Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.304Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943.dc1
>             writer.go:29: 2020-02-23T02:46:03.304Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Handled event for server in area: event=member-update server=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.308Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.314Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.314Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.314Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.314Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: member joined, marking health alive: member=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.318Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.362Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.acl: dropping node from result due to ACLs: node=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.362Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.acl: dropping node from result due to ACLs: node=Node-32ff9ae8-499b-2d70-2390-1aafc0dd4943
>             writer.go:29: 2020-02-23T02:46:03.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:03.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:03.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.368Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.368Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:03.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.370Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.371Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:03.371Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:03.371Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:03.371Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16259 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.371Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16259 network=udp
>             writer.go:29: 2020-02-23T02:46:03.371Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16260 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.372Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:03.372Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination (0.45s)
>             writer.go:29: 2020-02-23T02:46:03.395Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:03.395Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:03.395Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:03.396Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:03.406Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:564f8457-3967-3114-f345-0f3aebea7b9c Address:127.0.0.1:16276}]"
>             writer.go:29: 2020-02-23T02:46:03.406Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.wan: serf: EventMemberJoin: Node-564f8457-3967-3114-f345-0f3aebea7b9c.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.407Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.lan: serf: EventMemberJoin: Node-564f8457-3967-3114-f345-0f3aebea7b9c 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.407Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Started DNS server: address=127.0.0.1:16271 network=udp
>             writer.go:29: 2020-02-23T02:46:03.407Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: entering follower state: follower="Node at 127.0.0.1:16276 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:03.407Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Adding LAN server: server="Node-564f8457-3967-3114-f345-0f3aebea7b9c (Addr: tcp/127.0.0.1:16276) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:03.407Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Handled event for server in area: event=member-join server=Node-564f8457-3967-3114-f345-0f3aebea7b9c.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.407Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Started DNS server: address=127.0.0.1:16271 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.408Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Started HTTP server: address=127.0.0.1:16272 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.408Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.471Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.471Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: entering candidate state: node="Node at 127.0.0.1:16276 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.474Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.474Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: vote granted: from=564f8457-3967-3114-f345-0f3aebea7b9c term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.474Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.474Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: entering leader state: leader="Node at 127.0.0.1:16276 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.474Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.474Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: New leader elected: payload=Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.476Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.478Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.478Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.480Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.484Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.484Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.484Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.484Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.lan: serf: EventMemberUpdate: Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.484Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.wan: serf: EventMemberUpdate: Node-564f8457-3967-3114-f345-0f3aebea7b9c.dc1
>             writer.go:29: 2020-02-23T02:46:03.484Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Handled event for server in area: event=member-update server=Node-564f8457-3967-3114-f345-0f3aebea7b9c.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.489Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.496Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.496Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.496Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Skipping self join check for node since the cluster is too small: node=Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.496Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: member joined, marking health alive: member=Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.499Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Skipping self join check for node since the cluster is too small: node=Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.606Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:03.610Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Synced node info
>             writer.go:29: 2020-02-23T02:46:03.811Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.acl: dropping node from result due to ACLs: node=Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.811Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.acl: dropping node from result due to ACLs: node=Node-564f8457-3967-3114-f345-0f3aebea7b9c
>             writer.go:29: 2020-02-23T02:46:03.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:03.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:03.817Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.817Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.817Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.817Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.817Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.817Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.817Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.819Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: consul server down
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: shutdown complete
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Stopping server: protocol=DNS address=127.0.0.1:16271 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Stopping server: protocol=DNS address=127.0.0.1:16271 network=udp
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Stopping server: protocol=HTTP address=127.0.0.1:16272 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:03.821Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar (0.25s)
>             writer.go:29: 2020-02-23T02:46:03.828Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:03.828Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:03.829Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:03.829Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:03.838Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:052ad375-a208-e31d-5281-03ecaeb3bb82 Address:127.0.0.1:16288}]"
>             writer.go:29: 2020-02-23T02:46:03.838Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16288 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:03.839Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: EventMemberJoin: Node-052ad375-a208-e31d-5281-03ecaeb3bb82.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.840Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: EventMemberJoin: Node-052ad375-a208-e31d-5281-03ecaeb3bb82 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.840Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Adding LAN server: server="Node-052ad375-a208-e31d-5281-03ecaeb3bb82 (Addr: tcp/127.0.0.1:16288) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:03.840Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Handled event for server in area: event=member-join server=Node-052ad375-a208-e31d-5281-03ecaeb3bb82.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.840Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Started DNS server: address=127.0.0.1:16283 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.841Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Started DNS server: address=127.0.0.1:16283 network=udp
>             writer.go:29: 2020-02-23T02:46:03.841Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Started HTTP server: address=127.0.0.1:16284 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.841Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.888Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.888Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16288 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.891Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.891Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: vote granted: from=052ad375-a208-e31d-5281-03ecaeb3bb82 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.891Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.891Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16288 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.891Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.892Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: New leader elected: payload=Node-052ad375-a208-e31d-5281-03ecaeb3bb82
>             writer.go:29: 2020-02-23T02:46:03.894Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.895Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.895Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.898Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.901Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.901Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.901Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.901Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-052ad375-a208-e31d-5281-03ecaeb3bb82
>             writer.go:29: 2020-02-23T02:46:03.901Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-052ad375-a208-e31d-5281-03ecaeb3bb82.dc1
>             writer.go:29: 2020-02-23T02:46:03.902Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Handled event for server in area: event=member-update server=Node-052ad375-a208-e31d-5281-03ecaeb3bb82.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.905Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.948Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.948Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.949Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-052ad375-a208-e31d-5281-03ecaeb3bb82
>             writer.go:29: 2020-02-23T02:46:03.949Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: member joined, marking health alive: member=Node-052ad375-a208-e31d-5281-03ecaeb3bb82
>             writer.go:29: 2020-02-23T02:46:04.038Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-052ad375-a208-e31d-5281-03ecaeb3bb82
>             writer.go:29: 2020-02-23T02:46:04.063Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.acl: dropping node from result due to ACLs: node=Node-052ad375-a208-e31d-5281-03ecaeb3bb82
>             writer.go:29: 2020-02-23T02:46:04.069Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.069Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.069Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.069Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.069Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.069Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.069Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.069Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:04.069Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.069Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.071Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16283 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16283 network=udp
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16284 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.072Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_not_for_overridden_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar (0.49s)
>             writer.go:29: 2020-02-23T02:46:04.081Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:04.081Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.081Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.081Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.093Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4a7af89b-d05b-4089-341e-1589d776f8b2 Address:127.0.0.1:16300}]"
>             writer.go:29: 2020-02-23T02:46:04.094Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: EventMemberJoin: Node-4a7af89b-d05b-4089-341e-1589d776f8b2.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.094Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: EventMemberJoin: Node-4a7af89b-d05b-4089-341e-1589d776f8b2 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.094Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Started DNS server: address=127.0.0.1:16295 network=udp
>             writer.go:29: 2020-02-23T02:46:04.094Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16300 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.094Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Adding LAN server: server="Node-4a7af89b-d05b-4089-341e-1589d776f8b2 (Addr: tcp/127.0.0.1:16300) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.095Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Handled event for server in area: event=member-join server=Node-4a7af89b-d05b-4089-341e-1589d776f8b2.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.095Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Started DNS server: address=127.0.0.1:16295 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.095Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Started HTTP server: address=127.0.0.1:16296 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.095Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:04.139Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:04.139Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16300 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:04.143Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:04.143Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: vote granted: from=4a7af89b-d05b-4089-341e-1589d776f8b2 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:04.143Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:04.143Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16300 [Leader]"
>             writer.go:29: 2020-02-23T02:46:04.143Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:04.143Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: New leader elected: payload=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.145Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:04.145Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:04.147Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:04.147Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:04.149Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:04.149Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:04.159Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:04.161Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:04.163Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:04.163Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.163Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.163Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.163Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-4a7af89b-d05b-4089-341e-1589d776f8b2.dc1
>             writer.go:29: 2020-02-23T02:46:04.164Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:04.164Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: transitioning out of legacy ACL mode
>             writer.go:29: 2020-02-23T02:46:04.164Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Handled event for server in area: event=member-update server=Node-4a7af89b-d05b-4089-341e-1589d776f8b2.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.164Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.164Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-4a7af89b-d05b-4089-341e-1589d776f8b2.dc1
>             writer.go:29: 2020-02-23T02:46:04.164Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Handled event for server in area: event=member-update server=Node-4a7af89b-d05b-4089-341e-1589d776f8b2.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.169Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:04.177Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:04.177Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.177Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.177Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: member joined, marking health alive: member=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.179Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.180Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.415Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:04.418Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:04.524Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.acl: dropping node from result due to ACLs: node=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.524Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.acl: dropping node from result due to ACLs: node=Node-4a7af89b-d05b-4089-341e-1589d776f8b2
>             writer.go:29: 2020-02-23T02:46:04.540Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: added local registration for service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.540Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:04.542Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:04.543Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.543Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:04.544Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:04.551Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:04.551Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:04.551Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:04.551Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.551Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:04.552Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:04.554Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.554Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.554Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.554Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.554Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.554Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.554Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.554Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.554Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.554Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.556Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16295 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16295 network=udp
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16296 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.558Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/ACL_OK_for_service_but_and_overridden_for_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar (0.23s)
>             writer.go:29: 2020-02-23T02:46:04.565Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.565Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.566Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.575Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ddd21265-7f93-4392-562e-3c45010c0512 Address:127.0.0.1:16312}]"
>             writer.go:29: 2020-02-23T02:46:04.575Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16312 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.575Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-ddd21265-7f93-4392-562e-3c45010c0512.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-ddd21265-7f93-4392-562e-3c45010c0512 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: Handled event for server in area: event=member-join server=Node-ddd21265-7f93-4392-562e-3c45010c0512.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: Adding LAN server: server="Node-ddd21265-7f93-4392-562e-3c45010c0512 (Addr: tcp/127.0.0.1:16312) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Started DNS server: address=127.0.0.1:16307 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Started DNS server: address=127.0.0.1:16307 network=udp
>             writer.go:29: 2020-02-23T02:46:04.577Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Started HTTP server: address=127.0.0.1:16308 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.577Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:04.618Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:04.618Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16312 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:04.621Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:04.621Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: vote granted: from=ddd21265-7f93-4392-562e-3c45010c0512 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:04.621Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:04.621Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16312 [Leader]"
>             writer.go:29: 2020-02-23T02:46:04.621Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:04.621Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: New leader elected: payload=Node-ddd21265-7f93-4392-562e-3c45010c0512
>             writer.go:29: 2020-02-23T02:46:04.629Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:04.637Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:04.637Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.637Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-ddd21265-7f93-4392-562e-3c45010c0512
>             writer.go:29: 2020-02-23T02:46:04.637Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: member joined, marking health alive: member=Node-ddd21265-7f93-4392-562e-3c45010c0512
>             writer.go:29: 2020-02-23T02:46:04.678Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:04.681Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:04.787Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.787Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.787Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.787Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.787Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.789Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.790Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16307 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16307 network=udp
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16308 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.791Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_definition_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar (0.16s)
>             writer.go:29: 2020-02-23T02:46:04.804Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.804Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.805Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.816Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e889fd54-f28a-9b58-4d29-f51e475ea609 Address:127.0.0.1:16324}]"
>             writer.go:29: 2020-02-23T02:46:04.816Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16324 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-e889fd54-f28a-9b58-4d29-f51e475ea609.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.819Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-e889fd54-f28a-9b58-4d29-f51e475ea609 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.819Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Started DNS server: address=127.0.0.1:16319 network=udp
>             writer.go:29: 2020-02-23T02:46:04.819Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: Adding LAN server: server="Node-e889fd54-f28a-9b58-4d29-f51e475ea609 (Addr: tcp/127.0.0.1:16324) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.819Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Started DNS server: address=127.0.0.1:16319 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.819Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: Handled event for server in area: event=member-join server=Node-e889fd54-f28a-9b58-4d29-f51e475ea609.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.820Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Started HTTP server: address=127.0.0.1:16320 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.820Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:04.874Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:04.874Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16324 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:04.918Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:04.918Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: vote granted: from=e889fd54-f28a-9b58-4d29-f51e475ea609 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:04.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:04.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16324 [Leader]"
>             writer.go:29: 2020-02-23T02:46:04.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:04.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: New leader elected: payload=Node-e889fd54-f28a-9b58-4d29-f51e475ea609
>             writer.go:29: 2020-02-23T02:46:04.926Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:04.935Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:04.935Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.935Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-e889fd54-f28a-9b58-4d29-f51e475ea609
>             writer.go:29: 2020-02-23T02:46:04.935Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: member joined, marking health alive: member=Node-e889fd54-f28a-9b58-4d29-f51e475ea609
>             writer.go:29: 2020-02-23T02:46:04.950Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.950Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.950Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.950Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:04.950Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.950Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.951Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16319 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16319 network=udp
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16320 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.953Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_definitions_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar (0.46s)
>             writer.go:29: 2020-02-23T02:46:04.960Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.960Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.961Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.975Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dba5082d-3345-33e3-589f-b8c93a667b07 Address:127.0.0.1:16330}]"
>             writer.go:29: 2020-02-23T02:46:04.975Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-dba5082d-3345-33e3-589f-b8c93a667b07.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.975Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-dba5082d-3345-33e3-589f-b8c93a667b07 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Started DNS server: address=127.0.0.1:16325 network=udp
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16330 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: Adding LAN server: server="Node-dba5082d-3345-33e3-589f-b8c93a667b07 (Addr: tcp/127.0.0.1:16330) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: Handled event for server in area: event=member-join server=Node-dba5082d-3345-33e3-589f-b8c93a667b07.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Started DNS server: address=127.0.0.1:16325 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Started HTTP server: address=127.0.0.1:16326 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.976Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:05.047Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:05.047Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16330 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:05.106Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:05.106Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: vote granted: from=dba5082d-3345-33e3-589f-b8c93a667b07 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:05.106Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:05.106Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16330 [Leader]"
>             writer.go:29: 2020-02-23T02:46:05.107Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:05.107Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: New leader elected: payload=Node-dba5082d-3345-33e3-589f-b8c93a667b07
>             writer.go:29: 2020-02-23T02:46:05.118Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:05.139Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:05.139Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.140Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-dba5082d-3345-33e3-589f-b8c93a667b07
>             writer.go:29: 2020-02-23T02:46:05.140Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: member joined, marking health alive: member=Node-dba5082d-3345-33e3-589f-b8c93a667b07
>             writer.go:29: 2020-02-23T02:46:05.351Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:05.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:05.375Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:05.375Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.376Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.376Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.378Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:05.392Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.416Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16325 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16325 network=udp
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16326 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:05.417Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_check_status_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar (0.25s)
>             writer.go:29: 2020-02-23T02:46:05.439Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:05.439Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:05.440Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:05.450Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:273704eb-c0aa-3267-1560-ae1d3a266d0d Address:127.0.0.1:16342}]"
>             writer.go:29: 2020-02-23T02:46:05.450Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-273704eb-c0aa-3267-1560-ae1d3a266d0d.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.451Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-273704eb-c0aa-3267-1560-ae1d3a266d0d 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.451Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Started DNS server: address=127.0.0.1:16337 network=udp
>             writer.go:29: 2020-02-23T02:46:05.451Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16342 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:05.451Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: Adding LAN server: server="Node-273704eb-c0aa-3267-1560-ae1d3a266d0d (Addr: tcp/127.0.0.1:16342) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:05.451Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: Handled event for server in area: event=member-join server=Node-273704eb-c0aa-3267-1560-ae1d3a266d0d.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:05.451Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Started DNS server: address=127.0.0.1:16337 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.452Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Started HTTP server: address=127.0.0.1:16338 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.452Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:05.491Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:05.491Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16342 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:05.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:05.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: vote granted: from=273704eb-c0aa-3267-1560-ae1d3a266d0d term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:05.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:05.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16342 [Leader]"
>             writer.go:29: 2020-02-23T02:46:05.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:05.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: New leader elected: payload=Node-273704eb-c0aa-3267-1560-ae1d3a266d0d
>             writer.go:29: 2020-02-23T02:46:05.501Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:05.509Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:05.509Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.509Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-273704eb-c0aa-3267-1560-ae1d3a266d0d
>             writer.go:29: 2020-02-23T02:46:05.509Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: member joined, marking health alive: member=Node-273704eb-c0aa-3267-1560-ae1d3a266d0d
>             writer.go:29: 2020-02-23T02:46:05.665Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:05.665Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:05.665Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.665Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.665Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:05.665Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.667Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16337 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16337 network=udp
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16338 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:05.669Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/invalid_checks_status_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered (0.45s)
>             writer.go:29: 2020-02-23T02:46:05.677Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:05.677Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:05.678Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:05.690Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e Address:127.0.0.1:16354}]"
>             writer.go:29: 2020-02-23T02:46:05.690Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: entering follower state: follower="Node at 127.0.0.1:16354 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:05.691Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.wan: serf: EventMemberJoin: Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.691Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.lan: serf: EventMemberJoin: Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.691Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: Handled event for server in area: event=member-join server=Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:05.691Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: Adding LAN server: server="Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e (Addr: tcp/127.0.0.1:16354) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:05.692Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Started DNS server: address=127.0.0.1:16349 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.692Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Started DNS server: address=127.0.0.1:16349 network=udp
>             writer.go:29: 2020-02-23T02:46:05.692Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Started HTTP server: address=127.0.0.1:16350 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.692Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: started state syncer
>             writer.go:29: 2020-02-23T02:46:05.750Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:05.750Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: entering candidate state: node="Node at 127.0.0.1:16354 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:05.753Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:05.753Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: vote granted: from=3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:05.753Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:05.753Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: entering leader state: leader="Node at 127.0.0.1:16354 [Leader]"
>             writer.go:29: 2020-02-23T02:46:05.753Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:05.753Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: New leader elected: payload=Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e
>             writer.go:29: 2020-02-23T02:46:05.760Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:05.768Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:05.768Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.768Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: Skipping self join check for node since the cluster is too small: node=Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e
>             writer.go:29: 2020-02-23T02:46:05.768Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: member joined, marking health alive: member=Node-3ebaf4b9-93dc-09ec-5e74-6c70fbefaf4e
>             writer.go:29: 2020-02-23T02:46:05.829Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:05.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Synced node info
>             writer.go:29: 2020-02-23T02:46:06.111Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.112Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.115Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:06.116Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:06.116Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.116Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Service in sync: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.117Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:06.117Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:06.117Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:06.117Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.117Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.117Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.119Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: consul server down
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: shutdown complete
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Stopping server: protocol=DNS address=127.0.0.1:16349 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Stopping server: protocol=DNS address=127.0.0.1:16349 network=udp
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Stopping server: protocol=HTTP address=127.0.0.1:16350 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:06.121Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work (0.38s)
>             writer.go:29: 2020-02-23T02:46:06.130Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:06.130Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:06.131Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:06.148Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:630b4840-67a4-2b58-98ab-9b543a5b8437 Address:127.0.0.1:16366}]"
>             writer.go:29: 2020-02-23T02:46:06.149Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.serf.wan: serf: EventMemberJoin: Node-630b4840-67a4-2b58-98ab-9b543a5b8437.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.149Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.serf.lan: serf: EventMemberJoin: Node-630b4840-67a4-2b58-98ab-9b543a5b8437 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.149Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Started DNS server: address=127.0.0.1:16361 network=udp
>             writer.go:29: 2020-02-23T02:46:06.149Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: entering follower state: follower="Node at 127.0.0.1:16366 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:06.150Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: Adding LAN server: server="Node-630b4840-67a4-2b58-98ab-9b543a5b8437 (Addr: tcp/127.0.0.1:16366) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:06.150Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: Handled event for server in area: event=member-join server=Node-630b4840-67a4-2b58-98ab-9b543a5b8437.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:06.150Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Started DNS server: address=127.0.0.1:16361 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.150Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Started HTTP server: address=127.0.0.1:16362 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.150Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: started state syncer
>             writer.go:29: 2020-02-23T02:46:06.196Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:06.196Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: entering candidate state: node="Node at 127.0.0.1:16366 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:06.199Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:06.199Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: vote granted: from=630b4840-67a4-2b58-98ab-9b543a5b8437 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:06.199Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:06.199Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.raft: entering leader state: leader="Node at 127.0.0.1:16366 [Leader]"
>             writer.go:29: 2020-02-23T02:46:06.199Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:06.199Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: New leader elected: payload=Node-630b4840-67a4-2b58-98ab-9b543a5b8437
>             writer.go:29: 2020-02-23T02:46:06.206Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:06.214Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:06.214Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.214Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: Skipping self join check for node since the cluster is too small: node=Node-630b4840-67a4-2b58-98ab-9b543a5b8437
>             writer.go:29: 2020-02-23T02:46:06.214Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: member joined, marking health alive: member=Node-630b4840-67a4-2b58-98ab-9b543a5b8437
>             writer.go:29: 2020-02-23T02:46:06.478Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: added local registration for service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.479Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Synced node info
>             writer.go:29: 2020-02-23T02:46:06.482Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.483Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:06.483Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:06.483Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:06.491Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:06.491Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>             writer.go:29: 2020-02-23T02:46:06.491Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:06.491Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.491Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:06.493Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:06.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:06.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:06.494Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:06.494Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.494Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.495Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.496Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: consul server down
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: shutdown complete
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Stopping server: protocol=DNS address=127.0.0.1:16361 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Stopping server: protocol=DNS address=127.0.0.1:16361 network=udp
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Stopping server: protocol=HTTP address=127.0.0.1:16362 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:06.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/updates_to_sidecar_should_work: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it (0.14s)
>             writer.go:29: 2020-02-23T02:46:06.513Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:06.513Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:06.514Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:06.528Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:58d96bd7-6bc4-18ca-0139-733949ca6625 Address:127.0.0.1:16378}]"
>             writer.go:29: 2020-02-23T02:46:06.528Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: entering follower state: follower="Node at 127.0.0.1:16378 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:06.529Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.wan: serf: EventMemberJoin: Node-58d96bd7-6bc4-18ca-0139-733949ca6625.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.529Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.lan: serf: EventMemberJoin: Node-58d96bd7-6bc4-18ca-0139-733949ca6625 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.530Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: Adding LAN server: server="Node-58d96bd7-6bc4-18ca-0139-733949ca6625 (Addr: tcp/127.0.0.1:16378) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:06.530Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: Handled event for server in area: event=member-join server=Node-58d96bd7-6bc4-18ca-0139-733949ca6625.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:06.530Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Started DNS server: address=127.0.0.1:16373 network=udp
>             writer.go:29: 2020-02-23T02:46:06.530Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Started DNS server: address=127.0.0.1:16373 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.535Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Started HTTP server: address=127.0.0.1:16374 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.535Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: started state syncer
>             writer.go:29: 2020-02-23T02:46:06.577Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:06.577Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: entering candidate state: node="Node at 127.0.0.1:16378 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:06.582Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:06.582Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: vote granted: from=58d96bd7-6bc4-18ca-0139-733949ca6625 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: entering leader state: leader="Node at 127.0.0.1:16378 [Leader]"
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: New leader elected: payload=Node-58d96bd7-6bc4-18ca-0139-733949ca6625
>             writer.go:29: 2020-02-23T02:46:06.600Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:06.608Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:06.608Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.608Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: Skipping self join check for node since the cluster is too small: node=Node-58d96bd7-6bc4-18ca-0139-733949ca6625
>             writer.go:29: 2020-02-23T02:46:06.608Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: member joined, marking health alive: member=Node-58d96bd7-6bc4-18ca-0139-733949ca6625
>             writer.go:29: 2020-02-23T02:46:06.624Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: added local registration for service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.628Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Synced node info
>             writer.go:29: 2020-02-23T02:46:06.630Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:06.633Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.635Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:06.635Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.635Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.636Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.638Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:06.638Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:06.638Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:06.638Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.638Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.638Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:06.638Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.640Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: consul server down
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: shutdown complete
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Stopping server: protocol=DNS address=127.0.0.1:16373 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Stopping server: protocol=DNS address=127.0.0.1:16373 network=udp
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Stopping server: protocol=HTTP address=127.0.0.1:16374 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:06.642Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/service_manager/update_that_removes_sidecar_should_NOT_deregister_it: Endpoints down
>     --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal (5.00s)
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case (0.42s)
>             writer.go:29: 2020-02-23T02:46:02.111Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.111Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.112Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.123Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1330266d-5f61-dac4-baca-4a61d941f64b Address:127.0.0.1:16216}]"
>             writer.go:29: 2020-02-23T02:46:02.124Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.serf.wan: serf: EventMemberJoin: Node-1330266d-5f61-dac4-baca-4a61d941f64b.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.124Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.serf.lan: serf: EventMemberJoin: Node-1330266d-5f61-dac4-baca-4a61d941f64b 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.124Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Started DNS server: address=127.0.0.1:16211 network=udp
>             writer.go:29: 2020-02-23T02:46:02.124Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: entering follower state: follower="Node at 127.0.0.1:16216 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.124Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: Adding LAN server: server="Node-1330266d-5f61-dac4-baca-4a61d941f64b (Addr: tcp/127.0.0.1:16216) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.124Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: Handled event for server in area: event=member-join server=Node-1330266d-5f61-dac4-baca-4a61d941f64b.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.125Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Started DNS server: address=127.0.0.1:16211 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.125Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Started HTTP server: address=127.0.0.1:16212 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.125Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: started state syncer
>             writer.go:29: 2020-02-23T02:46:02.186Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:02.186Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: entering candidate state: node="Node at 127.0.0.1:16216 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:02.189Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:02.189Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: vote granted: from=1330266d-5f61-dac4-baca-4a61d941f64b term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:02.189Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:02.189Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.raft: entering leader state: leader="Node at 127.0.0.1:16216 [Leader]"
>             writer.go:29: 2020-02-23T02:46:02.189Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:02.189Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: New leader elected: payload=Node-1330266d-5f61-dac4-baca-4a61d941f64b
>             writer.go:29: 2020-02-23T02:46:02.196Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:02.205Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:02.205Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.205Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: Skipping self join check for node since the cluster is too small: node=Node-1330266d-5f61-dac4-baca-4a61d941f64b
>             writer.go:29: 2020-02-23T02:46:02.205Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: member joined, marking health alive: member=Node-1330266d-5f61-dac4-baca-4a61d941f64b
>             writer.go:29: 2020-02-23T02:46:02.444Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Synced node info
>             writer.go:29: 2020-02-23T02:46:02.472Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:02.472Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:02.472Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:02.472Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.472Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.472Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:02.472Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.493Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: consul server down
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: shutdown complete
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Stopping server: protocol=DNS address=127.0.0.1:16211 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Stopping server: protocol=DNS address=127.0.0.1:16211 network=udp
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Stopping server: protocol=HTTP address=127.0.0.1:16212 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:02.499Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/sanity_check_no_sidecar_case: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar (0.42s)
>             writer.go:29: 2020-02-23T02:46:02.509Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.509Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.510Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.523Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3d76cf7c-0032-6643-b483-cdca1866898c Address:127.0.0.1:16234}]"
>             writer.go:29: 2020-02-23T02:46:02.523Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.serf.wan: serf: EventMemberJoin: Node-3d76cf7c-0032-6643-b483-cdca1866898c.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.523Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.serf.lan: serf: EventMemberJoin: Node-3d76cf7c-0032-6643-b483-cdca1866898c 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Started DNS server: address=127.0.0.1:16229 network=udp
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16234 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: Adding LAN server: server="Node-3d76cf7c-0032-6643-b483-cdca1866898c (Addr: tcp/127.0.0.1:16234) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: Handled event for server in area: event=member-join server=Node-3d76cf7c-0032-6643-b483-cdca1866898c.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Started DNS server: address=127.0.0.1:16229 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Started HTTP server: address=127.0.0.1:16230 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.524Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:02.559Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:02.559Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16234 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:02.562Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:02.562Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: vote granted: from=3d76cf7c-0032-6643-b483-cdca1866898c term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:02.562Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:02.562Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16234 [Leader]"
>             writer.go:29: 2020-02-23T02:46:02.562Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:02.563Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: New leader elected: payload=Node-3d76cf7c-0032-6643-b483-cdca1866898c
>             writer.go:29: 2020-02-23T02:46:02.570Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:02.578Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:02.578Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.578Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-3d76cf7c-0032-6643-b483-cdca1866898c
>             writer.go:29: 2020-02-23T02:46:02.578Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: member joined, marking health alive: member=Node-3d76cf7c-0032-6643-b483-cdca1866898c
>             writer.go:29: 2020-02-23T02:46:02.860Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:02.869Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:02.869Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:02.901Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:02.908Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.909Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:02.909Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:02.909Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:02.910Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>             writer.go:29: 2020-02-23T02:46:02.910Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:02.910Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:02.911Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:02.911Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.911Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:02.912Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:02.914Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:02.914Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:02.914Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:02.914Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.914Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.914Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:02.916Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:02.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:02.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:02.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:02.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16229 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16229 network=udp
>             writer.go:29: 2020-02-23T02:46:02.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16230 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:02.918Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/default_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults (0.25s)
>             writer.go:29: 2020-02-23T02:46:02.943Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:02.943Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:02.943Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:02.944Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:02.955Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8da0722e-8f6a-1c82-33f1-32fdfbe07f98 Address:127.0.0.1:16252}]"
>             writer.go:29: 2020-02-23T02:46:02.956Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.wan: serf: EventMemberJoin: Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.956Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: entering follower state: follower="Node at 127.0.0.1:16252 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:02.956Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.lan: serf: EventMemberJoin: Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:02.956Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Adding LAN server: server="Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98 (Addr: tcp/127.0.0.1:16252) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:02.956Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Handled event for server in area: event=member-join server=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:02.957Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Started DNS server: address=127.0.0.1:16247 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.957Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Started DNS server: address=127.0.0.1:16247 network=udp
>             writer.go:29: 2020-02-23T02:46:02.957Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Started HTTP server: address=127.0.0.1:16248 network=tcp
>             writer.go:29: 2020-02-23T02:46:02.957Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: started state syncer
>             writer.go:29: 2020-02-23T02:46:02.997Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:02.997Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: entering candidate state: node="Node at 127.0.0.1:16252 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.000Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.000Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: vote granted: from=8da0722e-8f6a-1c82-33f1-32fdfbe07f98 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.000Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.000Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.raft: entering leader state: leader="Node at 127.0.0.1:16252 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.000Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.000Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: New leader elected: payload=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.002Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.003Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.003Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.006Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.007Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.007Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.011Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.011Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.011Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.011Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: transitioning out of legacy ACL mode
>             writer.go:29: 2020-02-23T02:46:03.011Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.lan: serf: EventMemberUpdate: Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.011Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.wan: serf: EventMemberUpdate: Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98.dc1
>             writer.go:29: 2020-02-23T02:46:03.011Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Handled event for server in area: event=member-update server=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.012Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.012Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.lan: serf: EventMemberUpdate: Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.012Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.wan: serf: EventMemberUpdate: Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98.dc1
>             writer.go:29: 2020-02-23T02:46:03.012Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Handled event for server in area: event=member-update server=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.015Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.022Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.022Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.022Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Skipping self join check for node since the cluster is too small: node=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.022Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: member joined, marking health alive: member=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.024Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Skipping self join check for node since the cluster is too small: node=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.024Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: Skipping self join check for node since the cluster is too small: node=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.102Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.acl: dropping node from result due to ACLs: node=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.102Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.acl: dropping node from result due to ACLs: node=Node-8da0722e-8f6a-1c82-33f1-32fdfbe07f98
>             writer.go:29: 2020-02-23T02:46:03.158Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Synced node info
>             writer.go:29: 2020-02-23T02:46:03.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:03.162Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:03.162Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:03.162Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:03.163Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:03.163Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:03.163Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:03.163Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:03.163Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Node info in sync
>             writer.go:29: 2020-02-23T02:46:03.166Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:03.167Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:03.167Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:03.168Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:03.168Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.168Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.168Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.168Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.168Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:03.168Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.168Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.168Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.170Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.171Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:03.171Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: consul server down
>             writer.go:29: 2020-02-23T02:46:03.171Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: shutdown complete
>             writer.go:29: 2020-02-23T02:46:03.171Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Stopping server: protocol=DNS address=127.0.0.1:16247 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Stopping server: protocol=DNS address=127.0.0.1:16247 network=udp
>             writer.go:29: 2020-02-23T02:46:03.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Stopping server: protocol=HTTP address=127.0.0.1:16248 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:03.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_defaults: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied (0.13s)
>             writer.go:29: 2020-02-23T02:46:03.180Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:03.180Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:03.180Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:03.180Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:03.190Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:83b6a542-9867-379f-5979-e8c4df67651e Address:127.0.0.1:16258}]"
>             writer.go:29: 2020-02-23T02:46:03.190Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.serf.wan: serf: EventMemberJoin: Node-83b6a542-9867-379f-5979-e8c4df67651e.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.192Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.serf.lan: serf: EventMemberJoin: Node-83b6a542-9867-379f-5979-e8c4df67651e 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.194Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: entering follower state: follower="Node at 127.0.0.1:16258 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:03.196Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Adding LAN server: server="Node-83b6a542-9867-379f-5979-e8c4df67651e (Addr: tcp/127.0.0.1:16258) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:03.196Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Handled event for server in area: event=member-join server=Node-83b6a542-9867-379f-5979-e8c4df67651e.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.197Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Started DNS server: address=127.0.0.1:16253 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.197Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Started DNS server: address=127.0.0.1:16253 network=udp
>             writer.go:29: 2020-02-23T02:46:03.198Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Started HTTP server: address=127.0.0.1:16254 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.198Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.253Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.253Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: entering candidate state: node="Node at 127.0.0.1:16258 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.256Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.256Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: vote granted: from=83b6a542-9867-379f-5979-e8c4df67651e term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.raft: entering leader state: leader="Node at 127.0.0.1:16258 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.257Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.257Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: New leader elected: payload=Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.259Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.260Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.260Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.263Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.266Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.266Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.266Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.267Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.serf.lan: serf: EventMemberUpdate: Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.267Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.serf.wan: serf: EventMemberUpdate: Node-83b6a542-9867-379f-5979-e8c4df67651e.dc1
>             writer.go:29: 2020-02-23T02:46:03.267Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Handled event for server in area: event=member-update server=Node-83b6a542-9867-379f-5979-e8c4df67651e.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.271Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.278Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.278Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.278Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Skipping self join check for node since the cluster is too small: node=Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.278Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: member joined, marking health alive: member=Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.281Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: Skipping self join check for node since the cluster is too small: node=Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.acl: dropping node from result due to ACLs: node=Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.acl: dropping node from result due to ACLs: node=Node-83b6a542-9867-379f-5979-e8c4df67651e
>             writer.go:29: 2020-02-23T02:46:03.294Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:03.294Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.294Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.294Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.294Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.296Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.298Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:03.298Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: consul server down
>             writer.go:29: 2020-02-23T02:46:03.298Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: shutdown complete
>             writer.go:29: 2020-02-23T02:46:03.298Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Stopping server: protocol=DNS address=127.0.0.1:16253 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.298Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Stopping server: protocol=DNS address=127.0.0.1:16253 network=udp
>             writer.go:29: 2020-02-23T02:46:03.298Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Stopping server: protocol=HTTP address=127.0.0.1:16254 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.299Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:03.299Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_denied: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar (0.47s)
>             writer.go:29: 2020-02-23T02:46:03.307Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:03.307Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:03.307Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:03.308Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:03.320Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4f9a7cb4-c260-b32e-9bd0-6831a10008ed Address:127.0.0.1:16270}]"
>             writer.go:29: 2020-02-23T02:46:03.320Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16270 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:03.320Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.serf.wan: serf: EventMemberJoin: Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.321Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.serf.lan: serf: EventMemberJoin: Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.321Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Handled event for server in area: event=member-join server=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.321Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Adding LAN server: server="Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed (Addr: tcp/127.0.0.1:16270) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:03.321Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Started DNS server: address=127.0.0.1:16265 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.321Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Started DNS server: address=127.0.0.1:16265 network=udp
>             writer.go:29: 2020-02-23T02:46:03.322Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Started HTTP server: address=127.0.0.1:16266 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.322Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.379Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.379Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16270 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.386Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.386Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: vote granted: from=4f9a7cb4-c260-b32e-9bd0-6831a10008ed term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.386Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.386Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16270 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.386Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.387Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: New leader elected: payload=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.391Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.393Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.393Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.395Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.400Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.400Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.400Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.400Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.400Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed.dc1
>             writer.go:29: 2020-02-23T02:46:03.400Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Handled event for server in area: event=member-update server=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.404Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.414Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.414Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.414Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.414Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: member joined, marking health alive: member=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.417Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.529Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:03.532Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:03.533Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:03.722Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.acl: dropping node from result due to ACLs: node=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.722Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.acl: dropping node from result due to ACLs: node=Node-4f9a7cb4-c260-b32e-9bd0-6831a10008ed
>             writer.go:29: 2020-02-23T02:46:03.766Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:03.766Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:03.766Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.766Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.766Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.766Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.766Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.766Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.766Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.768Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16265 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16265 network=udp
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16266 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.769Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:03.770Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination (0.30s)
>             writer.go:29: 2020-02-23T02:46:03.777Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:03.777Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:03.777Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:03.778Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:03.786Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1656165f-6010-7723-1688-51ea550fb3b8 Address:127.0.0.1:16282}]"
>             writer.go:29: 2020-02-23T02:46:03.786Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: entering follower state: follower="Node at 127.0.0.1:16282 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:03.787Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.wan: serf: EventMemberJoin: Node-1656165f-6010-7723-1688-51ea550fb3b8.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.787Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.lan: serf: EventMemberJoin: Node-1656165f-6010-7723-1688-51ea550fb3b8 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:03.787Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Adding LAN server: server="Node-1656165f-6010-7723-1688-51ea550fb3b8 (Addr: tcp/127.0.0.1:16282) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:03.787Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Handled event for server in area: event=member-join server=Node-1656165f-6010-7723-1688-51ea550fb3b8.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.788Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Started DNS server: address=127.0.0.1:16277 network=udp
>             writer.go:29: 2020-02-23T02:46:03.788Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Started DNS server: address=127.0.0.1:16277 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.788Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Started HTTP server: address=127.0.0.1:16278 network=tcp
>             writer.go:29: 2020-02-23T02:46:03.788Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: started state syncer
>             writer.go:29: 2020-02-23T02:46:03.836Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:03.836Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: entering candidate state: node="Node at 127.0.0.1:16282 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:03.839Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:03.839Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: vote granted: from=1656165f-6010-7723-1688-51ea550fb3b8 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:03.839Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:03.839Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.raft: entering leader state: leader="Node at 127.0.0.1:16282 [Leader]"
>             writer.go:29: 2020-02-23T02:46:03.839Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:03.839Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: New leader elected: payload=Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:03.841Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:03.842Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:03.842Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:03.845Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:03.852Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:03.852Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:03.852Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:03.852Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.lan: serf: EventMemberUpdate: Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:03.852Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.wan: serf: EventMemberUpdate: Node-1656165f-6010-7723-1688-51ea550fb3b8.dc1
>             writer.go:29: 2020-02-23T02:46:03.852Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Handled event for server in area: event=member-update server=Node-1656165f-6010-7723-1688-51ea550fb3b8.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:03.856Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:03.863Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:03.863Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:03.863Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Skipping self join check for node since the cluster is too small: node=Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:03.863Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: member joined, marking health alive: member=Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:03.866Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: Skipping self join check for node since the cluster is too small: node=Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:03.957Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.acl: dropping node from result due to ACLs: node=Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:03.957Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.acl: dropping node from result due to ACLs: node=Node-1656165f-6010-7723-1688-51ea550fb3b8
>             writer.go:29: 2020-02-23T02:46:04.054Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.054Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.054Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.054Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.054Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.054Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.054Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.054Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:04.054Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.054Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.063Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: consul server down
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Stopping server: protocol=DNS address=127.0.0.1:16277 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Stopping server: protocol=DNS address=127.0.0.1:16277 network=udp
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Stopping server: protocol=HTTP address=127.0.0.1:16278 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.065Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_and_sidecar_but_not_sidecar's_overridden_destination: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar (0.24s)
>             writer.go:29: 2020-02-23T02:46:04.079Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:04.079Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.079Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.079Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.101Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086 Address:127.0.0.1:16294}]"
>             writer.go:29: 2020-02-23T02:46:04.101Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16294 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.101Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: EventMemberJoin: Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.101Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: EventMemberJoin: Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.102Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Handled event for server in area: event=member-join server=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.102Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Adding LAN server: server="Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086 (Addr: tcp/127.0.0.1:16294) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.102Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Started DNS server: address=127.0.0.1:16289 network=udp
>             writer.go:29: 2020-02-23T02:46:04.102Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Started DNS server: address=127.0.0.1:16289 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.102Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Started HTTP server: address=127.0.0.1:16290 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.102Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:04.139Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:04.139Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16294 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:04.142Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:04.142Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: vote granted: from=193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:04.142Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:04.142Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16294 [Leader]"
>             writer.go:29: 2020-02-23T02:46:04.142Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:04.143Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: New leader elected: payload=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.145Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:04.146Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:04.146Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:04.149Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:04.152Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:04.152Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:04.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:04.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086.dc1
>             writer.go:29: 2020-02-23T02:46:04.160Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Handled event for server in area: event=member-update server=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.161Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:04.161Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: transitioning out of legacy ACL mode
>             writer.go:29: 2020-02-23T02:46:04.161Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.161Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086.dc1
>             writer.go:29: 2020-02-23T02:46:04.161Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Handled event for server in area: event=member-update server=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.165Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:04.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:04.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.172Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: member joined, marking health alive: member=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.174Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.174Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.292Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.acl: dropping node from result due to ACLs: node=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.292Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.acl: dropping node from result due to ACLs: node=Node-193fc6d7-b0e6-b5eb-8c9a-48ea5dcfb086
>             writer.go:29: 2020-02-23T02:46:04.297Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.297Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.297Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.297Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.297Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.297Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.297Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:04.297Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.297Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.297Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.299Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16289 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16289 network=udp
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16290 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.301Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_not_for_overridden_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar (0.49s)
>             writer.go:29: 2020-02-23T02:46:04.327Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>             writer.go:29: 2020-02-23T02:46:04.327Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.327Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.328Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.337Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6a4cd708-649f-b851-9c58-912d8ca65245 Address:127.0.0.1:16306}]"
>             writer.go:29: 2020-02-23T02:46:04.338Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16306 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.338Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: EventMemberJoin: Node-6a4cd708-649f-b851-9c58-912d8ca65245.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.338Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: EventMemberJoin: Node-6a4cd708-649f-b851-9c58-912d8ca65245 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.339Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Started DNS server: address=127.0.0.1:16301 network=udp
>             writer.go:29: 2020-02-23T02:46:04.339Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Adding LAN server: server="Node-6a4cd708-649f-b851-9c58-912d8ca65245 (Addr: tcp/127.0.0.1:16306) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.339Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Handled event for server in area: event=member-join server=Node-6a4cd708-649f-b851-9c58-912d8ca65245.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.339Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Started DNS server: address=127.0.0.1:16301 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.339Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Started HTTP server: address=127.0.0.1:16302 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.339Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:04.385Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:04.385Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16306 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:04.389Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:04.389Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: vote granted: from=6a4cd708-649f-b851-9c58-912d8ca65245 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:04.389Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:04.389Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16306 [Leader]"
>             writer.go:29: 2020-02-23T02:46:04.389Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:04.389Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:04.389Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: New leader elected: payload=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.391Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Created ACL 'global-management' policy
>             writer.go:29: 2020-02-23T02:46:04.391Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:04.392Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: initializing acls
>             writer.go:29: 2020-02-23T02:46:04.392Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Configuring a non-UUID master token is deprecated
>             writer.go:29: 2020-02-23T02:46:04.394Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:04.398Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Created ACL anonymous token from configuration
>             writer.go:29: 2020-02-23T02:46:04.398Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: started routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: started routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.399Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: transitioning out of legacy ACL mode
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-6a4cd708-649f-b851-9c58-912d8ca65245.dc1
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Handled event for server in area: event=member-update server=Node-6a4cd708-649f-b851-9c58-912d8ca65245.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Bootstrapped ACL master token from configuration
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: EventMemberUpdate: Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: EventMemberUpdate: Node-6a4cd708-649f-b851-9c58-912d8ca65245.dc1
>             writer.go:29: 2020-02-23T02:46:04.399Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Handled event for server in area: event=member-update server=Node-6a4cd708-649f-b851-9c58-912d8ca65245.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.403Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:04.410Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:04.410Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.410Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.410Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: member joined, marking health alive: member=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.412Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.412Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.539Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:04.542Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:04.542Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:04.765Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.acl: dropping node from result due to ACLs: node=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.765Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.acl: dropping node from result due to ACLs: node=Node-6a4cd708-649f-b851-9c58-912d8ca65245
>             writer.go:29: 2020-02-23T02:46:04.776Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:04.779Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:04.780Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.780Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:04.780Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:04.781Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:04.781Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:04.781Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>             writer.go:29: 2020-02-23T02:46:04.781Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:04.781Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.781Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:04.784Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:04.785Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:04.785Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:04.785Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:04.785Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopping routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.785Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.785Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopping routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.785Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.785Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopped routine: routine="acl token reaping"
>             writer.go:29: 2020-02-23T02:46:04.786Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopped routine: routine="legacy ACL token upgrade"
>             writer.go:29: 2020-02-23T02:46:04.786Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.787Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16301 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16301 network=udp
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16302 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:04.789Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/ACL_OK_for_service_but_and_overridden_for_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar (0.45s)
>             writer.go:29: 2020-02-23T02:46:04.796Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:04.796Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:04.798Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:04.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0d2b3498-7373-fc72-1eee-f3d860a577b2 Address:127.0.0.1:16318}]"
>             writer.go:29: 2020-02-23T02:46:04.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16318 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:04.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-0d2b3498-7373-fc72-1eee-f3d860a577b2.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.817Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-0d2b3498-7373-fc72-1eee-f3d860a577b2 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:04.818Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Started DNS server: address=127.0.0.1:16313 network=udp
>             writer.go:29: 2020-02-23T02:46:04.818Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: Adding LAN server: server="Node-0d2b3498-7373-fc72-1eee-f3d860a577b2 (Addr: tcp/127.0.0.1:16318) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:04.818Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: Handled event for server in area: event=member-join server=Node-0d2b3498-7373-fc72-1eee-f3d860a577b2.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:04.818Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Started DNS server: address=127.0.0.1:16313 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.818Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Started HTTP server: address=127.0.0.1:16314 network=tcp
>             writer.go:29: 2020-02-23T02:46:04.818Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:04.887Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:04.887Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16318 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:04.919Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:04.919Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: vote granted: from=0d2b3498-7373-fc72-1eee-f3d860a577b2 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:04.919Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:04.919Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16318 [Leader]"
>             writer.go:29: 2020-02-23T02:46:04.920Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:04.920Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: New leader elected: payload=Node-0d2b3498-7373-fc72-1eee-f3d860a577b2
>             writer.go:29: 2020-02-23T02:46:04.928Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:04.937Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:04.937Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:04.937Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-0d2b3498-7373-fc72-1eee-f3d860a577b2
>             writer.go:29: 2020-02-23T02:46:04.937Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: member joined, marking health alive: member=Node-0d2b3498-7373-fc72-1eee-f3d860a577b2
>             writer.go:29: 2020-02-23T02:46:05.101Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:05.111Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:05.233Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:05.233Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:05.233Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.233Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.233Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.235Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16313 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16313 network=udp
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16314 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:05.237Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_definition_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar (0.28s)
>             writer.go:29: 2020-02-23T02:46:05.245Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:05.245Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:05.245Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:05.254Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a1a6383e-f428-cc89-ec1c-88b221fa698e Address:127.0.0.1:16336}]"
>             writer.go:29: 2020-02-23T02:46:05.254Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16336 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:05.255Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-a1a6383e-f428-cc89-ec1c-88b221fa698e.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.255Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-a1a6383e-f428-cc89-ec1c-88b221fa698e 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.255Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: Handled event for server in area: event=member-join server=Node-a1a6383e-f428-cc89-ec1c-88b221fa698e.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:05.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: Adding LAN server: server="Node-a1a6383e-f428-cc89-ec1c-88b221fa698e (Addr: tcp/127.0.0.1:16336) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:05.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Started DNS server: address=127.0.0.1:16331 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Started DNS server: address=127.0.0.1:16331 network=udp
>             writer.go:29: 2020-02-23T02:46:05.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Started HTTP server: address=127.0.0.1:16332 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.256Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:05.315Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:05.315Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16336 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:05.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:05.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: vote granted: from=a1a6383e-f428-cc89-ec1c-88b221fa698e term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:05.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:05.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16336 [Leader]"
>             writer.go:29: 2020-02-23T02:46:05.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:05.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: New leader elected: payload=Node-a1a6383e-f428-cc89-ec1c-88b221fa698e
>             writer.go:29: 2020-02-23T02:46:05.430Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:05.439Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:05.446Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:05.446Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.446Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-a1a6383e-f428-cc89-ec1c-88b221fa698e
>             writer.go:29: 2020-02-23T02:46:05.446Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: member joined, marking health alive: member=Node-a1a6383e-f428-cc89-ec1c-88b221fa698e
>             writer.go:29: 2020-02-23T02:46:05.513Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:05.513Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:05.513Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.513Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.513Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.515Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16331 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16331 network=udp
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16332 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:05.517Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_definitions_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar (0.42s)
>             writer.go:29: 2020-02-23T02:46:05.525Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:05.525Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:05.525Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:05.535Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:06e87db1-8dff-88f9-4572-a4c1e66c1170 Address:127.0.0.1:16348}]"
>             writer.go:29: 2020-02-23T02:46:05.535Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16348 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:05.536Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-06e87db1-8dff-88f9-4572-a4c1e66c1170.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.536Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-06e87db1-8dff-88f9-4572-a4c1e66c1170 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.536Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: Adding LAN server: server="Node-06e87db1-8dff-88f9-4572-a4c1e66c1170 (Addr: tcp/127.0.0.1:16348) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:05.536Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: Handled event for server in area: event=member-join server=Node-06e87db1-8dff-88f9-4572-a4c1e66c1170.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:05.537Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Started DNS server: address=127.0.0.1:16343 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.537Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Started DNS server: address=127.0.0.1:16343 network=udp
>             writer.go:29: 2020-02-23T02:46:05.537Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Started HTTP server: address=127.0.0.1:16344 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.537Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:05.572Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:05.572Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16348 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:05.575Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:05.576Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: vote granted: from=06e87db1-8dff-88f9-4572-a4c1e66c1170 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:05.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:05.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16348 [Leader]"
>             writer.go:29: 2020-02-23T02:46:05.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:05.576Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: New leader elected: payload=Node-06e87db1-8dff-88f9-4572-a4c1e66c1170
>             writer.go:29: 2020-02-23T02:46:05.587Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:05.595Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:05.595Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.595Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-06e87db1-8dff-88f9-4572-a4c1e66c1170
>             writer.go:29: 2020-02-23T02:46:05.595Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: member joined, marking health alive: member=Node-06e87db1-8dff-88f9-4572-a4c1e66c1170
>             writer.go:29: 2020-02-23T02:46:05.695Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:05.698Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Synced node info
>             writer.go:29: 2020-02-23T02:46:05.698Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Node info in sync
>             writer.go:29: 2020-02-23T02:46:05.930Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:05.930Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:05.930Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.930Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.930Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:05.932Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16343 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16343 network=udp
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16344 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:05.934Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_check_status_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar (0.22s)
>             writer.go:29: 2020-02-23T02:46:05.942Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:05.943Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:05.943Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:05.959Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d54c4882-de05-837a-6ec4-44456662e02d Address:127.0.0.1:16360}]"
>             writer.go:29: 2020-02-23T02:46:05.959Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16360 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:05.960Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.serf.wan: serf: EventMemberJoin: Node-d54c4882-de05-837a-6ec4-44456662e02d.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.971Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.serf.lan: serf: EventMemberJoin: Node-d54c4882-de05-837a-6ec4-44456662e02d 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:05.972Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Started DNS server: address=127.0.0.1:16355 network=udp
>             writer.go:29: 2020-02-23T02:46:05.972Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: Adding LAN server: server="Node-d54c4882-de05-837a-6ec4-44456662e02d (Addr: tcp/127.0.0.1:16360) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:05.972Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: Handled event for server in area: event=member-join server=Node-d54c4882-de05-837a-6ec4-44456662e02d.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:05.972Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Started DNS server: address=127.0.0.1:16355 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.973Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Started HTTP server: address=127.0.0.1:16356 network=tcp
>             writer.go:29: 2020-02-23T02:46:05.973Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: started state syncer
>             writer.go:29: 2020-02-23T02:46:06.022Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:06.022Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16360 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:06.025Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:06.025Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: vote granted: from=d54c4882-de05-837a-6ec4-44456662e02d term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:06.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:06.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16360 [Leader]"
>             writer.go:29: 2020-02-23T02:46:06.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:06.025Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: New leader elected: payload=Node-d54c4882-de05-837a-6ec4-44456662e02d
>             writer.go:29: 2020-02-23T02:46:06.033Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:06.041Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:06.041Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.041Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: Skipping self join check for node since the cluster is too small: node=Node-d54c4882-de05-837a-6ec4-44456662e02d
>             writer.go:29: 2020-02-23T02:46:06.041Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: member joined, marking health alive: member=Node-d54c4882-de05-837a-6ec4-44456662e02d
>             writer.go:29: 2020-02-23T02:46:06.148Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:06.148Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:06.148Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.148Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.148Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:06.148Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.152Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: consul server down
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: shutdown complete
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16355 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Stopping server: protocol=DNS address=127.0.0.1:16355 network=udp
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16356 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:06.154Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/invalid_checks_status_in_sidecar: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered (0.43s)
>             writer.go:29: 2020-02-23T02:46:06.161Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:06.161Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:06.162Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:06.171Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9f008869-312a-3332-d0e5-280eb015acd2 Address:127.0.0.1:16372}]"
>             writer.go:29: 2020-02-23T02:46:06.171Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.wan: serf: EventMemberJoin: Node-9f008869-312a-3332-d0e5-280eb015acd2.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.lan: serf: EventMemberJoin: Node-9f008869-312a-3332-d0e5-280eb015acd2 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Started DNS server: address=127.0.0.1:16367 network=udp
>             writer.go:29: 2020-02-23T02:46:06.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: entering follower state: follower="Node at 127.0.0.1:16372 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:06.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: Adding LAN server: server="Node-9f008869-312a-3332-d0e5-280eb015acd2 (Addr: tcp/127.0.0.1:16372) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:06.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: Handled event for server in area: event=member-join server=Node-9f008869-312a-3332-d0e5-280eb015acd2.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:06.172Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Started DNS server: address=127.0.0.1:16367 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.173Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Started HTTP server: address=127.0.0.1:16368 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.173Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: started state syncer
>             writer.go:29: 2020-02-23T02:46:06.211Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:06.211Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: entering candidate state: node="Node at 127.0.0.1:16372 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:06.214Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:06.214Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: vote granted: from=9f008869-312a-3332-d0e5-280eb015acd2 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:06.214Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:06.214Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.raft: entering leader state: leader="Node at 127.0.0.1:16372 [Leader]"
>             writer.go:29: 2020-02-23T02:46:06.215Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:06.215Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: New leader elected: payload=Node-9f008869-312a-3332-d0e5-280eb015acd2
>             writer.go:29: 2020-02-23T02:46:06.222Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:06.230Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:06.230Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.230Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: Skipping self join check for node since the cluster is too small: node=Node-9f008869-312a-3332-d0e5-280eb015acd2
>             writer.go:29: 2020-02-23T02:46:06.230Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: member joined, marking health alive: member=Node-9f008869-312a-3332-d0e5-280eb015acd2
>             writer.go:29: 2020-02-23T02:46:06.365Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Skipping remote check since it is managed automatically: check=serfHealth
>             writer.go:29: 2020-02-23T02:46:06.368Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Synced node info
>             writer.go:29: 2020-02-23T02:46:06.368Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.576Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.577Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.580Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:06.580Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:06.580Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.580Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Service in sync: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:06.582Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:06.582Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.582Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.582Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.584Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: consul server down
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: shutdown complete
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Stopping server: protocol=DNS address=127.0.0.1:16367 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Stopping server: protocol=DNS address=127.0.0.1:16367 network=udp
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Stopping server: protocol=HTTP address=127.0.0.1:16368 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:06.586Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/another_service_registered_with_same_ID_as_a_sidecar_should_not_be_deregistered: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work (0.23s)
>             writer.go:29: 2020-02-23T02:46:06.596Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:06.596Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:06.596Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:06.610Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:43ccb63b-e04a-ba49-51aa-7869b3c3b077 Address:127.0.0.1:16384}]"
>             writer.go:29: 2020-02-23T02:46:06.611Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: entering follower state: follower="Node at 127.0.0.1:16384 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:06.611Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.serf.wan: serf: EventMemberJoin: Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.612Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.serf.lan: serf: EventMemberJoin: Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.612Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: Handled event for server in area: event=member-join server=Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:06.612Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: Adding LAN server: server="Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077 (Addr: tcp/127.0.0.1:16384) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:06.612Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Started DNS server: address=127.0.0.1:16379 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.612Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Started DNS server: address=127.0.0.1:16379 network=udp
>             writer.go:29: 2020-02-23T02:46:06.613Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Started HTTP server: address=127.0.0.1:16380 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.613Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: started state syncer
>             writer.go:29: 2020-02-23T02:46:06.659Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:06.659Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: entering candidate state: node="Node at 127.0.0.1:16384 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:06.662Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:06.662Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: vote granted: from=43ccb63b-e04a-ba49-51aa-7869b3c3b077 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:06.662Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:06.662Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.raft: entering leader state: leader="Node at 127.0.0.1:16384 [Leader]"
>             writer.go:29: 2020-02-23T02:46:06.662Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:06.662Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: New leader elected: payload=Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077
>             writer.go:29: 2020-02-23T02:46:06.669Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:06.677Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:06.677Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.677Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: Skipping self join check for node since the cluster is too small: node=Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077
>             writer.go:29: 2020-02-23T02:46:06.677Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: member joined, marking health alive: member=Node-43ccb63b-e04a-ba49-51aa-7869b3c3b077
>             writer.go:29: 2020-02-23T02:46:06.802Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Synced node info
>             writer.go:29: 2020-02-23T02:46:06.804Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.806Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Check in sync: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Check in sync: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: removed check: check=service:web-sidecar-proxy:1
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: removed check: check=service:web-sidecar-proxy:2
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.806Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Node info in sync
>             writer.go:29: 2020-02-23T02:46:06.808Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:06.809Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:06.809Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:06.809Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:06.809Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.809Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.809Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:06.809Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.811Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: consul server down
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: shutdown complete
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Stopping server: protocol=DNS address=127.0.0.1:16379 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Stopping server: protocol=DNS address=127.0.0.1:16379 network=udp
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Stopping server: protocol=HTTP address=127.0.0.1:16380 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:06.813Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/updates_to_sidecar_should_work: Endpoints down
>         --- PASS: TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it (0.26s)
>             writer.go:29: 2020-02-23T02:46:06.820Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: bootstrap = true: do not enable unless necessary
>             writer.go:29: 2020-02-23T02:46:06.821Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.tlsutil: Update: version=1
>             writer.go:29: 2020-02-23T02:46:06.821Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.tlsutil: OutgoingRPCWrapper: version=1
>             writer.go:29: 2020-02-23T02:46:06.831Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:49258a47-e648-49f0-42d0-a189792fda56 Address:127.0.0.1:16390}]"
>             writer.go:29: 2020-02-23T02:46:06.831Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.wan: serf: EventMemberJoin: Node-49258a47-e648-49f0-42d0-a189792fda56.dc1 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.lan: serf: EventMemberJoin: Node-49258a47-e648-49f0-42d0-a189792fda56 127.0.0.1
>             writer.go:29: 2020-02-23T02:46:06.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Started DNS server: address=127.0.0.1:16385 network=udp
>             writer.go:29: 2020-02-23T02:46:06.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: entering follower state: follower="Node at 127.0.0.1:16390 [Follower]" leader=
>             writer.go:29: 2020-02-23T02:46:06.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: Adding LAN server: server="Node-49258a47-e648-49f0-42d0-a189792fda56 (Addr: tcp/127.0.0.1:16390) (DC: dc1)"
>             writer.go:29: 2020-02-23T02:46:06.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: Handled event for server in area: event=member-join server=Node-49258a47-e648-49f0-42d0-a189792fda56.dc1 area=wan
>             writer.go:29: 2020-02-23T02:46:06.832Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Started DNS server: address=127.0.0.1:16385 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.833Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Started HTTP server: address=127.0.0.1:16386 network=tcp
>             writer.go:29: 2020-02-23T02:46:06.833Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: started state syncer
>             writer.go:29: 2020-02-23T02:46:06.891Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: heartbeat timeout reached, starting election: last-leader=
>             writer.go:29: 2020-02-23T02:46:06.891Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: entering candidate state: node="Node at 127.0.0.1:16390 [Candidate]" term=2
>             writer.go:29: 2020-02-23T02:46:06.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: votes: needed=1
>             writer.go:29: 2020-02-23T02:46:06.895Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: vote granted: from=49258a47-e648-49f0-42d0-a189792fda56 term=2 tally=1
>             writer.go:29: 2020-02-23T02:46:06.895Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: election won: tally=1
>             writer.go:29: 2020-02-23T02:46:06.895Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.raft: entering leader state: leader="Node at 127.0.0.1:16390 [Leader]"
>             writer.go:29: 2020-02-23T02:46:06.895Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: cluster leadership acquired
>             writer.go:29: 2020-02-23T02:46:06.895Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: New leader elected: payload=Node-49258a47-e648-49f0-42d0-a189792fda56
>             writer.go:29: 2020-02-23T02:46:06.909Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>             writer.go:29: 2020-02-23T02:46:06.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.connect: initialized primary datacenter CA with provider: provider=consul
>             writer.go:29: 2020-02-23T02:46:06.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.leader: started routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:06.917Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: Skipping self join check for node since the cluster is too small: node=Node-49258a47-e648-49f0-42d0-a189792fda56
>             writer.go:29: 2020-02-23T02:46:06.917Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: member joined, marking health alive: member=Node-49258a47-e648-49f0-42d0-a189792fda56
>             writer.go:29: 2020-02-23T02:46:07.054Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Synced node info
>             writer.go:29: 2020-02-23T02:46:07.057Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Synced service: service=web
>             writer.go:29: 2020-02-23T02:46:07.060Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Synced service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:07.060Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: removed service: service=web
>             writer.go:29: 2020-02-23T02:46:07.060Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: removed service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:07.060Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Node info in sync
>             writer.go:29: 2020-02-23T02:46:07.062Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Deregistered service: service=web
>             writer.go:29: 2020-02-23T02:46:07.064Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Deregistered service: service=web-sidecar-proxy
>             writer.go:29: 2020-02-23T02:46:07.064Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Requesting shutdown
>             writer.go:29: 2020-02-23T02:46:07.064Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server: shutting down server
>             writer.go:29: 2020-02-23T02:46:07.064Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.leader: stopping routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:07.064Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.lan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:07.064Z [ERROR] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.anti_entropy: failed to sync remote state: error="No cluster leader"
>             writer.go:29: 2020-02-23T02:46:07.064Z [DEBUG] TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.leader: stopped routine: routine="CA root pruning"
>             writer.go:29: 2020-02-23T02:46:07.070Z [WARN]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.serf.wan: serf: Shutdown without a Leave
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it.server.router.manager: shutting down
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: consul server down
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: shutdown complete
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Stopping server: protocol=DNS address=127.0.0.1:16385 network=tcp
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Stopping server: protocol=DNS address=127.0.0.1:16385 network=udp
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Stopping server: protocol=HTTP address=127.0.0.1:16386 network=tcp
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Waiting for endpoints to shut down
>             writer.go:29: 2020-02-23T02:46:07.074Z [INFO]  TestAgent_RegisterServiceDeregisterService_Sidecar/normal/update_that_removes_sidecar_should_NOT_deregister_it: Endpoints down
> === RUN   TestAgent_RegisterService_UnmanagedConnectProxyInvalid
> === RUN   TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal
> === PAUSE TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal
> === RUN   TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager
> === PAUSE TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager
> === CONT  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal
> === CONT  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager
> --- PASS: TestAgent_RegisterService_UnmanagedConnectProxyInvalid (0.00s)
>     --- PASS: TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal (0.20s)
>         writer.go:29: 2020-02-23T02:46:07.120Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:07.121Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:07.121Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:07.134Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:61231e12-3855-fc2c-2ce3-e809beb3392d Address:127.0.0.1:16402}]"
>         writer.go:29: 2020-02-23T02:46:07.134Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.serf.wan: serf: EventMemberJoin: Node-61231e12-3855-fc2c-2ce3-e809beb3392d.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.135Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.serf.lan: serf: EventMemberJoin: Node-61231e12-3855-fc2c-2ce3-e809beb3392d 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.135Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Started DNS server: address=127.0.0.1:16397 network=udp
>         writer.go:29: 2020-02-23T02:46:07.135Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16402 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:07.135Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: Adding LAN server: server="Node-61231e12-3855-fc2c-2ce3-e809beb3392d (Addr: tcp/127.0.0.1:16402) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:07.135Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: Handled event for server in area: event=member-join server=Node-61231e12-3855-fc2c-2ce3-e809beb3392d.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:07.135Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Started DNS server: address=127.0.0.1:16397 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.136Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Started HTTP server: address=127.0.0.1:16398 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.136Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:07.202Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:07.202Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16402 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:07.206Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:07.206Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: vote granted: from=61231e12-3855-fc2c-2ce3-e809beb3392d term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:07.206Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:07.206Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16402 [Leader]"
>         writer.go:29: 2020-02-23T02:46:07.206Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:07.206Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: New leader elected: payload=Node-61231e12-3855-fc2c-2ce3-e809beb3392d
>         writer.go:29: 2020-02-23T02:46:07.213Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:07.220Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:07.220Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:07.220Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: Skipping self join check for node since the cluster is too small: node=Node-61231e12-3855-fc2c-2ce3-e809beb3392d
>         writer.go:29: 2020-02-23T02:46:07.221Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: member joined, marking health alive: member=Node-61231e12-3855-fc2c-2ce3-e809beb3392d
>         writer.go:29: 2020-02-23T02:46:07.268Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:07.268Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:07.268Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:07.268Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:07.268Z [ERROR] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:07.268Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:07.269Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Stopping server: protocol=DNS address=127.0.0.1:16397 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Stopping server: protocol=DNS address=127.0.0.1:16397 network=udp
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Stopping server: protocol=HTTP address=127.0.0.1:16398 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:07.271Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager (0.56s)
>         writer.go:29: 2020-02-23T02:46:07.112Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:07.113Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:07.113Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:07.124Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3713825d-357f-58bb-4691-bdb44775efad Address:127.0.0.1:16396}]"
>         writer.go:29: 2020-02-23T02:46:07.125Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.serf.wan: serf: EventMemberJoin: Node-3713825d-357f-58bb-4691-bdb44775efad.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.125Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.serf.lan: serf: EventMemberJoin: Node-3713825d-357f-58bb-4691-bdb44775efad 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.125Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Started DNS server: address=127.0.0.1:16391 network=udp
>         writer.go:29: 2020-02-23T02:46:07.125Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16396 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:07.126Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: Adding LAN server: server="Node-3713825d-357f-58bb-4691-bdb44775efad (Addr: tcp/127.0.0.1:16396) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:07.126Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: Handled event for server in area: event=member-join server=Node-3713825d-357f-58bb-4691-bdb44775efad.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:07.126Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Started DNS server: address=127.0.0.1:16391 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.126Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Started HTTP server: address=127.0.0.1:16392 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.126Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:07.169Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:07.169Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16396 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:07.178Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:07.178Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: vote granted: from=3713825d-357f-58bb-4691-bdb44775efad term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:07.178Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:07.178Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16396 [Leader]"
>         writer.go:29: 2020-02-23T02:46:07.178Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:07.178Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: New leader elected: payload=Node-3713825d-357f-58bb-4691-bdb44775efad
>         writer.go:29: 2020-02-23T02:46:07.186Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:07.194Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:07.194Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:07.194Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-3713825d-357f-58bb-4691-bdb44775efad
>         writer.go:29: 2020-02-23T02:46:07.194Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: member joined, marking health alive: member=Node-3713825d-357f-58bb-4691-bdb44775efad
>         writer.go:29: 2020-02-23T02:46:07.344Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:07.391Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:07.533Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:07.533Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:07.533Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:07.559Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:07.559Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:07.559Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:07.559Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:07.560Z [DEBUG] TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:07.586Z [WARN]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16391 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16391 network=udp
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16392 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:07.631Z [INFO]  TestAgent_RegisterService_UnmanagedConnectProxyInvalid/service_manager: Endpoints down
> === RUN   TestAgent_RegisterService_ConnectNative
> === RUN   TestAgent_RegisterService_ConnectNative/normal
> === PAUSE TestAgent_RegisterService_ConnectNative/normal
> === RUN   TestAgent_RegisterService_ConnectNative/service_manager
> === PAUSE TestAgent_RegisterService_ConnectNative/service_manager
> === CONT  TestAgent_RegisterService_ConnectNative/normal
> === CONT  TestAgent_RegisterService_ConnectNative/service_manager
> --- PASS: TestAgent_RegisterService_ConnectNative (0.00s)
>     --- PASS: TestAgent_RegisterService_ConnectNative/normal (0.41s)
>         writer.go:29: 2020-02-23T02:46:07.638Z [WARN]  TestAgent_RegisterService_ConnectNative/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:07.639Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:07.644Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:07.763Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:31aedf96-e665-0ba5-7ada-78694164a3f0 Address:127.0.0.1:16408}]"
>         writer.go:29: 2020-02-23T02:46:07.763Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16408 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:07.763Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.serf.wan: serf: EventMemberJoin: Node-31aedf96-e665-0ba5-7ada-78694164a3f0.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.764Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.serf.lan: serf: EventMemberJoin: Node-31aedf96-e665-0ba5-7ada-78694164a3f0 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.764Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server: Handled event for server in area: event=member-join server=Node-31aedf96-e665-0ba5-7ada-78694164a3f0.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:07.764Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server: Adding LAN server: server="Node-31aedf96-e665-0ba5-7ada-78694164a3f0 (Addr: tcp/127.0.0.1:16408) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:07.764Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Started DNS server: address=127.0.0.1:16403 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.764Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Started DNS server: address=127.0.0.1:16403 network=udp
>         writer.go:29: 2020-02-23T02:46:07.765Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Started HTTP server: address=127.0.0.1:16404 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.765Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:07.832Z [WARN]  TestAgent_RegisterService_ConnectNative/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:07.832Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16408 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:07.893Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:07.893Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.server.raft: vote granted: from=31aedf96-e665-0ba5-7ada-78694164a3f0 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:07.893Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:07.893Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16408 [Leader]"
>         writer.go:29: 2020-02-23T02:46:07.893Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:07.893Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server: New leader elected: payload=Node-31aedf96-e665-0ba5-7ada-78694164a3f0
>         writer.go:29: 2020-02-23T02:46:07.981Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:08.010Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:08.010Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.010Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.server: Skipping self join check for node since the cluster is too small: node=Node-31aedf96-e665-0ba5-7ada-78694164a3f0
>         writer.go:29: 2020-02-23T02:46:08.010Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server: member joined, marking health alive: member=Node-31aedf96-e665-0ba5-7ada-78694164a3f0
>         writer.go:29: 2020-02-23T02:46:08.040Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:08.041Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Synced service: service=web
>         writer.go:29: 2020-02-23T02:46:08.041Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal: Check in sync: check=service:web
>         writer.go:29: 2020-02-23T02:46:08.041Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:08.041Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:08.041Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.041Z [WARN]  TestAgent_RegisterService_ConnectNative/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.041Z [ERROR] TestAgent_RegisterService_ConnectNative/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:08.041Z [DEBUG] TestAgent_RegisterService_ConnectNative/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.043Z [WARN]  TestAgent_RegisterService_ConnectNative/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Stopping server: protocol=DNS address=127.0.0.1:16403 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Stopping server: protocol=DNS address=127.0.0.1:16403 network=udp
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Stopping server: protocol=HTTP address=127.0.0.1:16404 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:08.045Z [INFO]  TestAgent_RegisterService_ConnectNative/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_ConnectNative/service_manager (0.51s)
>         writer.go:29: 2020-02-23T02:46:07.646Z [WARN]  TestAgent_RegisterService_ConnectNative/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:07.647Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:07.647Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:07.779Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4b5c7577-3d7a-5110-0212-00164302d7f5 Address:127.0.0.1:16414}]"
>         writer.go:29: 2020-02-23T02:46:07.779Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16414 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:07.779Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.serf.wan: serf: EventMemberJoin: Node-4b5c7577-3d7a-5110-0212-00164302d7f5.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.781Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.serf.lan: serf: EventMemberJoin: Node-4b5c7577-3d7a-5110-0212-00164302d7f5 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:07.781Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server: Handled event for server in area: event=member-join server=Node-4b5c7577-3d7a-5110-0212-00164302d7f5.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:07.781Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server: Adding LAN server: server="Node-4b5c7577-3d7a-5110-0212-00164302d7f5 (Addr: tcp/127.0.0.1:16414) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:07.781Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Started DNS server: address=127.0.0.1:16409 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.781Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Started DNS server: address=127.0.0.1:16409 network=udp
>         writer.go:29: 2020-02-23T02:46:07.782Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Started HTTP server: address=127.0.0.1:16410 network=tcp
>         writer.go:29: 2020-02-23T02:46:07.782Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:07.843Z [WARN]  TestAgent_RegisterService_ConnectNative/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:07.843Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16414 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:07.884Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:07.884Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.server.raft: vote granted: from=4b5c7577-3d7a-5110-0212-00164302d7f5 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:07.884Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:07.884Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16414 [Leader]"
>         writer.go:29: 2020-02-23T02:46:07.884Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:07.884Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server: New leader elected: payload=Node-4b5c7577-3d7a-5110-0212-00164302d7f5
>         writer.go:29: 2020-02-23T02:46:07.958Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:08.010Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:08.010Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.010Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-4b5c7577-3d7a-5110-0212-00164302d7f5
>         writer.go:29: 2020-02-23T02:46:08.010Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server: member joined, marking health alive: member=Node-4b5c7577-3d7a-5110-0212-00164302d7f5
>         writer.go:29: 2020-02-23T02:46:08.085Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:08.088Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:08.148Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:08.149Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Synced service: service=web
>         writer.go:29: 2020-02-23T02:46:08.149Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager: Check in sync: check=service:web
>         writer.go:29: 2020-02-23T02:46:08.150Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:08.150Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:08.150Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.150Z [WARN]  TestAgent_RegisterService_ConnectNative/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.150Z [DEBUG] TestAgent_RegisterService_ConnectNative/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.151Z [WARN]  TestAgent_RegisterService_ConnectNative/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16409 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16409 network=udp
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16410 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:08.153Z [INFO]  TestAgent_RegisterService_ConnectNative/service_manager: Endpoints down
> === RUN   TestAgent_RegisterService_ScriptCheck_ExecDisable
> === RUN   TestAgent_RegisterService_ScriptCheck_ExecDisable/normal
> === PAUSE TestAgent_RegisterService_ScriptCheck_ExecDisable/normal
> === RUN   TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager
> === PAUSE TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager
> === CONT  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal
> === CONT  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager
> --- PASS: TestAgent_RegisterService_ScriptCheck_ExecDisable (0.00s)
>     --- PASS: TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager (0.34s)
>         writer.go:29: 2020-02-23T02:46:08.194Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.195Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.195Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:08.211Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ec0a2cf1-99fb-6f85-7179-3dedf9710ba2 Address:127.0.0.1:16426}]"
>         writer.go:29: 2020-02-23T02:46:08.211Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.serf.wan: serf: EventMemberJoin: Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.211Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.serf.lan: serf: EventMemberJoin: Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.211Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Started DNS server: address=127.0.0.1:16421 network=udp
>         writer.go:29: 2020-02-23T02:46:08.212Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16426 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:08.212Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: Adding LAN server: server="Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2 (Addr: tcp/127.0.0.1:16426) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:08.212Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: Handled event for server in area: event=member-join server=Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:08.212Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Started DNS server: address=127.0.0.1:16421 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.212Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Started HTTP server: address=127.0.0.1:16422 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.212Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:08.269Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:08.269Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16426 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:08.272Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:08.272Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: vote granted: from=ec0a2cf1-99fb-6f85-7179-3dedf9710ba2 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:08.272Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:08.272Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16426 [Leader]"
>         writer.go:29: 2020-02-23T02:46:08.272Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:08.272Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: New leader elected: payload=Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2
>         writer.go:29: 2020-02-23T02:46:08.279Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:08.287Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:08.287Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.287Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2
>         writer.go:29: 2020-02-23T02:46:08.287Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: member joined, marking health alive: member=Node-ec0a2cf1-99fb-6f85-7179-3dedf9710ba2
>         writer.go:29: 2020-02-23T02:46:08.478Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:08.480Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:08.493Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:08.493Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:08.493Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.493Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.493Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.495Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16421 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16421 network=udp
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16422 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:08.496Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/service_manager: Endpoints down
>     --- PASS: TestAgent_RegisterService_ScriptCheck_ExecDisable/normal (0.44s)
>         writer.go:29: 2020-02-23T02:46:08.181Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.182Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.186Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:08.204Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8 Address:127.0.0.1:16420}]"
>         writer.go:29: 2020-02-23T02:46:08.205Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.serf.wan: serf: EventMemberJoin: Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.205Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.serf.lan: serf: EventMemberJoin: Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.205Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Started DNS server: address=127.0.0.1:16415 network=udp
>         writer.go:29: 2020-02-23T02:46:08.205Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16420 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:08.205Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: Adding LAN server: server="Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8 (Addr: tcp/127.0.0.1:16420) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:08.205Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: Handled event for server in area: event=member-join server=Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:08.206Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Started DNS server: address=127.0.0.1:16415 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.206Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Started HTTP server: address=127.0.0.1:16416 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.206Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:08.245Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:08.245Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16420 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:08.249Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:08.249Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: vote granted: from=c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:08.249Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:08.249Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16420 [Leader]"
>         writer.go:29: 2020-02-23T02:46:08.249Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:08.249Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: New leader elected: payload=Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8
>         writer.go:29: 2020-02-23T02:46:08.257Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:08.264Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:08.264Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.264Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: Skipping self join check for node since the cluster is too small: node=Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8
>         writer.go:29: 2020-02-23T02:46:08.264Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: member joined, marking health alive: member=Node-c4abcdbb-d37b-ce4d-71b2-9babfefe6ab8
>         writer.go:29: 2020-02-23T02:46:08.447Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:08.449Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:08.449Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:08.594Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:08.594Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:08.594Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.594Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.594Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.596Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Stopping server: protocol=DNS address=127.0.0.1:16415 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Stopping server: protocol=DNS address=127.0.0.1:16415 network=udp
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Stopping server: protocol=HTTP address=127.0.0.1:16416 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:08.597Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecDisable/normal: Endpoints down
> === RUN   TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable
> === RUN   TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal
> === PAUSE TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal
> === RUN   TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager
> === PAUSE TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager
> === CONT  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal
> === CONT  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager
> --- PASS: TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable (0.00s)
>     --- PASS: TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal (0.36s)
>         writer.go:29: 2020-02-23T02:46:08.613Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.614Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.614Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:08.627Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:134d511e-efe6-09df-6a50-c498ff6861df Address:127.0.0.1:16432}]"
>         writer.go:29: 2020-02-23T02:46:08.628Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.serf.wan: serf: EventMemberJoin: Node-134d511e-efe6-09df-6a50-c498ff6861df.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.628Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.serf.lan: serf: EventMemberJoin: Node-134d511e-efe6-09df-6a50-c498ff6861df 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.628Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Started DNS server: address=127.0.0.1:16427 network=udp
>         writer.go:29: 2020-02-23T02:46:08.628Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16432 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:08.629Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: Adding LAN server: server="Node-134d511e-efe6-09df-6a50-c498ff6861df (Addr: tcp/127.0.0.1:16432) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:08.629Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: Handled event for server in area: event=member-join server=Node-134d511e-efe6-09df-6a50-c498ff6861df.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:08.629Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Started DNS server: address=127.0.0.1:16427 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.629Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Started HTTP server: address=127.0.0.1:16428 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.629Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:08.694Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:08.694Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16432 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:08.698Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:08.698Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: vote granted: from=134d511e-efe6-09df-6a50-c498ff6861df term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16432 [Leader]"
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: New leader elected: payload=Node-134d511e-efe6-09df-6a50-c498ff6861df
>         writer.go:29: 2020-02-23T02:46:08.706Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:08.715Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:08.715Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.715Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: Skipping self join check for node since the cluster is too small: node=Node-134d511e-efe6-09df-6a50-c498ff6861df
>         writer.go:29: 2020-02-23T02:46:08.715Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: member joined, marking health alive: member=Node-134d511e-efe6-09df-6a50-c498ff6861df
>         writer.go:29: 2020-02-23T02:46:08.728Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:08.762Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:08.762Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:08.935Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:08.935Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:08.935Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.935Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.935Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.954Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Stopping server: protocol=DNS address=127.0.0.1:16427 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Stopping server: protocol=DNS address=127.0.0.1:16427 network=udp
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Stopping server: protocol=HTTP address=127.0.0.1:16428 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:08.956Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/normal: Endpoints down
>     --- PASS: TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager (0.37s)
>         writer.go:29: 2020-02-23T02:46:08.613Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.613Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.614Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:08.625Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:367300e6-0916-e1f5-28bf-241f4959267d Address:127.0.0.1:16438}]"
>         writer.go:29: 2020-02-23T02:46:08.625Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.serf.wan: serf: EventMemberJoin: Node-367300e6-0916-e1f5-28bf-241f4959267d.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.626Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.serf.lan: serf: EventMemberJoin: Node-367300e6-0916-e1f5-28bf-241f4959267d 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:08.626Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Started DNS server: address=127.0.0.1:16433 network=udp
>         writer.go:29: 2020-02-23T02:46:08.626Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16438 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:08.626Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: Adding LAN server: server="Node-367300e6-0916-e1f5-28bf-241f4959267d (Addr: tcp/127.0.0.1:16438) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:08.626Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: Handled event for server in area: event=member-join server=Node-367300e6-0916-e1f5-28bf-241f4959267d.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:08.626Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Started DNS server: address=127.0.0.1:16433 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.627Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Started HTTP server: address=127.0.0.1:16434 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.627Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:08.695Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:08.695Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16438 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:08.698Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:08.698Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: vote granted: from=367300e6-0916-e1f5-28bf-241f4959267d term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16438 [Leader]"
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:08.698Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: New leader elected: payload=Node-367300e6-0916-e1f5-28bf-241f4959267d
>         writer.go:29: 2020-02-23T02:46:08.706Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:08.715Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:08.715Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.715Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-367300e6-0916-e1f5-28bf-241f4959267d
>         writer.go:29: 2020-02-23T02:46:08.715Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: member joined, marking health alive: member=Node-367300e6-0916-e1f5-28bf-241f4959267d
>         writer.go:29: 2020-02-23T02:46:08.962Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:08.962Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:08.962Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.962Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.962Z [ERROR] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:08.962Z [DEBUG] TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:08.966Z [WARN]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16433 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16433 network=udp
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16434 network=tcp
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:08.968Z [INFO]  TestAgent_RegisterService_ScriptCheck_ExecRemoteDisable/service_manager: Endpoints down
> === RUN   TestAgent_DeregisterService
> === PAUSE TestAgent_DeregisterService
> === RUN   TestAgent_DeregisterService_ACLDeny
> === PAUSE TestAgent_DeregisterService_ACLDeny
> === RUN   TestAgent_ServiceMaintenance_BadRequest
> === PAUSE TestAgent_ServiceMaintenance_BadRequest
> === RUN   TestAgent_ServiceMaintenance_Enable
> --- SKIP: TestAgent_ServiceMaintenance_Enable (0.00s)
>     agent_endpoint_test.go:3938: DM-skipped
> === RUN   TestAgent_ServiceMaintenance_Disable
> === PAUSE TestAgent_ServiceMaintenance_Disable
> === RUN   TestAgent_ServiceMaintenance_ACLDeny
> --- SKIP: TestAgent_ServiceMaintenance_ACLDeny (0.00s)
>     agent_endpoint_test.go:4019: DM-skipped
> === RUN   TestAgent_NodeMaintenance_BadRequest
> === PAUSE TestAgent_NodeMaintenance_BadRequest
> === RUN   TestAgent_NodeMaintenance_Enable
> === PAUSE TestAgent_NodeMaintenance_Enable
> === RUN   TestAgent_NodeMaintenance_Disable
> === PAUSE TestAgent_NodeMaintenance_Disable
> === RUN   TestAgent_NodeMaintenance_ACLDeny
> === PAUSE TestAgent_NodeMaintenance_ACLDeny
> === RUN   TestAgent_RegisterCheck_Service
> === PAUSE TestAgent_RegisterCheck_Service
> === RUN   TestAgent_Monitor
> === PAUSE TestAgent_Monitor
> === RUN   TestAgent_Monitor_ACLDeny
> === PAUSE TestAgent_Monitor_ACLDeny
> === RUN   TestAgent_TokenTriggersFullSync
> === PAUSE TestAgent_TokenTriggersFullSync
> === RUN   TestAgent_Token
> === PAUSE TestAgent_Token
> === RUN   TestAgentConnectCARoots_empty
> === PAUSE TestAgentConnectCARoots_empty
> === RUN   TestAgentConnectCARoots_list
> === PAUSE TestAgentConnectCARoots_list
> === RUN   TestAgentConnectCALeafCert_aclDefaultDeny
> === PAUSE TestAgentConnectCALeafCert_aclDefaultDeny
> === RUN   TestAgentConnectCALeafCert_aclServiceWrite
> === PAUSE TestAgentConnectCALeafCert_aclServiceWrite
> === RUN   TestAgentConnectCALeafCert_aclServiceReadDeny
> === PAUSE TestAgentConnectCALeafCert_aclServiceReadDeny
> === RUN   TestAgentConnectCALeafCert_good
> --- SKIP: TestAgentConnectCALeafCert_good (0.00s)
>     agent_endpoint_test.go:4971: DM-skipped
> === RUN   TestAgentConnectCALeafCert_goodNotLocal
> --- SKIP: TestAgentConnectCALeafCert_goodNotLocal (0.00s)
>     agent_endpoint_test.go:5075: DM-skipped
> === RUN   TestAgentConnectCALeafCert_secondaryDC_good
> === PAUSE TestAgentConnectCALeafCert_secondaryDC_good
> === RUN   TestAgentConnectAuthorize_badBody
> === PAUSE TestAgentConnectAuthorize_badBody
> === RUN   TestAgentConnectAuthorize_noTarget
> === PAUSE TestAgentConnectAuthorize_noTarget
> === RUN   TestAgentConnectAuthorize_idInvalidFormat
> === PAUSE TestAgentConnectAuthorize_idInvalidFormat
> === RUN   TestAgentConnectAuthorize_idNotService
> === PAUSE TestAgentConnectAuthorize_idNotService
> === RUN   TestAgentConnectAuthorize_allow
> === PAUSE TestAgentConnectAuthorize_allow
> === RUN   TestAgentConnectAuthorize_deny
> === PAUSE TestAgentConnectAuthorize_deny
> === RUN   TestAgentConnectAuthorize_allowTrustDomain
> === PAUSE TestAgentConnectAuthorize_allowTrustDomain
> === RUN   TestAgentConnectAuthorize_denyWildcard
> === PAUSE TestAgentConnectAuthorize_denyWildcard
> === RUN   TestAgentConnectAuthorize_serviceWrite
> === PAUSE TestAgentConnectAuthorize_serviceWrite
> === RUN   TestAgentConnectAuthorize_defaultDeny
> === PAUSE TestAgentConnectAuthorize_defaultDeny
> === RUN   TestAgentConnectAuthorize_defaultAllow
> === PAUSE TestAgentConnectAuthorize_defaultAllow
> === RUN   TestAgent_Host
> === PAUSE TestAgent_Host
> === RUN   TestAgent_HostBadACL
> === PAUSE TestAgent_HostBadACL
> === RUN   TestAgent_Services_ExposeConfig
> === PAUSE TestAgent_Services_ExposeConfig
> === RUN   TestAgent_MultiStartStop
> === RUN   TestAgent_MultiStartStop/#00
> === PAUSE TestAgent_MultiStartStop/#00
> === RUN   TestAgent_MultiStartStop/#01
> === PAUSE TestAgent_MultiStartStop/#01
> === RUN   TestAgent_MultiStartStop/#02
> === PAUSE TestAgent_MultiStartStop/#02
> === RUN   TestAgent_MultiStartStop/#03
> === PAUSE TestAgent_MultiStartStop/#03
> === RUN   TestAgent_MultiStartStop/#04
> === PAUSE TestAgent_MultiStartStop/#04
> === RUN   TestAgent_MultiStartStop/#05
> === PAUSE TestAgent_MultiStartStop/#05
> === RUN   TestAgent_MultiStartStop/#06
> === PAUSE TestAgent_MultiStartStop/#06
> === RUN   TestAgent_MultiStartStop/#07
> === PAUSE TestAgent_MultiStartStop/#07
> === RUN   TestAgent_MultiStartStop/#08
> === PAUSE TestAgent_MultiStartStop/#08
> === RUN   TestAgent_MultiStartStop/#09
> === PAUSE TestAgent_MultiStartStop/#09
> === CONT  TestAgent_MultiStartStop/#00
> === CONT  TestAgent_MultiStartStop/#05
> === CONT  TestAgent_MultiStartStop/#09
> === CONT  TestAgent_MultiStartStop/#08
> === CONT  TestAgent_MultiStartStop/#07
> === CONT  TestAgent_MultiStartStop/#06
> === CONT  TestAgent_MultiStartStop/#03
> === CONT  TestAgent_MultiStartStop/#04
> === CONT  TestAgent_MultiStartStop/#02
> === CONT  TestAgent_MultiStartStop/#01
> --- PASS: TestAgent_MultiStartStop (0.00s)
>     --- PASS: TestAgent_MultiStartStop/#05 (0.46s)
>         writer.go:29: 2020-02-23T02:46:08.990Z [WARN]  TestAgent_MultiStartStop/#05: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.991Z [DEBUG] TestAgent_MultiStartStop/#05.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.992Z [DEBUG] TestAgent_MultiStartStop/#05.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#05.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e9a96925-ff95-d871-7c77-2ce8fa94a10e Address:127.0.0.1:16450}]"
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#05.server.raft: entering follower state: follower="Node at 127.0.0.1:16450 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.011Z [INFO]  TestAgent_MultiStartStop/#05.server.serf.wan: serf: EventMemberJoin: Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.011Z [INFO]  TestAgent_MultiStartStop/#05.server.serf.lan: serf: EventMemberJoin: Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.011Z [INFO]  TestAgent_MultiStartStop/#05.server: Adding LAN server: server="Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e (Addr: tcp/127.0.0.1:16450) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.011Z [INFO]  TestAgent_MultiStartStop/#05: Started DNS server: address=127.0.0.1:16445 network=udp
>         writer.go:29: 2020-02-23T02:46:09.011Z [INFO]  TestAgent_MultiStartStop/#05.server: Handled event for server in area: event=member-join server=Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.011Z [INFO]  TestAgent_MultiStartStop/#05: Started DNS server: address=127.0.0.1:16445 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.012Z [INFO]  TestAgent_MultiStartStop/#05: Started HTTP server: address=127.0.0.1:16446 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.012Z [INFO]  TestAgent_MultiStartStop/#05: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.058Z [WARN]  TestAgent_MultiStartStop/#05.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.058Z [INFO]  TestAgent_MultiStartStop/#05.server.raft: entering candidate state: node="Node at 127.0.0.1:16450 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.062Z [DEBUG] TestAgent_MultiStartStop/#05.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.062Z [DEBUG] TestAgent_MultiStartStop/#05.server.raft: vote granted: from=e9a96925-ff95-d871-7c77-2ce8fa94a10e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.062Z [INFO]  TestAgent_MultiStartStop/#05.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.062Z [INFO]  TestAgent_MultiStartStop/#05.server.raft: entering leader state: leader="Node at 127.0.0.1:16450 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.062Z [INFO]  TestAgent_MultiStartStop/#05.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.062Z [INFO]  TestAgent_MultiStartStop/#05.server: New leader elected: payload=Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e
>         writer.go:29: 2020-02-23T02:46:09.071Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.082Z [INFO]  TestAgent_MultiStartStop/#05.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.082Z [INFO]  TestAgent_MultiStartStop/#05.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.082Z [DEBUG] TestAgent_MultiStartStop/#05.server: Skipping self join check for node since the cluster is too small: node=Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e
>         writer.go:29: 2020-02-23T02:46:09.082Z [INFO]  TestAgent_MultiStartStop/#05.server: member joined, marking health alive: member=Node-e9a96925-ff95-d871-7c77-2ce8fa94a10e
>         writer.go:29: 2020-02-23T02:46:09.376Z [DEBUG] TestAgent_MultiStartStop/#05: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.379Z [INFO]  TestAgent_MultiStartStop/#05: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.424Z [INFO]  TestAgent_MultiStartStop/#05: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:09.424Z [INFO]  TestAgent_MultiStartStop/#05.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:09.424Z [DEBUG] TestAgent_MultiStartStop/#05.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.424Z [WARN]  TestAgent_MultiStartStop/#05.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.425Z [DEBUG] TestAgent_MultiStartStop/#05.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.426Z [WARN]  TestAgent_MultiStartStop/#05.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: consul server down
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: shutdown complete
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: Stopping server: protocol=DNS address=127.0.0.1:16445 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: Stopping server: protocol=DNS address=127.0.0.1:16445 network=udp
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: Stopping server: protocol=HTTP address=127.0.0.1:16446 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:09.428Z [INFO]  TestAgent_MultiStartStop/#05: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#09 (0.47s)
>         writer.go:29: 2020-02-23T02:46:08.990Z [WARN]  TestAgent_MultiStartStop/#09: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.991Z [DEBUG] TestAgent_MultiStartStop/#09.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.992Z [DEBUG] TestAgent_MultiStartStop/#09.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.004Z [INFO]  TestAgent_MultiStartStop/#09.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b6cdc26e-2685-9947-62eb-1eaf853c75f9 Address:127.0.0.1:16456}]"
>         writer.go:29: 2020-02-23T02:46:09.004Z [INFO]  TestAgent_MultiStartStop/#09.server.raft: entering follower state: follower="Node at 127.0.0.1:16456 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.005Z [INFO]  TestAgent_MultiStartStop/#09.server.serf.wan: serf: EventMemberJoin: Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.005Z [INFO]  TestAgent_MultiStartStop/#09.server.serf.lan: serf: EventMemberJoin: Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.005Z [INFO]  TestAgent_MultiStartStop/#09.server: Adding LAN server: server="Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9 (Addr: tcp/127.0.0.1:16456) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.005Z [INFO]  TestAgent_MultiStartStop/#09: Started DNS server: address=127.0.0.1:16451 network=udp
>         writer.go:29: 2020-02-23T02:46:09.005Z [INFO]  TestAgent_MultiStartStop/#09.server: Handled event for server in area: event=member-join server=Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.005Z [INFO]  TestAgent_MultiStartStop/#09: Started DNS server: address=127.0.0.1:16451 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.006Z [INFO]  TestAgent_MultiStartStop/#09: Started HTTP server: address=127.0.0.1:16452 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.006Z [INFO]  TestAgent_MultiStartStop/#09: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.053Z [WARN]  TestAgent_MultiStartStop/#09.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.053Z [INFO]  TestAgent_MultiStartStop/#09.server.raft: entering candidate state: node="Node at 127.0.0.1:16456 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.058Z [DEBUG] TestAgent_MultiStartStop/#09.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.058Z [DEBUG] TestAgent_MultiStartStop/#09.server.raft: vote granted: from=b6cdc26e-2685-9947-62eb-1eaf853c75f9 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.058Z [INFO]  TestAgent_MultiStartStop/#09.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.058Z [INFO]  TestAgent_MultiStartStop/#09.server.raft: entering leader state: leader="Node at 127.0.0.1:16456 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.058Z [INFO]  TestAgent_MultiStartStop/#09.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.058Z [INFO]  TestAgent_MultiStartStop/#09.server: New leader elected: payload=Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9
>         writer.go:29: 2020-02-23T02:46:09.068Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.079Z [INFO]  TestAgent_MultiStartStop/#09.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.079Z [INFO]  TestAgent_MultiStartStop/#09.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.079Z [DEBUG] TestAgent_MultiStartStop/#09.server: Skipping self join check for node since the cluster is too small: node=Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9
>         writer.go:29: 2020-02-23T02:46:09.079Z [INFO]  TestAgent_MultiStartStop/#09.server: member joined, marking health alive: member=Node-b6cdc26e-2685-9947-62eb-1eaf853c75f9
>         writer.go:29: 2020-02-23T02:46:09.433Z [INFO]  TestAgent_MultiStartStop/#09: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:09.433Z [INFO]  TestAgent_MultiStartStop/#09.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:09.433Z [DEBUG] TestAgent_MultiStartStop/#09.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.433Z [WARN]  TestAgent_MultiStartStop/#09.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.433Z [ERROR] TestAgent_MultiStartStop/#09.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:09.433Z [DEBUG] TestAgent_MultiStartStop/#09.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.435Z [WARN]  TestAgent_MultiStartStop/#09.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: consul server down
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: shutdown complete
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: Stopping server: protocol=DNS address=127.0.0.1:16451 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: Stopping server: protocol=DNS address=127.0.0.1:16451 network=udp
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: Stopping server: protocol=HTTP address=127.0.0.1:16452 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:09.437Z [INFO]  TestAgent_MultiStartStop/#09: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#00 (0.48s)
>         writer.go:29: 2020-02-23T02:46:08.976Z [WARN]  TestAgent_MultiStartStop/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:08.977Z [DEBUG] TestAgent_MultiStartStop/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:08.977Z [DEBUG] TestAgent_MultiStartStop/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.009Z [INFO]  TestAgent_MultiStartStop/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:041b7b25-f0f6-c5d3-1e09-9acbd2115566 Address:127.0.0.1:16444}]"
>         writer.go:29: 2020-02-23T02:46:09.009Z [INFO]  TestAgent_MultiStartStop/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:16444 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.009Z [INFO]  TestAgent_MultiStartStop/#00.server.serf.wan: serf: EventMemberJoin: Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00.server.serf.lan: serf: EventMemberJoin: Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00.server: Handled event for server in area: event=member-join server=Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00.server: Adding LAN server: server="Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566 (Addr: tcp/127.0.0.1:16444) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00: Started DNS server: address=127.0.0.1:16439 network=udp
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00: Started DNS server: address=127.0.0.1:16439 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00: Started HTTP server: address=127.0.0.1:16440 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.010Z [INFO]  TestAgent_MultiStartStop/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.077Z [WARN]  TestAgent_MultiStartStop/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.077Z [INFO]  TestAgent_MultiStartStop/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:16444 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.082Z [DEBUG] TestAgent_MultiStartStop/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.082Z [DEBUG] TestAgent_MultiStartStop/#00.server.raft: vote granted: from=041b7b25-f0f6-c5d3-1e09-9acbd2115566 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.082Z [INFO]  TestAgent_MultiStartStop/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.082Z [INFO]  TestAgent_MultiStartStop/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:16444 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.083Z [INFO]  TestAgent_MultiStartStop/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.083Z [INFO]  TestAgent_MultiStartStop/#00.server: New leader elected: payload=Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566
>         writer.go:29: 2020-02-23T02:46:09.091Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.098Z [INFO]  TestAgent_MultiStartStop/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.098Z [INFO]  TestAgent_MultiStartStop/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.098Z [DEBUG] TestAgent_MultiStartStop/#00.server: Skipping self join check for node since the cluster is too small: node=Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566
>         writer.go:29: 2020-02-23T02:46:09.098Z [INFO]  TestAgent_MultiStartStop/#00.server: member joined, marking health alive: member=Node-041b7b25-f0f6-c5d3-1e09-9acbd2115566
>         writer.go:29: 2020-02-23T02:46:09.106Z [DEBUG] TestAgent_MultiStartStop/#00: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.111Z [INFO]  TestAgent_MultiStartStop/#00: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.111Z [DEBUG] TestAgent_MultiStartStop/#00: Node info in sync
>         writer.go:29: 2020-02-23T02:46:09.444Z [INFO]  TestAgent_MultiStartStop/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:09.444Z [INFO]  TestAgent_MultiStartStop/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:09.444Z [DEBUG] TestAgent_MultiStartStop/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.444Z [WARN]  TestAgent_MultiStartStop/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.444Z [DEBUG] TestAgent_MultiStartStop/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.446Z [WARN]  TestAgent_MultiStartStop/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.447Z [INFO]  TestAgent_MultiStartStop/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: Stopping server: protocol=DNS address=127.0.0.1:16439 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: Stopping server: protocol=DNS address=127.0.0.1:16439 network=udp
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: Stopping server: protocol=HTTP address=127.0.0.1:16440 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:09.448Z [INFO]  TestAgent_MultiStartStop/#00: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#08 (0.54s)
>         writer.go:29: 2020-02-23T02:46:09.002Z [WARN]  TestAgent_MultiStartStop/#08: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.003Z [DEBUG] TestAgent_MultiStartStop/#08.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.003Z [DEBUG] TestAgent_MultiStartStop/#08.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.014Z [INFO]  TestAgent_MultiStartStop/#08.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8 Address:127.0.0.1:16462}]"
>         writer.go:29: 2020-02-23T02:46:09.014Z [INFO]  TestAgent_MultiStartStop/#08.server.raft: entering follower state: follower="Node at 127.0.0.1:16462 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.014Z [INFO]  TestAgent_MultiStartStop/#08.server.serf.wan: serf: EventMemberJoin: Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.014Z [INFO]  TestAgent_MultiStartStop/#08.server.serf.lan: serf: EventMemberJoin: Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.015Z [INFO]  TestAgent_MultiStartStop/#08.server: Adding LAN server: server="Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8 (Addr: tcp/127.0.0.1:16462) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.015Z [INFO]  TestAgent_MultiStartStop/#08: Started DNS server: address=127.0.0.1:16457 network=udp
>         writer.go:29: 2020-02-23T02:46:09.015Z [INFO]  TestAgent_MultiStartStop/#08.server: Handled event for server in area: event=member-join server=Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.015Z [INFO]  TestAgent_MultiStartStop/#08: Started DNS server: address=127.0.0.1:16457 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.015Z [INFO]  TestAgent_MultiStartStop/#08: Started HTTP server: address=127.0.0.1:16458 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.015Z [INFO]  TestAgent_MultiStartStop/#08: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.052Z [WARN]  TestAgent_MultiStartStop/#08.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.052Z [INFO]  TestAgent_MultiStartStop/#08.server.raft: entering candidate state: node="Node at 127.0.0.1:16462 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.056Z [DEBUG] TestAgent_MultiStartStop/#08.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.056Z [DEBUG] TestAgent_MultiStartStop/#08.server.raft: vote granted: from=3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.056Z [INFO]  TestAgent_MultiStartStop/#08.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.056Z [INFO]  TestAgent_MultiStartStop/#08.server.raft: entering leader state: leader="Node at 127.0.0.1:16462 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.056Z [INFO]  TestAgent_MultiStartStop/#08.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.056Z [INFO]  TestAgent_MultiStartStop/#08.server: New leader elected: payload=Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8
>         writer.go:29: 2020-02-23T02:46:09.065Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.078Z [INFO]  TestAgent_MultiStartStop/#08.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.078Z [INFO]  TestAgent_MultiStartStop/#08.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.078Z [DEBUG] TestAgent_MultiStartStop/#08.server: Skipping self join check for node since the cluster is too small: node=Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8
>         writer.go:29: 2020-02-23T02:46:09.078Z [INFO]  TestAgent_MultiStartStop/#08.server: member joined, marking health alive: member=Node-3c3c4e91-3aad-0d0c-8508-2c7f60b5c4d8
>         writer.go:29: 2020-02-23T02:46:09.115Z [DEBUG] TestAgent_MultiStartStop/#08: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.140Z [INFO]  TestAgent_MultiStartStop/#08: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.507Z [INFO]  TestAgent_MultiStartStop/#08: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:09.507Z [INFO]  TestAgent_MultiStartStop/#08.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:09.507Z [DEBUG] TestAgent_MultiStartStop/#08.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.507Z [WARN]  TestAgent_MultiStartStop/#08.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.507Z [DEBUG] TestAgent_MultiStartStop/#08.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.509Z [WARN]  TestAgent_MultiStartStop/#08.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: consul server down
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: shutdown complete
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: Stopping server: protocol=DNS address=127.0.0.1:16457 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: Stopping server: protocol=DNS address=127.0.0.1:16457 network=udp
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: Stopping server: protocol=HTTP address=127.0.0.1:16458 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:09.511Z [INFO]  TestAgent_MultiStartStop/#08: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#03 (0.43s)
>         writer.go:29: 2020-02-23T02:46:09.455Z [WARN]  TestAgent_MultiStartStop/#03: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.455Z [DEBUG] TestAgent_MultiStartStop/#03.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.456Z [DEBUG] TestAgent_MultiStartStop/#03.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.482Z [INFO]  TestAgent_MultiStartStop/#03.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc87b96e-1aa9-4809-2b08-34c611545669 Address:127.0.0.1:16480}]"
>         writer.go:29: 2020-02-23T02:46:09.482Z [INFO]  TestAgent_MultiStartStop/#03.server.serf.wan: serf: EventMemberJoin: Node-bc87b96e-1aa9-4809-2b08-34c611545669.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#03.server.raft: entering follower state: follower="Node at 127.0.0.1:16480 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#03.server.serf.lan: serf: EventMemberJoin: Node-bc87b96e-1aa9-4809-2b08-34c611545669 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#03.server: Handled event for server in area: event=member-join server=Node-bc87b96e-1aa9-4809-2b08-34c611545669.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#03.server: Adding LAN server: server="Node-bc87b96e-1aa9-4809-2b08-34c611545669 (Addr: tcp/127.0.0.1:16480) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.485Z [INFO]  TestAgent_MultiStartStop/#03: Started DNS server: address=127.0.0.1:16475 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.485Z [INFO]  TestAgent_MultiStartStop/#03: Started DNS server: address=127.0.0.1:16475 network=udp
>         writer.go:29: 2020-02-23T02:46:09.486Z [INFO]  TestAgent_MultiStartStop/#03: Started HTTP server: address=127.0.0.1:16476 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.486Z [INFO]  TestAgent_MultiStartStop/#03: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.549Z [WARN]  TestAgent_MultiStartStop/#03.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.549Z [INFO]  TestAgent_MultiStartStop/#03.server.raft: entering candidate state: node="Node at 127.0.0.1:16480 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.554Z [DEBUG] TestAgent_MultiStartStop/#03.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.554Z [DEBUG] TestAgent_MultiStartStop/#03.server.raft: vote granted: from=bc87b96e-1aa9-4809-2b08-34c611545669 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#03.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#03.server.raft: entering leader state: leader="Node at 127.0.0.1:16480 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#03.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#03.server: New leader elected: payload=Node-bc87b96e-1aa9-4809-2b08-34c611545669
>         writer.go:29: 2020-02-23T02:46:09.561Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.571Z [INFO]  TestAgent_MultiStartStop/#03.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.571Z [INFO]  TestAgent_MultiStartStop/#03.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.571Z [DEBUG] TestAgent_MultiStartStop/#03.server: Skipping self join check for node since the cluster is too small: node=Node-bc87b96e-1aa9-4809-2b08-34c611545669
>         writer.go:29: 2020-02-23T02:46:09.571Z [INFO]  TestAgent_MultiStartStop/#03.server: member joined, marking health alive: member=Node-bc87b96e-1aa9-4809-2b08-34c611545669
>         writer.go:29: 2020-02-23T02:46:09.848Z [DEBUG] TestAgent_MultiStartStop/#03: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.851Z [INFO]  TestAgent_MultiStartStop/#03: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.851Z [DEBUG] TestAgent_MultiStartStop/#03: Node info in sync
>         writer.go:29: 2020-02-23T02:46:09.878Z [INFO]  TestAgent_MultiStartStop/#03: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:09.878Z [INFO]  TestAgent_MultiStartStop/#03.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:09.878Z [DEBUG] TestAgent_MultiStartStop/#03.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.878Z [WARN]  TestAgent_MultiStartStop/#03.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.878Z [DEBUG] TestAgent_MultiStartStop/#03.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.880Z [WARN]  TestAgent_MultiStartStop/#03.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03: consul server down
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03: shutdown complete
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03: Stopping server: protocol=DNS address=127.0.0.1:16475 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03: Stopping server: protocol=DNS address=127.0.0.1:16475 network=udp
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03: Stopping server: protocol=HTTP address=127.0.0.1:16476 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.881Z [INFO]  TestAgent_MultiStartStop/#03: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:09.882Z [INFO]  TestAgent_MultiStartStop/#03: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#07 (0.49s)
>         writer.go:29: 2020-02-23T02:46:09.436Z [WARN]  TestAgent_MultiStartStop/#07: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.436Z [DEBUG] TestAgent_MultiStartStop/#07.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.436Z [DEBUG] TestAgent_MultiStartStop/#07.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.482Z [INFO]  TestAgent_MultiStartStop/#07.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dc10cf96-54be-ea56-b5eb-573be613c3ed Address:127.0.0.1:16468}]"
>         writer.go:29: 2020-02-23T02:46:09.482Z [INFO]  TestAgent_MultiStartStop/#07.server.serf.wan: serf: EventMemberJoin: Node-dc10cf96-54be-ea56-b5eb-573be613c3ed.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.482Z [INFO]  TestAgent_MultiStartStop/#07.server.serf.lan: serf: EventMemberJoin: Node-dc10cf96-54be-ea56-b5eb-573be613c3ed 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07: Started DNS server: address=127.0.0.1:16463 network=udp
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07.server.raft: entering follower state: follower="Node at 127.0.0.1:16468 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07.server: Adding LAN server: server="Node-dc10cf96-54be-ea56-b5eb-573be613c3ed (Addr: tcp/127.0.0.1:16468) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07.server: Handled event for server in area: event=member-join server=Node-dc10cf96-54be-ea56-b5eb-573be613c3ed.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07: Started DNS server: address=127.0.0.1:16463 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07: Started HTTP server: address=127.0.0.1:16464 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.483Z [INFO]  TestAgent_MultiStartStop/#07: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.527Z [WARN]  TestAgent_MultiStartStop/#07.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.527Z [INFO]  TestAgent_MultiStartStop/#07.server.raft: entering candidate state: node="Node at 127.0.0.1:16468 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.532Z [DEBUG] TestAgent_MultiStartStop/#07.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.532Z [DEBUG] TestAgent_MultiStartStop/#07.server.raft: vote granted: from=dc10cf96-54be-ea56-b5eb-573be613c3ed term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.532Z [INFO]  TestAgent_MultiStartStop/#07.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.532Z [INFO]  TestAgent_MultiStartStop/#07.server.raft: entering leader state: leader="Node at 127.0.0.1:16468 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.532Z [INFO]  TestAgent_MultiStartStop/#07.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.532Z [INFO]  TestAgent_MultiStartStop/#07.server: New leader elected: payload=Node-dc10cf96-54be-ea56-b5eb-573be613c3ed
>         writer.go:29: 2020-02-23T02:46:09.539Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.546Z [INFO]  TestAgent_MultiStartStop/#07.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.546Z [INFO]  TestAgent_MultiStartStop/#07.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.546Z [DEBUG] TestAgent_MultiStartStop/#07.server: Skipping self join check for node since the cluster is too small: node=Node-dc10cf96-54be-ea56-b5eb-573be613c3ed
>         writer.go:29: 2020-02-23T02:46:09.547Z [INFO]  TestAgent_MultiStartStop/#07.server: member joined, marking health alive: member=Node-dc10cf96-54be-ea56-b5eb-573be613c3ed
>         writer.go:29: 2020-02-23T02:46:09.793Z [DEBUG] TestAgent_MultiStartStop/#07: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.796Z [INFO]  TestAgent_MultiStartStop/#07: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.911Z [INFO]  TestAgent_MultiStartStop/#07: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:09.911Z [INFO]  TestAgent_MultiStartStop/#07.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:09.911Z [DEBUG] TestAgent_MultiStartStop/#07.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.911Z [WARN]  TestAgent_MultiStartStop/#07.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.911Z [DEBUG] TestAgent_MultiStartStop/#07.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.913Z [WARN]  TestAgent_MultiStartStop/#07.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:09.914Z [INFO]  TestAgent_MultiStartStop/#07.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: consul server down
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: shutdown complete
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: Stopping server: protocol=DNS address=127.0.0.1:16463 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: Stopping server: protocol=DNS address=127.0.0.1:16463 network=udp
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: Stopping server: protocol=HTTP address=127.0.0.1:16464 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:09.915Z [INFO]  TestAgent_MultiStartStop/#07: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#06 (0.63s)
>         writer.go:29: 2020-02-23T02:46:09.444Z [WARN]  TestAgent_MultiStartStop/#06: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.444Z [DEBUG] TestAgent_MultiStartStop/#06.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.445Z [DEBUG] TestAgent_MultiStartStop/#06.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.481Z [INFO]  TestAgent_MultiStartStop/#06.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1e31c1f0-47dc-2a7f-2dda-dee753906e6f Address:127.0.0.1:16474}]"
>         writer.go:29: 2020-02-23T02:46:09.482Z [INFO]  TestAgent_MultiStartStop/#06.server.serf.wan: serf: EventMemberJoin: Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#06.server.raft: entering follower state: follower="Node at 127.0.0.1:16474 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#06.server.serf.lan: serf: EventMemberJoin: Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#06.server: Adding LAN server: server="Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f (Addr: tcp/127.0.0.1:16474) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.484Z [INFO]  TestAgent_MultiStartStop/#06.server: Handled event for server in area: event=member-join server=Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.485Z [INFO]  TestAgent_MultiStartStop/#06: Started DNS server: address=127.0.0.1:16469 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.485Z [INFO]  TestAgent_MultiStartStop/#06: Started DNS server: address=127.0.0.1:16469 network=udp
>         writer.go:29: 2020-02-23T02:46:09.486Z [INFO]  TestAgent_MultiStartStop/#06: Started HTTP server: address=127.0.0.1:16470 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.486Z [INFO]  TestAgent_MultiStartStop/#06: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.551Z [WARN]  TestAgent_MultiStartStop/#06.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.551Z [INFO]  TestAgent_MultiStartStop/#06.server.raft: entering candidate state: node="Node at 127.0.0.1:16474 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.554Z [DEBUG] TestAgent_MultiStartStop/#06.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.554Z [DEBUG] TestAgent_MultiStartStop/#06.server.raft: vote granted: from=1e31c1f0-47dc-2a7f-2dda-dee753906e6f term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#06.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#06.server.raft: entering leader state: leader="Node at 127.0.0.1:16474 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#06.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.554Z [INFO]  TestAgent_MultiStartStop/#06.server: New leader elected: payload=Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f
>         writer.go:29: 2020-02-23T02:46:09.561Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.569Z [INFO]  TestAgent_MultiStartStop/#06.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.570Z [INFO]  TestAgent_MultiStartStop/#06.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.570Z [DEBUG] TestAgent_MultiStartStop/#06.server: Skipping self join check for node since the cluster is too small: node=Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f
>         writer.go:29: 2020-02-23T02:46:09.570Z [INFO]  TestAgent_MultiStartStop/#06.server: member joined, marking health alive: member=Node-1e31c1f0-47dc-2a7f-2dda-dee753906e6f
>         writer.go:29: 2020-02-23T02:46:09.582Z [DEBUG] TestAgent_MultiStartStop/#06: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.584Z [INFO]  TestAgent_MultiStartStop/#06: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.584Z [DEBUG] TestAgent_MultiStartStop/#06: Node info in sync
>         writer.go:29: 2020-02-23T02:46:09.592Z [DEBUG] TestAgent_MultiStartStop/#06: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:09.592Z [DEBUG] TestAgent_MultiStartStop/#06: Node info in sync
>         writer.go:29: 2020-02-23T02:46:10.055Z [INFO]  TestAgent_MultiStartStop/#06: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.055Z [INFO]  TestAgent_MultiStartStop/#06.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:10.055Z [DEBUG] TestAgent_MultiStartStop/#06.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.055Z [WARN]  TestAgent_MultiStartStop/#06.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.055Z [DEBUG] TestAgent_MultiStartStop/#06.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.060Z [WARN]  TestAgent_MultiStartStop/#06.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: consul server down
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: shutdown complete
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: Stopping server: protocol=DNS address=127.0.0.1:16469 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: Stopping server: protocol=DNS address=127.0.0.1:16469 network=udp
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: Stopping server: protocol=HTTP address=127.0.0.1:16470 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:10.065Z [INFO]  TestAgent_MultiStartStop/#06: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#04 (0.58s)
>         writer.go:29: 2020-02-23T02:46:09.519Z [WARN]  TestAgent_MultiStartStop/#04: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.519Z [DEBUG] TestAgent_MultiStartStop/#04.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.519Z [DEBUG] TestAgent_MultiStartStop/#04.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.528Z [INFO]  TestAgent_MultiStartStop/#04.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f420fd60-f5c9-eabe-14bb-f02b0f005dc0 Address:127.0.0.1:16486}]"
>         writer.go:29: 2020-02-23T02:46:09.528Z [INFO]  TestAgent_MultiStartStop/#04.server.raft: entering follower state: follower="Node at 127.0.0.1:16486 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.529Z [INFO]  TestAgent_MultiStartStop/#04.server.serf.wan: serf: EventMemberJoin: Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.529Z [INFO]  TestAgent_MultiStartStop/#04.server.serf.lan: serf: EventMemberJoin: Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.529Z [INFO]  TestAgent_MultiStartStop/#04.server: Handled event for server in area: event=member-join server=Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.529Z [INFO]  TestAgent_MultiStartStop/#04.server: Adding LAN server: server="Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0 (Addr: tcp/127.0.0.1:16486) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.529Z [INFO]  TestAgent_MultiStartStop/#04: Started DNS server: address=127.0.0.1:16481 network=udp
>         writer.go:29: 2020-02-23T02:46:09.529Z [INFO]  TestAgent_MultiStartStop/#04: Started DNS server: address=127.0.0.1:16481 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.530Z [INFO]  TestAgent_MultiStartStop/#04: Started HTTP server: address=127.0.0.1:16482 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.530Z [INFO]  TestAgent_MultiStartStop/#04: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.591Z [WARN]  TestAgent_MultiStartStop/#04.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.591Z [INFO]  TestAgent_MultiStartStop/#04.server.raft: entering candidate state: node="Node at 127.0.0.1:16486 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.594Z [DEBUG] TestAgent_MultiStartStop/#04.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.594Z [DEBUG] TestAgent_MultiStartStop/#04.server.raft: vote granted: from=f420fd60-f5c9-eabe-14bb-f02b0f005dc0 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.594Z [INFO]  TestAgent_MultiStartStop/#04.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.594Z [INFO]  TestAgent_MultiStartStop/#04.server.raft: entering leader state: leader="Node at 127.0.0.1:16486 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.594Z [INFO]  TestAgent_MultiStartStop/#04.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.594Z [INFO]  TestAgent_MultiStartStop/#04.server: New leader elected: payload=Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0
>         writer.go:29: 2020-02-23T02:46:09.599Z [INFO]  TestAgent_MultiStartStop/#04: Synced node info
>         writer.go:29: 2020-02-23T02:46:09.604Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:09.609Z [INFO]  TestAgent_MultiStartStop/#04.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:09.609Z [INFO]  TestAgent_MultiStartStop/#04.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:09.610Z [DEBUG] TestAgent_MultiStartStop/#04.server: Skipping self join check for node since the cluster is too small: node=Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0
>         writer.go:29: 2020-02-23T02:46:09.610Z [INFO]  TestAgent_MultiStartStop/#04.server: member joined, marking health alive: member=Node-f420fd60-f5c9-eabe-14bb-f02b0f005dc0
>         writer.go:29: 2020-02-23T02:46:10.087Z [INFO]  TestAgent_MultiStartStop/#04: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.087Z [INFO]  TestAgent_MultiStartStop/#04.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:10.087Z [DEBUG] TestAgent_MultiStartStop/#04.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.087Z [WARN]  TestAgent_MultiStartStop/#04.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.088Z [DEBUG] TestAgent_MultiStartStop/#04.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.089Z [WARN]  TestAgent_MultiStartStop/#04.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: consul server down
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: shutdown complete
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: Stopping server: protocol=DNS address=127.0.0.1:16481 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: Stopping server: protocol=DNS address=127.0.0.1:16481 network=udp
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: Stopping server: protocol=HTTP address=127.0.0.1:16482 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:10.091Z [INFO]  TestAgent_MultiStartStop/#04: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#01 (0.63s)
>         writer.go:29: 2020-02-23T02:46:09.923Z [WARN]  TestAgent_MultiStartStop/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.923Z [DEBUG] TestAgent_MultiStartStop/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.923Z [DEBUG] TestAgent_MultiStartStop/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.938Z [INFO]  TestAgent_MultiStartStop/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7667e840-7d8e-7c47-8f5c-796057c5b36e Address:127.0.0.1:16498}]"
>         writer.go:29: 2020-02-23T02:46:09.938Z [INFO]  TestAgent_MultiStartStop/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16498 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.939Z [INFO]  TestAgent_MultiStartStop/#01.server.serf.wan: serf: EventMemberJoin: Node-7667e840-7d8e-7c47-8f5c-796057c5b36e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.939Z [INFO]  TestAgent_MultiStartStop/#01.server.serf.lan: serf: EventMemberJoin: Node-7667e840-7d8e-7c47-8f5c-796057c5b36e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.940Z [INFO]  TestAgent_MultiStartStop/#01.server: Handled event for server in area: event=member-join server=Node-7667e840-7d8e-7c47-8f5c-796057c5b36e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.940Z [INFO]  TestAgent_MultiStartStop/#01.server: Adding LAN server: server="Node-7667e840-7d8e-7c47-8f5c-796057c5b36e (Addr: tcp/127.0.0.1:16498) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.940Z [INFO]  TestAgent_MultiStartStop/#01: Started DNS server: address=127.0.0.1:16493 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.940Z [INFO]  TestAgent_MultiStartStop/#01: Started DNS server: address=127.0.0.1:16493 network=udp
>         writer.go:29: 2020-02-23T02:46:09.940Z [INFO]  TestAgent_MultiStartStop/#01: Started HTTP server: address=127.0.0.1:16494 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.940Z [INFO]  TestAgent_MultiStartStop/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.983Z [WARN]  TestAgent_MultiStartStop/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.983Z [INFO]  TestAgent_MultiStartStop/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16498 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:10.036Z [DEBUG] TestAgent_MultiStartStop/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:10.036Z [DEBUG] TestAgent_MultiStartStop/#01.server.raft: vote granted: from=7667e840-7d8e-7c47-8f5c-796057c5b36e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:10.036Z [INFO]  TestAgent_MultiStartStop/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:10.036Z [INFO]  TestAgent_MultiStartStop/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16498 [Leader]"
>         writer.go:29: 2020-02-23T02:46:10.036Z [INFO]  TestAgent_MultiStartStop/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:10.036Z [INFO]  TestAgent_MultiStartStop/#01.server: New leader elected: payload=Node-7667e840-7d8e-7c47-8f5c-796057c5b36e
>         writer.go:29: 2020-02-23T02:46:10.060Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:10.070Z [INFO]  TestAgent_MultiStartStop/#01: Synced node info
>         writer.go:29: 2020-02-23T02:46:10.070Z [DEBUG] TestAgent_MultiStartStop/#01: Node info in sync
>         writer.go:29: 2020-02-23T02:46:10.073Z [INFO]  TestAgent_MultiStartStop/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:10.073Z [INFO]  TestAgent_MultiStartStop/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.073Z [DEBUG] TestAgent_MultiStartStop/#01.server: Skipping self join check for node since the cluster is too small: node=Node-7667e840-7d8e-7c47-8f5c-796057c5b36e
>         writer.go:29: 2020-02-23T02:46:10.073Z [INFO]  TestAgent_MultiStartStop/#01.server: member joined, marking health alive: member=Node-7667e840-7d8e-7c47-8f5c-796057c5b36e
>         writer.go:29: 2020-02-23T02:46:10.490Z [INFO]  TestAgent_MultiStartStop/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.490Z [INFO]  TestAgent_MultiStartStop/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:10.490Z [DEBUG] TestAgent_MultiStartStop/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.490Z [WARN]  TestAgent_MultiStartStop/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.490Z [DEBUG] TestAgent_MultiStartStop/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.509Z [WARN]  TestAgent_MultiStartStop/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.544Z [INFO]  TestAgent_MultiStartStop/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:10.544Z [INFO]  TestAgent_MultiStartStop/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:10.544Z [INFO]  TestAgent_MultiStartStop/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:10.544Z [INFO]  TestAgent_MultiStartStop/#01: Stopping server: protocol=DNS address=127.0.0.1:16493 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.544Z [INFO]  TestAgent_MultiStartStop/#01: Stopping server: protocol=DNS address=127.0.0.1:16493 network=udp
>         writer.go:29: 2020-02-23T02:46:10.545Z [INFO]  TestAgent_MultiStartStop/#01: Stopping server: protocol=HTTP address=127.0.0.1:16494 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.545Z [INFO]  TestAgent_MultiStartStop/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:10.545Z [INFO]  TestAgent_MultiStartStop/#01: Endpoints down
>     --- PASS: TestAgent_MultiStartStop/#02 (0.70s)
>         writer.go:29: 2020-02-23T02:46:09.888Z [WARN]  TestAgent_MultiStartStop/#02: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:09.888Z [DEBUG] TestAgent_MultiStartStop/#02.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:09.888Z [DEBUG] TestAgent_MultiStartStop/#02.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:09.897Z [INFO]  TestAgent_MultiStartStop/#02.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c0066ef0-4f69-1932-730d-70b9a7a4692d Address:127.0.0.1:16492}]"
>         writer.go:29: 2020-02-23T02:46:09.897Z [INFO]  TestAgent_MultiStartStop/#02.server.raft: entering follower state: follower="Node at 127.0.0.1:16492 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:09.898Z [INFO]  TestAgent_MultiStartStop/#02.server.serf.wan: serf: EventMemberJoin: Node-c0066ef0-4f69-1932-730d-70b9a7a4692d.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.899Z [INFO]  TestAgent_MultiStartStop/#02.server.serf.lan: serf: EventMemberJoin: Node-c0066ef0-4f69-1932-730d-70b9a7a4692d 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:09.899Z [INFO]  TestAgent_MultiStartStop/#02.server: Handled event for server in area: event=member-join server=Node-c0066ef0-4f69-1932-730d-70b9a7a4692d.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:09.899Z [INFO]  TestAgent_MultiStartStop/#02.server: Adding LAN server: server="Node-c0066ef0-4f69-1932-730d-70b9a7a4692d (Addr: tcp/127.0.0.1:16492) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:09.899Z [INFO]  TestAgent_MultiStartStop/#02: Started DNS server: address=127.0.0.1:16487 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.899Z [INFO]  TestAgent_MultiStartStop/#02: Started DNS server: address=127.0.0.1:16487 network=udp
>         writer.go:29: 2020-02-23T02:46:09.900Z [INFO]  TestAgent_MultiStartStop/#02: Started HTTP server: address=127.0.0.1:16488 network=tcp
>         writer.go:29: 2020-02-23T02:46:09.900Z [INFO]  TestAgent_MultiStartStop/#02: started state syncer
>         writer.go:29: 2020-02-23T02:46:09.955Z [WARN]  TestAgent_MultiStartStop/#02.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:09.955Z [INFO]  TestAgent_MultiStartStop/#02.server.raft: entering candidate state: node="Node at 127.0.0.1:16492 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:09.958Z [DEBUG] TestAgent_MultiStartStop/#02.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:09.958Z [DEBUG] TestAgent_MultiStartStop/#02.server.raft: vote granted: from=c0066ef0-4f69-1932-730d-70b9a7a4692d term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:09.958Z [INFO]  TestAgent_MultiStartStop/#02.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:09.958Z [INFO]  TestAgent_MultiStartStop/#02.server.raft: entering leader state: leader="Node at 127.0.0.1:16492 [Leader]"
>         writer.go:29: 2020-02-23T02:46:09.958Z [INFO]  TestAgent_MultiStartStop/#02.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:09.958Z [INFO]  TestAgent_MultiStartStop/#02.server: New leader elected: payload=Node-c0066ef0-4f69-1932-730d-70b9a7a4692d
>         writer.go:29: 2020-02-23T02:46:09.980Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:10.040Z [INFO]  TestAgent_MultiStartStop/#02: Synced node info
>         writer.go:29: 2020-02-23T02:46:10.042Z [INFO]  TestAgent_MultiStartStop/#02.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:10.042Z [INFO]  TestAgent_MultiStartStop/#02.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.042Z [DEBUG] TestAgent_MultiStartStop/#02.server: Skipping self join check for node since the cluster is too small: node=Node-c0066ef0-4f69-1932-730d-70b9a7a4692d
>         writer.go:29: 2020-02-23T02:46:10.042Z [INFO]  TestAgent_MultiStartStop/#02.server: member joined, marking health alive: member=Node-c0066ef0-4f69-1932-730d-70b9a7a4692d
>         writer.go:29: 2020-02-23T02:46:10.576Z [INFO]  TestAgent_MultiStartStop/#02: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.576Z [INFO]  TestAgent_MultiStartStop/#02.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:10.576Z [DEBUG] TestAgent_MultiStartStop/#02.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.576Z [WARN]  TestAgent_MultiStartStop/#02.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.576Z [DEBUG] TestAgent_MultiStartStop/#02.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.578Z [WARN]  TestAgent_MultiStartStop/#02.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.579Z [INFO]  TestAgent_MultiStartStop/#02.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:10.579Z [INFO]  TestAgent_MultiStartStop/#02: consul server down
>         writer.go:29: 2020-02-23T02:46:10.579Z [INFO]  TestAgent_MultiStartStop/#02: shutdown complete
>         writer.go:29: 2020-02-23T02:46:10.579Z [INFO]  TestAgent_MultiStartStop/#02: Stopping server: protocol=DNS address=127.0.0.1:16487 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.579Z [INFO]  TestAgent_MultiStartStop/#02: Stopping server: protocol=DNS address=127.0.0.1:16487 network=udp
>         writer.go:29: 2020-02-23T02:46:10.579Z [INFO]  TestAgent_MultiStartStop/#02: Stopping server: protocol=HTTP address=127.0.0.1:16488 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.580Z [INFO]  TestAgent_MultiStartStop/#02: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:10.580Z [INFO]  TestAgent_MultiStartStop/#02: Endpoints down
> === RUN   TestAgent_ConnectClusterIDConfig
> === RUN   TestAgent_ConnectClusterIDConfig/default_TestAgent_has_fixed_cluster_id
> === RUN   TestAgent_ConnectClusterIDConfig/no_cluster_ID_specified_sets_to_test_ID
> === RUN   TestAgent_ConnectClusterIDConfig/non-UUID_cluster_id_is_fatal
> --- PASS: TestAgent_ConnectClusterIDConfig (0.40s)
>     --- PASS: TestAgent_ConnectClusterIDConfig/default_TestAgent_has_fixed_cluster_id (0.22s)
>         writer.go:29: 2020-02-23T02:46:10.587Z [WARN]  default TestAgent has fixed cluster id: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:10.587Z [DEBUG] default TestAgent has fixed cluster id.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:10.588Z [DEBUG] default TestAgent has fixed cluster id.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:10.596Z [INFO]  default TestAgent has fixed cluster id.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:73aa89dd-830e-e0ea-1fa4-51068a31c20a Address:127.0.0.1:16504}]"
>         writer.go:29: 2020-02-23T02:46:10.596Z [INFO]  default TestAgent has fixed cluster id.server.raft: entering follower state: follower="Node at 127.0.0.1:16504 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:10.597Z [INFO]  default TestAgent has fixed cluster id.server.serf.wan: serf: EventMemberJoin: Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id.server.serf.lan: serf: EventMemberJoin: Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id.server: Handled event for server in area: event=member-join server=Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id.server: Adding LAN server: server="Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a (Addr: tcp/127.0.0.1:16504) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id: Started DNS server: address=127.0.0.1:16499 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id: Started DNS server: address=127.0.0.1:16499 network=udp
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id: Started HTTP server: address=127.0.0.1:16500 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.598Z [INFO]  default TestAgent has fixed cluster id: started state syncer
>         writer.go:29: 2020-02-23T02:46:10.639Z [WARN]  default TestAgent has fixed cluster id.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:10.639Z [INFO]  default TestAgent has fixed cluster id.server.raft: entering candidate state: node="Node at 127.0.0.1:16504 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:10.642Z [DEBUG] default TestAgent has fixed cluster id.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:10.642Z [DEBUG] default TestAgent has fixed cluster id.server.raft: vote granted: from=73aa89dd-830e-e0ea-1fa4-51068a31c20a term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:10.643Z [INFO]  default TestAgent has fixed cluster id.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:10.643Z [INFO]  default TestAgent has fixed cluster id.server.raft: entering leader state: leader="Node at 127.0.0.1:16504 [Leader]"
>         writer.go:29: 2020-02-23T02:46:10.643Z [INFO]  default TestAgent has fixed cluster id.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:10.643Z [INFO]  default TestAgent has fixed cluster id.server: New leader elected: payload=Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a
>         writer.go:29: 2020-02-23T02:46:10.650Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:10.658Z [INFO]  default TestAgent has fixed cluster id.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:10.658Z [INFO]  default TestAgent has fixed cluster id.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.658Z [DEBUG] default TestAgent has fixed cluster id.server: Skipping self join check for node since the cluster is too small: node=Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a
>         writer.go:29: 2020-02-23T02:46:10.658Z [INFO]  default TestAgent has fixed cluster id.server: member joined, marking health alive: member=Node-73aa89dd-830e-e0ea-1fa4-51068a31c20a
>         writer.go:29: 2020-02-23T02:46:10.702Z [DEBUG] default TestAgent has fixed cluster id: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:10.705Z [INFO]  default TestAgent has fixed cluster id: Synced node info
>         writer.go:29: 2020-02-23T02:46:10.705Z [DEBUG] default TestAgent has fixed cluster id: Node info in sync
>         writer.go:29: 2020-02-23T02:46:10.795Z [INFO]  default TestAgent has fixed cluster id: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.795Z [INFO]  default TestAgent has fixed cluster id.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:10.795Z [DEBUG] default TestAgent has fixed cluster id.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.795Z [WARN]  default TestAgent has fixed cluster id.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.795Z [DEBUG] default TestAgent has fixed cluster id.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.797Z [WARN]  default TestAgent has fixed cluster id.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: consul server down
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: shutdown complete
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: Stopping server: protocol=DNS address=127.0.0.1:16499 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: Stopping server: protocol=DNS address=127.0.0.1:16499 network=udp
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: Stopping server: protocol=HTTP address=127.0.0.1:16500 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:10.799Z [INFO]  default TestAgent has fixed cluster id: Endpoints down
>     --- PASS: TestAgent_ConnectClusterIDConfig/no_cluster_ID_specified_sets_to_test_ID (0.17s)
>         writer.go:29: 2020-02-23T02:46:10.807Z [WARN]  no cluster ID specified sets to test ID: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:10.807Z [DEBUG] no cluster ID specified sets to test ID.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:10.807Z [DEBUG] no cluster ID specified sets to test ID.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:10.819Z [INFO]  no cluster ID specified sets to test ID.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cc6a265f-4320-aa1d-5985-e91f60b65449 Address:127.0.0.1:16510}]"
>         writer.go:29: 2020-02-23T02:46:10.819Z [INFO]  no cluster ID specified sets to test ID.server.serf.wan: serf: EventMemberJoin: Node-cc6a265f-4320-aa1d-5985-e91f60b65449.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:10.820Z [INFO]  no cluster ID specified sets to test ID.server.serf.lan: serf: EventMemberJoin: Node-cc6a265f-4320-aa1d-5985-e91f60b65449 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:10.820Z [INFO]  no cluster ID specified sets to test ID: Started DNS server: address=127.0.0.1:16505 network=udp
>         writer.go:29: 2020-02-23T02:46:10.820Z [INFO]  no cluster ID specified sets to test ID.server.raft: entering follower state: follower="Node at 127.0.0.1:16510 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:10.820Z [INFO]  no cluster ID specified sets to test ID.server: Adding LAN server: server="Node-cc6a265f-4320-aa1d-5985-e91f60b65449 (Addr: tcp/127.0.0.1:16510) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:10.820Z [INFO]  no cluster ID specified sets to test ID.server: Handled event for server in area: event=member-join server=Node-cc6a265f-4320-aa1d-5985-e91f60b65449.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:10.820Z [INFO]  no cluster ID specified sets to test ID: Started DNS server: address=127.0.0.1:16505 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.821Z [INFO]  no cluster ID specified sets to test ID: Started HTTP server: address=127.0.0.1:16506 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.821Z [INFO]  no cluster ID specified sets to test ID: started state syncer
>         writer.go:29: 2020-02-23T02:46:10.888Z [WARN]  no cluster ID specified sets to test ID.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:10.888Z [INFO]  no cluster ID specified sets to test ID.server.raft: entering candidate state: node="Node at 127.0.0.1:16510 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:10.891Z [DEBUG] no cluster ID specified sets to test ID.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:10.891Z [DEBUG] no cluster ID specified sets to test ID.server.raft: vote granted: from=cc6a265f-4320-aa1d-5985-e91f60b65449 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:10.891Z [INFO]  no cluster ID specified sets to test ID.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:10.891Z [INFO]  no cluster ID specified sets to test ID.server.raft: entering leader state: leader="Node at 127.0.0.1:16510 [Leader]"
>         writer.go:29: 2020-02-23T02:46:10.891Z [INFO]  no cluster ID specified sets to test ID.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:10.891Z [INFO]  no cluster ID specified sets to test ID.server: New leader elected: payload=Node-cc6a265f-4320-aa1d-5985-e91f60b65449
>         writer.go:29: 2020-02-23T02:46:10.899Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:10.907Z [INFO]  no cluster ID specified sets to test ID.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:10.907Z [INFO]  no cluster ID specified sets to test ID.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.907Z [DEBUG] no cluster ID specified sets to test ID.server: Skipping self join check for node since the cluster is too small: node=Node-cc6a265f-4320-aa1d-5985-e91f60b65449
>         writer.go:29: 2020-02-23T02:46:10.907Z [INFO]  no cluster ID specified sets to test ID.server: member joined, marking health alive: member=Node-cc6a265f-4320-aa1d-5985-e91f60b65449
>         writer.go:29: 2020-02-23T02:46:10.969Z [INFO]  no cluster ID specified sets to test ID: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.969Z [INFO]  no cluster ID specified sets to test ID.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:10.969Z [DEBUG] no cluster ID specified sets to test ID.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.969Z [WARN]  no cluster ID specified sets to test ID.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.969Z [ERROR] no cluster ID specified sets to test ID.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:10.969Z [DEBUG] no cluster ID specified sets to test ID.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:10.970Z [WARN]  no cluster ID specified sets to test ID.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: consul server down
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: shutdown complete
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: Stopping server: protocol=DNS address=127.0.0.1:16505 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: Stopping server: protocol=DNS address=127.0.0.1:16505 network=udp
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: Stopping server: protocol=HTTP address=127.0.0.1:16506 network=tcp
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:10.972Z [INFO]  no cluster ID specified sets to test ID: Endpoints down
>     --- PASS: TestAgent_ConnectClusterIDConfig/non-UUID_cluster_id_is_fatal (0.01s)
>         writer.go:29: 2020-02-23T02:46:10.980Z [WARN]  non-UUID cluster_id is fatal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:10.980Z [ERROR] non-UUID cluster_id is fatal: connect CA config cluster_id specified but is not a valid UUID, aborting startup
>         writer.go:29: 2020-02-23T02:46:10.980Z [INFO]  non-UUID cluster_id is fatal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:10.980Z [INFO]  non-UUID cluster_id is fatal: shutdown complete
> === RUN   TestAgent_StartStop
> === PAUSE TestAgent_StartStop
> === RUN   TestAgent_RPCPing
> === PAUSE TestAgent_RPCPing
> === RUN   TestAgent_TokenStore
> === PAUSE TestAgent_TokenStore
> === RUN   TestAgent_ReconnectConfigSettings
> === PAUSE TestAgent_ReconnectConfigSettings
> === RUN   TestAgent_ReconnectConfigWanDisabled
> === PAUSE TestAgent_ReconnectConfigWanDisabled
> === RUN   TestAgent_setupNodeID
> === PAUSE TestAgent_setupNodeID
> === RUN   TestAgent_makeNodeID
> === PAUSE TestAgent_makeNodeID
> === RUN   TestAgent_AddService
> === RUN   TestAgent_AddService/normal
> === PAUSE TestAgent_AddService/normal
> === RUN   TestAgent_AddService/service_manager
> === PAUSE TestAgent_AddService/service_manager
> === CONT  TestAgent_AddService/normal
> === CONT  TestAgent_AddService/service_manager
> === RUN   TestAgent_AddService/service_manager/one_check
> === RUN   TestAgent_AddService/service_manager/one_check/svcid1
> === RUN   TestAgent_AddService/service_manager/one_check/check1
> === RUN   TestAgent_AddService/service_manager/one_check/check1_ttl
> === RUN   TestAgent_AddService/service_manager/multiple_checks
> === RUN   TestAgent_AddService/service_manager/multiple_checks/svcid2
> === RUN   TestAgent_AddService/service_manager/multiple_checks/check1
> === RUN   TestAgent_AddService/service_manager/multiple_checks/check-noname
> === RUN   TestAgent_AddService/service_manager/multiple_checks/service:svcid2:3
> === RUN   TestAgent_AddService/service_manager/multiple_checks/service:svcid2:4
> === RUN   TestAgent_AddService/service_manager/multiple_checks/check1_ttl
> === RUN   TestAgent_AddService/service_manager/multiple_checks/check-noname_ttl
> === RUN   TestAgent_AddService/service_manager/multiple_checks/service:svcid2:3_ttl
> === RUN   TestAgent_AddService/service_manager/multiple_checks/service:svcid2:4_ttl
> === RUN   TestAgent_AddService/normal/one_check
> === RUN   TestAgent_AddService/normal/one_check/svcid1
> === RUN   TestAgent_AddService/normal/one_check/check1
> === RUN   TestAgent_AddService/normal/one_check/check1_ttl
> === RUN   TestAgent_AddService/normal/multiple_checks
> === RUN   TestAgent_AddService/normal/multiple_checks/svcid2
> === RUN   TestAgent_AddService/normal/multiple_checks/check1
> === RUN   TestAgent_AddService/normal/multiple_checks/check-noname
> === RUN   TestAgent_AddService/normal/multiple_checks/service:svcid2:3
> === RUN   TestAgent_AddService/normal/multiple_checks/service:svcid2:4
> === RUN   TestAgent_AddService/normal/multiple_checks/check1_ttl
> === RUN   TestAgent_AddService/normal/multiple_checks/check-noname_ttl
> === RUN   TestAgent_AddService/normal/multiple_checks/service:svcid2:3_ttl
> === RUN   TestAgent_AddService/normal/multiple_checks/service:svcid2:4_ttl
> --- PASS: TestAgent_AddService (0.00s)
>     --- PASS: TestAgent_AddService/service_manager (0.27s)
>         writer.go:29: 2020-02-23T02:46:10.988Z [WARN]  TestAgent_AddService/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:10.988Z [DEBUG] TestAgent_AddService/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:10.989Z [DEBUG] TestAgent_AddService/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:11.009Z [INFO]  TestAgent_AddService/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:74b72c26-eeea-f629-fa14-c919ab08a9ef Address:127.0.0.1:16522}]"
>         writer.go:29: 2020-02-23T02:46:11.009Z [INFO]  TestAgent_AddService/service_manager.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.009Z [INFO]  TestAgent_AddService/service_manager.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager: Started DNS server: address=127.0.0.1:16517 network=udp
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16522 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16522) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager: Started DNS server: address=127.0.0.1:16517 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager: Started HTTP server: address=127.0.0.1:16518 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.010Z [INFO]  TestAgent_AddService/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:11.071Z [WARN]  TestAgent_AddService/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:11.071Z [INFO]  TestAgent_AddService/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16522 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:11.077Z [DEBUG] TestAgent_AddService/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:11.077Z [DEBUG] TestAgent_AddService/service_manager.server.raft: vote granted: from=74b72c26-eeea-f629-fa14-c919ab08a9ef term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:11.077Z [INFO]  TestAgent_AddService/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:11.077Z [INFO]  TestAgent_AddService/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16522 [Leader]"
>         writer.go:29: 2020-02-23T02:46:11.077Z [INFO]  TestAgent_AddService/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:11.077Z [INFO]  TestAgent_AddService/service_manager.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:11.097Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:11.106Z [INFO]  TestAgent_AddService/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:11.106Z [INFO]  TestAgent_AddService/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.106Z [DEBUG] TestAgent_AddService/service_manager.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:11.106Z [INFO]  TestAgent_AddService/service_manager.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:11.185Z [DEBUG] TestAgent_AddService/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:11.187Z [INFO]  TestAgent_AddService/service_manager: Synced node info
>         --- PASS: TestAgent_AddService/service_manager/one_check (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/one_check/svcid1 (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/one_check/check1 (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/one_check/check1_ttl (0.00s)
>         --- PASS: TestAgent_AddService/service_manager/multiple_checks (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/svcid2 (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/check1 (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/check-noname (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/service:svcid2:3 (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/service:svcid2:4 (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/check1_ttl (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/check-noname_ttl (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/service:svcid2:3_ttl (0.00s)
>             --- PASS: TestAgent_AddService/service_manager/multiple_checks/service:svcid2:4_ttl (0.00s)
>         writer.go:29: 2020-02-23T02:46:11.248Z [INFO]  TestAgent_AddService/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:11.248Z [INFO]  TestAgent_AddService/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:11.248Z [DEBUG] TestAgent_AddService/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.248Z [WARN]  TestAgent_AddService/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.248Z [DEBUG] TestAgent_AddService/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.251Z [WARN]  TestAgent_AddService/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16517 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16517 network=udp
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16518 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.252Z [INFO]  TestAgent_AddService/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:11.253Z [INFO]  TestAgent_AddService/service_manager: Endpoints down
>     --- PASS: TestAgent_AddService/normal (0.47s)
>         writer.go:29: 2020-02-23T02:46:11.005Z [WARN]  TestAgent_AddService/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:11.005Z [DEBUG] TestAgent_AddService/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:11.006Z [DEBUG] TestAgent_AddService/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:11.018Z [INFO]  TestAgent_AddService/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4ba65101-a056-a6c9-ab6a-0fc5327361f7 Address:127.0.0.1:16528}]"
>         writer.go:29: 2020-02-23T02:46:11.018Z [INFO]  TestAgent_AddService/normal.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16528 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal: Started DNS server: address=127.0.0.1:16523 network=udp
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16528) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal: Started DNS server: address=127.0.0.1:16523 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal: Started HTTP server: address=127.0.0.1:16524 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.019Z [INFO]  TestAgent_AddService/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:11.058Z [WARN]  TestAgent_AddService/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:11.058Z [INFO]  TestAgent_AddService/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16528 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:11.064Z [DEBUG] TestAgent_AddService/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:11.064Z [DEBUG] TestAgent_AddService/normal.server.raft: vote granted: from=4ba65101-a056-a6c9-ab6a-0fc5327361f7 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:11.064Z [INFO]  TestAgent_AddService/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:11.064Z [INFO]  TestAgent_AddService/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16528 [Leader]"
>         writer.go:29: 2020-02-23T02:46:11.065Z [INFO]  TestAgent_AddService/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:11.065Z [INFO]  TestAgent_AddService/normal.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:11.073Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:11.081Z [INFO]  TestAgent_AddService/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:11.081Z [INFO]  TestAgent_AddService/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.081Z [DEBUG] TestAgent_AddService/normal.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:11.081Z [INFO]  TestAgent_AddService/normal.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:11.107Z [DEBUG] TestAgent_AddService/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:11.110Z [INFO]  TestAgent_AddService/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:11.110Z [DEBUG] TestAgent_AddService/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:11.283Z [DEBUG] TestAgent_AddService/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:11.283Z [DEBUG] TestAgent_AddService/normal: Node info in sync
>         --- PASS: TestAgent_AddService/normal/one_check (0.00s)
>             --- PASS: TestAgent_AddService/normal/one_check/svcid1 (0.00s)
>             --- PASS: TestAgent_AddService/normal/one_check/check1 (0.00s)
>             --- PASS: TestAgent_AddService/normal/one_check/check1_ttl (0.00s)
>         --- PASS: TestAgent_AddService/normal/multiple_checks (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/svcid2 (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/check1 (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/check-noname (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/service:svcid2:3 (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/service:svcid2:4 (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/check1_ttl (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/check-noname_ttl (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/service:svcid2:3_ttl (0.00s)
>             --- PASS: TestAgent_AddService/normal/multiple_checks/service:svcid2:4_ttl (0.00s)
>         writer.go:29: 2020-02-23T02:46:11.446Z [INFO]  TestAgent_AddService/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:11.446Z [INFO]  TestAgent_AddService/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:11.446Z [DEBUG] TestAgent_AddService/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.446Z [WARN]  TestAgent_AddService/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.446Z [DEBUG] TestAgent_AddService/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:11.446Z [DEBUG] TestAgent_AddService/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.448Z [INFO]  TestAgent_AddService/normal: Synced service: service=svcid1
>         writer.go:29: 2020-02-23T02:46:11.449Z [WARN]  TestAgent_AddService/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: Stopping server: protocol=DNS address=127.0.0.1:16523 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: Stopping server: protocol=DNS address=127.0.0.1:16523 network=udp
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: Stopping server: protocol=HTTP address=127.0.0.1:16524 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:11.451Z [INFO]  TestAgent_AddService/normal: Endpoints down
> === RUN   TestAgent_AddServices_AliasUpdateCheckNotReverted
> === RUN   TestAgent_AddServices_AliasUpdateCheckNotReverted/normal
> === PAUSE TestAgent_AddServices_AliasUpdateCheckNotReverted/normal
> === RUN   TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager
> === PAUSE TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager
> === CONT  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal
> === CONT  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager
> --- PASS: TestAgent_AddServices_AliasUpdateCheckNotReverted (0.00s)
>     --- PASS: TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager (0.37s)
>         writer.go:29: 2020-02-23T02:46:11.464Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:11.465Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:11.465Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:11.475Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9f69ba00-cbfd-ec8e-ba15-af0a79ef8994 Address:127.0.0.1:16540}]"
>         writer.go:29: 2020-02-23T02:46:11.475Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16540 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:11.476Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.476Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.476Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:11.476Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16540) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:11.476Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Started DNS server: address=127.0.0.1:16535 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.477Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Started DNS server: address=127.0.0.1:16535 network=udp
>         writer.go:29: 2020-02-23T02:46:11.477Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Started HTTP server: address=127.0.0.1:16536 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.477Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:11.529Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:11.530Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16540 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:11.533Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:11.533Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: vote granted: from=9f69ba00-cbfd-ec8e-ba15-af0a79ef8994 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:11.533Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:11.533Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16540 [Leader]"
>         writer.go:29: 2020-02-23T02:46:11.533Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:11.533Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:11.540Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:11.549Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:11.549Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.549Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:11.549Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:11.789Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:11.792Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:11.818Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:11.818Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:11.818Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.818Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.818Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.820Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.821Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:11.821Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:11.821Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:11.821Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16535 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.821Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16535 network=udp
>         writer.go:29: 2020-02-23T02:46:11.821Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16536 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.822Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:11.822Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/service_manager: Endpoints down
>     --- PASS: TestAgent_AddServices_AliasUpdateCheckNotReverted/normal (0.46s)
>         writer.go:29: 2020-02-23T02:46:11.467Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:11.467Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:11.467Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:11.478Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0d87d8c9-2769-5498-1233-de8166af9bf5 Address:127.0.0.1:16534}]"
>         writer.go:29: 2020-02-23T02:46:11.479Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.479Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16534 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:11.479Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.479Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16534) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:11.480Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:11.480Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Started DNS server: address=127.0.0.1:16529 network=udp
>         writer.go:29: 2020-02-23T02:46:11.480Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Started DNS server: address=127.0.0.1:16529 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.480Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Started HTTP server: address=127.0.0.1:16530 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.480Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:11.534Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:11.534Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16534 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:11.537Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:11.537Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: vote granted: from=0d87d8c9-2769-5498-1233-de8166af9bf5 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:11.537Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:11.537Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16534 [Leader]"
>         writer.go:29: 2020-02-23T02:46:11.537Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:11.537Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:11.545Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:11.554Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:11.554Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.554Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:11.554Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:11.777Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:11.780Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:11.908Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:11.908Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:11.908Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.908Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.909Z [DEBUG] TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:11.910Z [WARN]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Stopping server: protocol=DNS address=127.0.0.1:16529 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Stopping server: protocol=DNS address=127.0.0.1:16529 network=udp
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Stopping server: protocol=HTTP address=127.0.0.1:16530 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:11.912Z [INFO]  TestAgent_AddServices_AliasUpdateCheckNotReverted/normal: Endpoints down
> === RUN   TestAgent_AddServiceNoExec
> === RUN   TestAgent_AddServiceNoExec/normal
> === PAUSE TestAgent_AddServiceNoExec/normal
> === RUN   TestAgent_AddServiceNoExec/service_manager
> === PAUSE TestAgent_AddServiceNoExec/service_manager
> === CONT  TestAgent_AddServiceNoExec/normal
> === CONT  TestAgent_AddServiceNoExec/service_manager
> --- PASS: TestAgent_AddServiceNoExec (0.00s)
>     --- PASS: TestAgent_AddServiceNoExec/service_manager (0.20s)
>         writer.go:29: 2020-02-23T02:46:11.920Z [WARN]  TestAgent_AddServiceNoExec/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:11.920Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:11.920Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:11.930Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:779adcb1-cd4a-c750-5b65-cd8a65f97cb9 Address:127.0.0.1:16552}]"
>         writer.go:29: 2020-02-23T02:46:11.930Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16552 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:11.931Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.932Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.932Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16552) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:11.932Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:11.932Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Started DNS server: address=127.0.0.1:16547 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.933Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Started DNS server: address=127.0.0.1:16547 network=udp
>         writer.go:29: 2020-02-23T02:46:11.933Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Started HTTP server: address=127.0.0.1:16548 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:11.980Z [WARN]  TestAgent_AddServiceNoExec/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:11.980Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16552 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:11.983Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:11.983Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.server.raft: vote granted: from=779adcb1-cd4a-c750-5b65-cd8a65f97cb9 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:11.983Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:11.983Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16552 [Leader]"
>         writer.go:29: 2020-02-23T02:46:11.983Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:11.983Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:11.994Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:12.002Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:12.002Z [INFO]  TestAgent_AddServiceNoExec/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.002Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:12.002Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:12.113Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:12.113Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:12.113Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.113Z [WARN]  TestAgent_AddServiceNoExec/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.113Z [ERROR] TestAgent_AddServiceNoExec/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:12.113Z [DEBUG] TestAgent_AddServiceNoExec/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.115Z [WARN]  TestAgent_AddServiceNoExec/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.116Z [INFO]  TestAgent_AddServiceNoExec/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16547 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16547 network=udp
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16548 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:12.117Z [INFO]  TestAgent_AddServiceNoExec/service_manager: Endpoints down
>     --- PASS: TestAgent_AddServiceNoExec/normal (0.33s)
>         writer.go:29: 2020-02-23T02:46:11.920Z [WARN]  TestAgent_AddServiceNoExec/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:11.920Z [DEBUG] TestAgent_AddServiceNoExec/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:11.920Z [DEBUG] TestAgent_AddServiceNoExec/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:11.931Z [INFO]  TestAgent_AddServiceNoExec/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c510e74d-a7b3-be3c-1388-f845cc2c4598 Address:127.0.0.1:16546}]"
>         writer.go:29: 2020-02-23T02:46:11.931Z [INFO]  TestAgent_AddServiceNoExec/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16546 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:11.932Z [INFO]  TestAgent_AddServiceNoExec/normal.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.933Z [INFO]  TestAgent_AddServiceNoExec/normal.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/normal.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16546) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/normal.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/normal: Started DNS server: address=127.0.0.1:16541 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/normal: Started DNS server: address=127.0.0.1:16541 network=udp
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/normal: Started HTTP server: address=127.0.0.1:16542 network=tcp
>         writer.go:29: 2020-02-23T02:46:11.934Z [INFO]  TestAgent_AddServiceNoExec/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:11.991Z [WARN]  TestAgent_AddServiceNoExec/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:11.991Z [INFO]  TestAgent_AddServiceNoExec/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16546 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:11.996Z [DEBUG] TestAgent_AddServiceNoExec/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:11.996Z [DEBUG] TestAgent_AddServiceNoExec/normal.server.raft: vote granted: from=c510e74d-a7b3-be3c-1388-f845cc2c4598 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:11.996Z [INFO]  TestAgent_AddServiceNoExec/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:11.996Z [INFO]  TestAgent_AddServiceNoExec/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16546 [Leader]"
>         writer.go:29: 2020-02-23T02:46:11.996Z [INFO]  TestAgent_AddServiceNoExec/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:11.996Z [INFO]  TestAgent_AddServiceNoExec/normal.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:12.004Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:12.012Z [INFO]  TestAgent_AddServiceNoExec/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:12.012Z [INFO]  TestAgent_AddServiceNoExec/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.012Z [DEBUG] TestAgent_AddServiceNoExec/normal.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:12.012Z [INFO]  TestAgent_AddServiceNoExec/normal.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:12.030Z [DEBUG] TestAgent_AddServiceNoExec/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:12.032Z [INFO]  TestAgent_AddServiceNoExec/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:12.032Z [DEBUG] TestAgent_AddServiceNoExec/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:12.243Z [INFO]  TestAgent_AddServiceNoExec/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:12.243Z [INFO]  TestAgent_AddServiceNoExec/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:12.243Z [DEBUG] TestAgent_AddServiceNoExec/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.243Z [WARN]  TestAgent_AddServiceNoExec/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.243Z [DEBUG] TestAgent_AddServiceNoExec/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.245Z [WARN]  TestAgent_AddServiceNoExec/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: Stopping server: protocol=DNS address=127.0.0.1:16541 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: Stopping server: protocol=DNS address=127.0.0.1:16541 network=udp
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: Stopping server: protocol=HTTP address=127.0.0.1:16542 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:12.247Z [INFO]  TestAgent_AddServiceNoExec/normal: Endpoints down
> === RUN   TestAgent_AddServiceNoRemoteExec
> === RUN   TestAgent_AddServiceNoRemoteExec/normal
> === PAUSE TestAgent_AddServiceNoRemoteExec/normal
> === RUN   TestAgent_AddServiceNoRemoteExec/service_manager
> === PAUSE TestAgent_AddServiceNoRemoteExec/service_manager
> === CONT  TestAgent_AddServiceNoRemoteExec/normal
> === CONT  TestAgent_AddServiceNoRemoteExec/service_manager
> --- PASS: TestAgent_AddServiceNoRemoteExec (0.00s)
>     --- PASS: TestAgent_AddServiceNoRemoteExec/service_manager (0.15s)
>         writer.go:29: 2020-02-23T02:46:12.293Z [WARN]  TestAgent_AddServiceNoRemoteExec/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:12.293Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:12.294Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:05d4faba-b617-21f2-834f-9d64889098da Address:127.0.0.1:16564}]"
>         writer.go:29: 2020-02-23T02:46:12.306Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:12.306Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:12.306Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Started DNS server: address=127.0.0.1:16559 network=udp
>         writer.go:29: 2020-02-23T02:46:12.306Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16564 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:12.307Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16564) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:12.307Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:12.307Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Started DNS server: address=127.0.0.1:16559 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.307Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Started HTTP server: address=127.0.0.1:16560 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.307Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:12.374Z [WARN]  TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:12.374Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16564 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:12.377Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:12.377Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: vote granted: from=05d4faba-b617-21f2-834f-9d64889098da term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:12.377Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:12.377Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16564 [Leader]"
>         writer.go:29: 2020-02-23T02:46:12.377Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:12.377Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:12.384Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:12.392Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:12.392Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.392Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:12.392Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:12.403Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:12.403Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:12.403Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.403Z [WARN]  TestAgent_AddServiceNoRemoteExec/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.404Z [ERROR] TestAgent_AddServiceNoRemoteExec/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:12.404Z [DEBUG] TestAgent_AddServiceNoRemoteExec/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.405Z [WARN]  TestAgent_AddServiceNoRemoteExec/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16559 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16559 network=udp
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16560 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:12.407Z [INFO]  TestAgent_AddServiceNoRemoteExec/service_manager: Endpoints down
>     --- PASS: TestAgent_AddServiceNoRemoteExec/normal (0.48s)
>         writer.go:29: 2020-02-23T02:46:12.291Z [WARN]  TestAgent_AddServiceNoRemoteExec/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:12.292Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:12.292Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:12.303Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a157af4a-bb5a-a02b-db28-e7dc5d353f3f Address:127.0.0.1:16558}]"
>         writer.go:29: 2020-02-23T02:46:12.304Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:12.304Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:12.304Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Started DNS server: address=127.0.0.1:16553 network=udp
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16558 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16558) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Started DNS server: address=127.0.0.1:16553 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Started HTTP server: address=127.0.0.1:16554 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.305Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:12.341Z [WARN]  TestAgent_AddServiceNoRemoteExec/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:12.341Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16558 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:12.345Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:12.345Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.server.raft: vote granted: from=a157af4a-bb5a-a02b-db28-e7dc5d353f3f term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:12.345Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:12.345Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16558 [Leader]"
>         writer.go:29: 2020-02-23T02:46:12.345Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:12.345Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:12.352Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:12.360Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:12.360Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.360Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:12.360Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:12.470Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:12.472Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:12.723Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:12.723Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:12.723Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.723Z [WARN]  TestAgent_AddServiceNoRemoteExec/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.723Z [DEBUG] TestAgent_AddServiceNoRemoteExec/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:12.725Z [WARN]  TestAgent_AddServiceNoRemoteExec/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Stopping server: protocol=DNS address=127.0.0.1:16553 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Stopping server: protocol=DNS address=127.0.0.1:16553 network=udp
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Stopping server: protocol=HTTP address=127.0.0.1:16554 network=tcp
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:12.727Z [INFO]  TestAgent_AddServiceNoRemoteExec/normal: Endpoints down
> === RUN   TestAddServiceIPv4TaggedDefault
> --- PASS: TestAddServiceIPv4TaggedDefault (0.24s)
>     writer.go:29: 2020-02-23T02:46:12.734Z [WARN]  TestAddServiceIPv4TaggedDefault: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:12.734Z [DEBUG] TestAddServiceIPv4TaggedDefault.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:12.734Z [DEBUG] TestAddServiceIPv4TaggedDefault.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:12.755Z [INFO]  TestAddServiceIPv4TaggedDefault.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:69d31aca-37f8-1648-95c1-807fe6331a72 Address:127.0.0.1:16570}]"
>     writer.go:29: 2020-02-23T02:46:12.755Z [INFO]  TestAddServiceIPv4TaggedDefault.server.serf.wan: serf: EventMemberJoin: Node-69d31aca-37f8-1648-95c1-807fe6331a72.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault.server.serf.lan: serf: EventMemberJoin: Node-69d31aca-37f8-1648-95c1-807fe6331a72 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault: Started DNS server: address=127.0.0.1:16565 network=udp
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault.server.raft: entering follower state: follower="Node at 127.0.0.1:16570 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault.server: Adding LAN server: server="Node-69d31aca-37f8-1648-95c1-807fe6331a72 (Addr: tcp/127.0.0.1:16570) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault.server: Handled event for server in area: event=member-join server=Node-69d31aca-37f8-1648-95c1-807fe6331a72.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault: Started DNS server: address=127.0.0.1:16565 network=tcp
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault: Started HTTP server: address=127.0.0.1:16566 network=tcp
>     writer.go:29: 2020-02-23T02:46:12.756Z [INFO]  TestAddServiceIPv4TaggedDefault: started state syncer
>     writer.go:29: 2020-02-23T02:46:12.809Z [WARN]  TestAddServiceIPv4TaggedDefault.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:12.809Z [INFO]  TestAddServiceIPv4TaggedDefault.server.raft: entering candidate state: node="Node at 127.0.0.1:16570 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:12.812Z [DEBUG] TestAddServiceIPv4TaggedDefault.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:12.812Z [DEBUG] TestAddServiceIPv4TaggedDefault.server.raft: vote granted: from=69d31aca-37f8-1648-95c1-807fe6331a72 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:12.812Z [INFO]  TestAddServiceIPv4TaggedDefault.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:12.812Z [INFO]  TestAddServiceIPv4TaggedDefault.server.raft: entering leader state: leader="Node at 127.0.0.1:16570 [Leader]"
>     writer.go:29: 2020-02-23T02:46:12.812Z [INFO]  TestAddServiceIPv4TaggedDefault.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:12.812Z [INFO]  TestAddServiceIPv4TaggedDefault.server: New leader elected: payload=Node-69d31aca-37f8-1648-95c1-807fe6331a72
>     writer.go:29: 2020-02-23T02:46:12.820Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:12.828Z [INFO]  TestAddServiceIPv4TaggedDefault.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:12.828Z [INFO]  TestAddServiceIPv4TaggedDefault.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:12.828Z [DEBUG] TestAddServiceIPv4TaggedDefault.server: Skipping self join check for node since the cluster is too small: node=Node-69d31aca-37f8-1648-95c1-807fe6331a72
>     writer.go:29: 2020-02-23T02:46:12.828Z [INFO]  TestAddServiceIPv4TaggedDefault.server: member joined, marking health alive: member=Node-69d31aca-37f8-1648-95c1-807fe6331a72
>     writer.go:29: 2020-02-23T02:46:12.954Z [WARN]  TestAddServiceIPv4TaggedDefault: Service name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: service=my_service
>     writer.go:29: 2020-02-23T02:46:12.954Z [INFO]  TestAddServiceIPv4TaggedDefault: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:12.954Z [INFO]  TestAddServiceIPv4TaggedDefault.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:12.954Z [DEBUG] TestAddServiceIPv4TaggedDefault.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:12.954Z [WARN]  TestAddServiceIPv4TaggedDefault.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:12.954Z [ERROR] TestAddServiceIPv4TaggedDefault.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:12.954Z [DEBUG] TestAddServiceIPv4TaggedDefault.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:12.958Z [WARN]  TestAddServiceIPv4TaggedDefault.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: consul server down
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: shutdown complete
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: Stopping server: protocol=DNS address=127.0.0.1:16565 network=tcp
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: Stopping server: protocol=DNS address=127.0.0.1:16565 network=udp
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: Stopping server: protocol=HTTP address=127.0.0.1:16566 network=tcp
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:12.968Z [INFO]  TestAddServiceIPv4TaggedDefault: Endpoints down
> === RUN   TestAddServiceIPv6TaggedDefault
> --- PASS: TestAddServiceIPv6TaggedDefault (0.49s)
>     writer.go:29: 2020-02-23T02:46:12.976Z [WARN]  TestAddServiceIPv6TaggedDefault: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:12.977Z [DEBUG] TestAddServiceIPv6TaggedDefault.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:12.977Z [DEBUG] TestAddServiceIPv6TaggedDefault.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:13.023Z [INFO]  TestAddServiceIPv6TaggedDefault.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b6f13d9d-f6cb-1ec1-82d3-18f86074d608 Address:127.0.0.1:16576}]"
>     writer.go:29: 2020-02-23T02:46:13.023Z [INFO]  TestAddServiceIPv6TaggedDefault.server.raft: entering follower state: follower="Node at 127.0.0.1:16576 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:13.024Z [INFO]  TestAddServiceIPv6TaggedDefault.server.serf.wan: serf: EventMemberJoin: Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:13.024Z [INFO]  TestAddServiceIPv6TaggedDefault.server.serf.lan: serf: EventMemberJoin: Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:13.024Z [INFO]  TestAddServiceIPv6TaggedDefault: Started DNS server: address=127.0.0.1:16571 network=udp
>     writer.go:29: 2020-02-23T02:46:13.024Z [INFO]  TestAddServiceIPv6TaggedDefault.server: Adding LAN server: server="Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608 (Addr: tcp/127.0.0.1:16576) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:13.024Z [INFO]  TestAddServiceIPv6TaggedDefault.server: Handled event for server in area: event=member-join server=Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:13.025Z [INFO]  TestAddServiceIPv6TaggedDefault: Started DNS server: address=127.0.0.1:16571 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.025Z [INFO]  TestAddServiceIPv6TaggedDefault: Started HTTP server: address=127.0.0.1:16572 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.025Z [INFO]  TestAddServiceIPv6TaggedDefault: started state syncer
>     writer.go:29: 2020-02-23T02:46:13.079Z [WARN]  TestAddServiceIPv6TaggedDefault.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:13.079Z [INFO]  TestAddServiceIPv6TaggedDefault.server.raft: entering candidate state: node="Node at 127.0.0.1:16576 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:13.162Z [DEBUG] TestAddServiceIPv6TaggedDefault.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:13.162Z [DEBUG] TestAddServiceIPv6TaggedDefault.server.raft: vote granted: from=b6f13d9d-f6cb-1ec1-82d3-18f86074d608 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:13.162Z [INFO]  TestAddServiceIPv6TaggedDefault.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:13.162Z [INFO]  TestAddServiceIPv6TaggedDefault.server.raft: entering leader state: leader="Node at 127.0.0.1:16576 [Leader]"
>     writer.go:29: 2020-02-23T02:46:13.162Z [INFO]  TestAddServiceIPv6TaggedDefault.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:13.162Z [INFO]  TestAddServiceIPv6TaggedDefault.server: New leader elected: payload=Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608
>     writer.go:29: 2020-02-23T02:46:13.171Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:13.179Z [INFO]  TestAddServiceIPv6TaggedDefault.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:13.180Z [INFO]  TestAddServiceIPv6TaggedDefault.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.180Z [DEBUG] TestAddServiceIPv6TaggedDefault.server: Skipping self join check for node since the cluster is too small: node=Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608
>     writer.go:29: 2020-02-23T02:46:13.180Z [INFO]  TestAddServiceIPv6TaggedDefault.server: member joined, marking health alive: member=Node-b6f13d9d-f6cb-1ec1-82d3-18f86074d608
>     writer.go:29: 2020-02-23T02:46:13.320Z [DEBUG] TestAddServiceIPv6TaggedDefault: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:13.337Z [INFO]  TestAddServiceIPv6TaggedDefault: Synced node info
>     writer.go:29: 2020-02-23T02:46:13.337Z [DEBUG] TestAddServiceIPv6TaggedDefault: Node info in sync
>     writer.go:29: 2020-02-23T02:46:13.455Z [WARN]  TestAddServiceIPv6TaggedDefault: Service name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: service=my_service
>     writer.go:29: 2020-02-23T02:46:13.456Z [INFO]  TestAddServiceIPv6TaggedDefault: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:13.456Z [INFO]  TestAddServiceIPv6TaggedDefault.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:13.456Z [DEBUG] TestAddServiceIPv6TaggedDefault.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.456Z [WARN]  TestAddServiceIPv6TaggedDefault.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:13.456Z [DEBUG] TestAddServiceIPv6TaggedDefault.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.457Z [WARN]  TestAddServiceIPv6TaggedDefault.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:13.459Z [INFO]  TestAddServiceIPv6TaggedDefault.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:13.459Z [INFO]  TestAddServiceIPv6TaggedDefault: consul server down
>     writer.go:29: 2020-02-23T02:46:13.459Z [INFO]  TestAddServiceIPv6TaggedDefault: shutdown complete
>     writer.go:29: 2020-02-23T02:46:13.459Z [INFO]  TestAddServiceIPv6TaggedDefault: Stopping server: protocol=DNS address=127.0.0.1:16571 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.459Z [INFO]  TestAddServiceIPv6TaggedDefault: Stopping server: protocol=DNS address=127.0.0.1:16571 network=udp
>     writer.go:29: 2020-02-23T02:46:13.459Z [INFO]  TestAddServiceIPv6TaggedDefault: Stopping server: protocol=HTTP address=127.0.0.1:16572 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.460Z [INFO]  TestAddServiceIPv6TaggedDefault: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:13.460Z [INFO]  TestAddServiceIPv6TaggedDefault: Endpoints down
> === RUN   TestAddServiceIPv4TaggedSet
> --- PASS: TestAddServiceIPv4TaggedSet (0.27s)
>     writer.go:29: 2020-02-23T02:46:13.468Z [WARN]  TestAddServiceIPv4TaggedSet: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:13.468Z [DEBUG] TestAddServiceIPv4TaggedSet.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:13.468Z [DEBUG] TestAddServiceIPv4TaggedSet.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:13.478Z [INFO]  TestAddServiceIPv4TaggedSet.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8 Address:127.0.0.1:16582}]"
>     writer.go:29: 2020-02-23T02:46:13.478Z [INFO]  TestAddServiceIPv4TaggedSet.server.raft: entering follower state: follower="Node at 127.0.0.1:16582 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:13.479Z [INFO]  TestAddServiceIPv4TaggedSet.server.serf.wan: serf: EventMemberJoin: Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:13.480Z [INFO]  TestAddServiceIPv4TaggedSet.server.serf.lan: serf: EventMemberJoin: Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:13.480Z [INFO]  TestAddServiceIPv4TaggedSet.server: Handled event for server in area: event=member-join server=Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:13.480Z [INFO]  TestAddServiceIPv4TaggedSet.server: Adding LAN server: server="Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8 (Addr: tcp/127.0.0.1:16582) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:13.480Z [INFO]  TestAddServiceIPv4TaggedSet: Started DNS server: address=127.0.0.1:16577 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.480Z [INFO]  TestAddServiceIPv4TaggedSet: Started DNS server: address=127.0.0.1:16577 network=udp
>     writer.go:29: 2020-02-23T02:46:13.481Z [INFO]  TestAddServiceIPv4TaggedSet: Started HTTP server: address=127.0.0.1:16578 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.481Z [INFO]  TestAddServiceIPv4TaggedSet: started state syncer
>     writer.go:29: 2020-02-23T02:46:13.534Z [WARN]  TestAddServiceIPv4TaggedSet.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:13.534Z [INFO]  TestAddServiceIPv4TaggedSet.server.raft: entering candidate state: node="Node at 127.0.0.1:16582 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:13.603Z [DEBUG] TestAddServiceIPv4TaggedSet.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:13.603Z [DEBUG] TestAddServiceIPv4TaggedSet.server.raft: vote granted: from=cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:13.603Z [INFO]  TestAddServiceIPv4TaggedSet.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:13.603Z [INFO]  TestAddServiceIPv4TaggedSet.server.raft: entering leader state: leader="Node at 127.0.0.1:16582 [Leader]"
>     writer.go:29: 2020-02-23T02:46:13.603Z [INFO]  TestAddServiceIPv4TaggedSet.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:13.603Z [INFO]  TestAddServiceIPv4TaggedSet.server: New leader elected: payload=Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8
>     writer.go:29: 2020-02-23T02:46:13.656Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:13.711Z [INFO]  TestAddServiceIPv4TaggedSet.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:13.711Z [INFO]  TestAddServiceIPv4TaggedSet.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.712Z [DEBUG] TestAddServiceIPv4TaggedSet.server: Skipping self join check for node since the cluster is too small: node=Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8
>     writer.go:29: 2020-02-23T02:46:13.712Z [INFO]  TestAddServiceIPv4TaggedSet.server: member joined, marking health alive: member=Node-cfa58e8d-90ef-fe2a-d44c-ac92b6738ab8
>     writer.go:29: 2020-02-23T02:46:13.729Z [WARN]  TestAddServiceIPv4TaggedSet: Service name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: service=my_service
>     writer.go:29: 2020-02-23T02:46:13.729Z [INFO]  TestAddServiceIPv4TaggedSet: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:13.729Z [INFO]  TestAddServiceIPv4TaggedSet.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:13.729Z [DEBUG] TestAddServiceIPv4TaggedSet.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.729Z [WARN]  TestAddServiceIPv4TaggedSet.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:13.730Z [ERROR] TestAddServiceIPv4TaggedSet.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:13.730Z [DEBUG] TestAddServiceIPv4TaggedSet.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.731Z [WARN]  TestAddServiceIPv4TaggedSet.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: consul server down
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: shutdown complete
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: Stopping server: protocol=DNS address=127.0.0.1:16577 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: Stopping server: protocol=DNS address=127.0.0.1:16577 network=udp
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: Stopping server: protocol=HTTP address=127.0.0.1:16578 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:13.733Z [INFO]  TestAddServiceIPv4TaggedSet: Endpoints down
> === RUN   TestAddServiceIPv6TaggedSet
> --- PASS: TestAddServiceIPv6TaggedSet (0.54s)
>     writer.go:29: 2020-02-23T02:46:13.753Z [WARN]  TestAddServiceIPv6TaggedSet: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:13.753Z [DEBUG] TestAddServiceIPv6TaggedSet.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:13.754Z [DEBUG] TestAddServiceIPv6TaggedSet.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:13.834Z [INFO]  TestAddServiceIPv6TaggedSet.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5f144542-29b6-e460-6211-9cf44591386d Address:127.0.0.1:16588}]"
>     writer.go:29: 2020-02-23T02:46:13.834Z [INFO]  TestAddServiceIPv6TaggedSet.server.serf.wan: serf: EventMemberJoin: Node-5f144542-29b6-e460-6211-9cf44591386d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:13.834Z [INFO]  TestAddServiceIPv6TaggedSet.server.serf.lan: serf: EventMemberJoin: Node-5f144542-29b6-e460-6211-9cf44591386d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet: Started DNS server: address=127.0.0.1:16583 network=udp
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet.server.raft: entering follower state: follower="Node at 127.0.0.1:16588 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet.server: Adding LAN server: server="Node-5f144542-29b6-e460-6211-9cf44591386d (Addr: tcp/127.0.0.1:16588) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet.server: Handled event for server in area: event=member-join server=Node-5f144542-29b6-e460-6211-9cf44591386d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet: Started DNS server: address=127.0.0.1:16583 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet: Started HTTP server: address=127.0.0.1:16584 network=tcp
>     writer.go:29: 2020-02-23T02:46:13.835Z [INFO]  TestAddServiceIPv6TaggedSet: started state syncer
>     writer.go:29: 2020-02-23T02:46:13.876Z [WARN]  TestAddServiceIPv6TaggedSet.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:13.876Z [INFO]  TestAddServiceIPv6TaggedSet.server.raft: entering candidate state: node="Node at 127.0.0.1:16588 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:13.879Z [DEBUG] TestAddServiceIPv6TaggedSet.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:13.879Z [DEBUG] TestAddServiceIPv6TaggedSet.server.raft: vote granted: from=5f144542-29b6-e460-6211-9cf44591386d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:13.879Z [INFO]  TestAddServiceIPv6TaggedSet.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:13.879Z [INFO]  TestAddServiceIPv6TaggedSet.server.raft: entering leader state: leader="Node at 127.0.0.1:16588 [Leader]"
>     writer.go:29: 2020-02-23T02:46:13.879Z [INFO]  TestAddServiceIPv6TaggedSet.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:13.879Z [INFO]  TestAddServiceIPv6TaggedSet.server: New leader elected: payload=Node-5f144542-29b6-e460-6211-9cf44591386d
>     writer.go:29: 2020-02-23T02:46:13.886Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:13.894Z [INFO]  TestAddServiceIPv6TaggedSet.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:13.894Z [INFO]  TestAddServiceIPv6TaggedSet.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:13.894Z [DEBUG] TestAddServiceIPv6TaggedSet.server: Skipping self join check for node since the cluster is too small: node=Node-5f144542-29b6-e460-6211-9cf44591386d
>     writer.go:29: 2020-02-23T02:46:13.894Z [INFO]  TestAddServiceIPv6TaggedSet.server: member joined, marking health alive: member=Node-5f144542-29b6-e460-6211-9cf44591386d
>     writer.go:29: 2020-02-23T02:46:13.945Z [DEBUG] TestAddServiceIPv6TaggedSet: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:13.947Z [INFO]  TestAddServiceIPv6TaggedSet: Synced node info
>     writer.go:29: 2020-02-23T02:46:14.269Z [WARN]  TestAddServiceIPv6TaggedSet: Service name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: service=my_service
>     writer.go:29: 2020-02-23T02:46:14.269Z [INFO]  TestAddServiceIPv6TaggedSet: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:14.269Z [INFO]  TestAddServiceIPv6TaggedSet.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:14.269Z [DEBUG] TestAddServiceIPv6TaggedSet.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:14.269Z [WARN]  TestAddServiceIPv6TaggedSet.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:14.269Z [DEBUG] TestAddServiceIPv6TaggedSet.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:14.271Z [WARN]  TestAddServiceIPv6TaggedSet.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: consul server down
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: shutdown complete
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: Stopping server: protocol=DNS address=127.0.0.1:16583 network=tcp
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: Stopping server: protocol=DNS address=127.0.0.1:16583 network=udp
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: Stopping server: protocol=HTTP address=127.0.0.1:16584 network=tcp
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:14.274Z [INFO]  TestAddServiceIPv6TaggedSet: Endpoints down
> === RUN   TestAgent_RemoveService
> === RUN   TestAgent_RemoveService/normal
> === PAUSE TestAgent_RemoveService/normal
> === RUN   TestAgent_RemoveService/service_manager
> === PAUSE TestAgent_RemoveService/service_manager
> === CONT  TestAgent_RemoveService/normal
> === CONT  TestAgent_RemoveService/service_manager
> --- PASS: TestAgent_RemoveService (0.00s)
>     --- PASS: TestAgent_RemoveService/normal (0.17s)
>         writer.go:29: 2020-02-23T02:46:14.285Z [WARN]  TestAgent_RemoveService/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:14.285Z [DEBUG] TestAgent_RemoveService/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:14.286Z [DEBUG] TestAgent_RemoveService/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:14.301Z [INFO]  TestAgent_RemoveService/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f345d4fe-39e3-d531-4e42-d6a91aef00d0 Address:127.0.0.1:16594}]"
>         writer.go:29: 2020-02-23T02:46:14.301Z [INFO]  TestAgent_RemoveService/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16594 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:14.301Z [INFO]  TestAgent_RemoveService/normal.server.serf.wan: serf: EventMemberJoin: Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.301Z [INFO]  TestAgent_RemoveService/normal.server.serf.lan: serf: EventMemberJoin: Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.302Z [INFO]  TestAgent_RemoveService/normal.server: Adding LAN server: server="Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0 (Addr: tcp/127.0.0.1:16594) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:14.302Z [INFO]  TestAgent_RemoveService/normal: Started DNS server: address=127.0.0.1:16589 network=udp
>         writer.go:29: 2020-02-23T02:46:14.302Z [INFO]  TestAgent_RemoveService/normal.server: Handled event for server in area: event=member-join server=Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:14.302Z [INFO]  TestAgent_RemoveService/normal: Started DNS server: address=127.0.0.1:16589 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.302Z [INFO]  TestAgent_RemoveService/normal: Started HTTP server: address=127.0.0.1:16590 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.302Z [INFO]  TestAgent_RemoveService/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:14.357Z [WARN]  TestAgent_RemoveService/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:14.357Z [INFO]  TestAgent_RemoveService/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16594 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:14.362Z [DEBUG] TestAgent_RemoveService/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:14.362Z [DEBUG] TestAgent_RemoveService/normal.server.raft: vote granted: from=f345d4fe-39e3-d531-4e42-d6a91aef00d0 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:14.362Z [INFO]  TestAgent_RemoveService/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:14.362Z [INFO]  TestAgent_RemoveService/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16594 [Leader]"
>         writer.go:29: 2020-02-23T02:46:14.362Z [INFO]  TestAgent_RemoveService/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:14.362Z [INFO]  TestAgent_RemoveService/normal.server: New leader elected: payload=Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0
>         writer.go:29: 2020-02-23T02:46:14.372Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:14.381Z [INFO]  TestAgent_RemoveService/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:14.381Z [INFO]  TestAgent_RemoveService/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.381Z [DEBUG] TestAgent_RemoveService/normal.server: Skipping self join check for node since the cluster is too small: node=Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0
>         writer.go:29: 2020-02-23T02:46:14.381Z [INFO]  TestAgent_RemoveService/normal.server: member joined, marking health alive: member=Node-f345d4fe-39e3-d531-4e42-d6a91aef00d0
>         writer.go:29: 2020-02-23T02:46:14.442Z [WARN]  TestAgent_RemoveService/normal: Failed to deregister service: service=redis error="Service {"redis" {}} does not exist"
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal: removed check: check=service:memcache
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal: removed check: check=check2
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal: removed service: service=memcache
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal: removed check: check=service:redis:1
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal: removed check: check=service:redis:2
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:14.442Z [INFO]  TestAgent_RemoveService/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:14.442Z [INFO]  TestAgent_RemoveService/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.442Z [WARN]  TestAgent_RemoveService/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.442Z [DEBUG] TestAgent_RemoveService/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.442Z [ERROR] TestAgent_RemoveService/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:14.444Z [WARN]  TestAgent_RemoveService/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal: Stopping server: protocol=DNS address=127.0.0.1:16589 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal: Stopping server: protocol=DNS address=127.0.0.1:16589 network=udp
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal: Stopping server: protocol=HTTP address=127.0.0.1:16590 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.445Z [INFO]  TestAgent_RemoveService/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:14.446Z [INFO]  TestAgent_RemoveService/normal: Endpoints down
>     --- PASS: TestAgent_RemoveService/service_manager (0.22s)
>         writer.go:29: 2020-02-23T02:46:14.284Z [WARN]  TestAgent_RemoveService/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:14.284Z [DEBUG] TestAgent_RemoveService/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:14.285Z [DEBUG] TestAgent_RemoveService/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:14.298Z [INFO]  TestAgent_RemoveService/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5 Address:127.0.0.1:16600}]"
>         writer.go:29: 2020-02-23T02:46:14.298Z [INFO]  TestAgent_RemoveService/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16600 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:14.298Z [INFO]  TestAgent_RemoveService/service_manager.server.serf.wan: serf: EventMemberJoin: Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.298Z [INFO]  TestAgent_RemoveService/service_manager.server.serf.lan: serf: EventMemberJoin: Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.298Z [INFO]  TestAgent_RemoveService/service_manager: Started DNS server: address=127.0.0.1:16595 network=udp
>         writer.go:29: 2020-02-23T02:46:14.299Z [INFO]  TestAgent_RemoveService/service_manager.server: Adding LAN server: server="Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5 (Addr: tcp/127.0.0.1:16600) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:14.299Z [INFO]  TestAgent_RemoveService/service_manager.server: Handled event for server in area: event=member-join server=Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:14.299Z [INFO]  TestAgent_RemoveService/service_manager: Started DNS server: address=127.0.0.1:16595 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.299Z [INFO]  TestAgent_RemoveService/service_manager: Started HTTP server: address=127.0.0.1:16596 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.299Z [INFO]  TestAgent_RemoveService/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:14.354Z [WARN]  TestAgent_RemoveService/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:14.354Z [INFO]  TestAgent_RemoveService/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16600 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:14.357Z [DEBUG] TestAgent_RemoveService/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:14.357Z [DEBUG] TestAgent_RemoveService/service_manager.server.raft: vote granted: from=fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:14.358Z [INFO]  TestAgent_RemoveService/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:14.358Z [INFO]  TestAgent_RemoveService/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16600 [Leader]"
>         writer.go:29: 2020-02-23T02:46:14.358Z [INFO]  TestAgent_RemoveService/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:14.358Z [INFO]  TestAgent_RemoveService/service_manager.server: New leader elected: payload=Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5
>         writer.go:29: 2020-02-23T02:46:14.366Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:14.377Z [INFO]  TestAgent_RemoveService/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:14.377Z [INFO]  TestAgent_RemoveService/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.377Z [DEBUG] TestAgent_RemoveService/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5
>         writer.go:29: 2020-02-23T02:46:14.377Z [INFO]  TestAgent_RemoveService/service_manager.server: member joined, marking health alive: member=Node-fa778a38-f8dd-693f-d59d-a7cfe5b6d9f5
>         writer.go:29: 2020-02-23T02:46:14.456Z [DEBUG] TestAgent_RemoveService/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:14.458Z [INFO]  TestAgent_RemoveService/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:14.458Z [DEBUG] TestAgent_RemoveService/service_manager: Node info in sync
>         writer.go:29: 2020-02-23T02:46:14.493Z [WARN]  TestAgent_RemoveService/service_manager: Failed to deregister service: service=redis error="Service {"redis" {}} does not exist"
>         writer.go:29: 2020-02-23T02:46:14.493Z [DEBUG] TestAgent_RemoveService/service_manager: removed check: check=service:memcache
>         writer.go:29: 2020-02-23T02:46:14.493Z [DEBUG] TestAgent_RemoveService/service_manager: removed check: check=check2
>         writer.go:29: 2020-02-23T02:46:14.493Z [DEBUG] TestAgent_RemoveService/service_manager: removed service: service=memcache
>         writer.go:29: 2020-02-23T02:46:14.493Z [DEBUG] TestAgent_RemoveService/service_manager: removed check: check=service:redis:1
>         writer.go:29: 2020-02-23T02:46:14.494Z [DEBUG] TestAgent_RemoveService/service_manager: removed check: check=service:redis:2
>         writer.go:29: 2020-02-23T02:46:14.494Z [DEBUG] TestAgent_RemoveService/service_manager: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:14.494Z [INFO]  TestAgent_RemoveService/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:14.494Z [INFO]  TestAgent_RemoveService/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:14.494Z [DEBUG] TestAgent_RemoveService/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.494Z [WARN]  TestAgent_RemoveService/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.494Z [DEBUG] TestAgent_RemoveService/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.495Z [WARN]  TestAgent_RemoveService/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16595 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16595 network=udp
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16596 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:14.497Z [INFO]  TestAgent_RemoveService/service_manager: Endpoints down
> === RUN   TestAgent_RemoveServiceRemovesAllChecks
> === RUN   TestAgent_RemoveServiceRemovesAllChecks/normal
> === PAUSE TestAgent_RemoveServiceRemovesAllChecks/normal
> === RUN   TestAgent_RemoveServiceRemovesAllChecks/service_manager
> === PAUSE TestAgent_RemoveServiceRemovesAllChecks/service_manager
> === CONT  TestAgent_RemoveServiceRemovesAllChecks/normal
> === CONT  TestAgent_RemoveServiceRemovesAllChecks/service_manager
> --- PASS: TestAgent_RemoveServiceRemovesAllChecks (0.00s)
>     --- PASS: TestAgent_RemoveServiceRemovesAllChecks/service_manager (0.35s)
>         writer.go:29: 2020-02-23T02:46:14.505Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:14.505Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:14.506Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:14.515Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:06b78fe5-46a2-1985-0005-ae290658bfb9 Address:127.0.0.1:16612}]"
>         writer.go:29: 2020-02-23T02:46:14.515Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16612 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:14.515Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16612) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Started DNS server: address=127.0.0.1:16607 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Started DNS server: address=127.0.0.1:16607 network=udp
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Started HTTP server: address=127.0.0.1:16608 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:14.558Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:14.558Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16612 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:14.562Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:14.562Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: vote granted: from=06b78fe5-46a2-1985-0005-ae290658bfb9 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:14.562Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:14.562Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16612 [Leader]"
>         writer.go:29: 2020-02-23T02:46:14.562Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:14.562Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:14.569Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:14.577Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:14.577Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.577Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:14.577Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:14.841Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager: removed check: check=chk1
>         writer.go:29: 2020-02-23T02:46:14.841Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager: removed check: check=chk2
>         writer.go:29: 2020-02-23T02:46:14.841Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:14.841Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:14.841Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:14.841Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.841Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.841Z [ERROR] TestAgent_RemoveServiceRemovesAllChecks/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:14.841Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.843Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16607 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16607 network=udp
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16608 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:14.845Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/service_manager: Endpoints down
>     --- PASS: TestAgent_RemoveServiceRemovesAllChecks/normal (0.45s)
>         writer.go:29: 2020-02-23T02:46:14.505Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:14.505Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:14.506Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:14.515Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4ab19c47-5512-8ab5-bd68-e060716086d0 Address:127.0.0.1:16606}]"
>         writer.go:29: 2020-02-23T02:46:14.515Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16606 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.serf.wan: serf: EventMemberJoin: node1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.516Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.serf.lan: serf: EventMemberJoin: node1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server: Adding LAN server: server="node1 (Addr: tcp/127.0.0.1:16606) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server: Handled event for server in area: event=member-join server=node1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Started DNS server: address=127.0.0.1:16601 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Started DNS server: address=127.0.0.1:16601 network=udp
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Started HTTP server: address=127.0.0.1:16602 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.517Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:14.582Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:14.582Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16606 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:14.585Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:14.585Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: vote granted: from=4ab19c47-5512-8ab5-bd68-e060716086d0 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:14.585Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:14.585Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16606 [Leader]"
>         writer.go:29: 2020-02-23T02:46:14.586Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:14.586Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server: New leader elected: payload=node1
>         writer.go:29: 2020-02-23T02:46:14.595Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:14.611Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:14.611Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.612Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.server: Skipping self join check for node since the cluster is too small: node=node1
>         writer.go:29: 2020-02-23T02:46:14.612Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server: member joined, marking health alive: member=node1
>         writer.go:29: 2020-02-23T02:46:14.717Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:14.721Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:14.946Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal: removed check: check=chk1
>         writer.go:29: 2020-02-23T02:46:14.946Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal: removed check: check=chk2
>         writer.go:29: 2020-02-23T02:46:14.946Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:14.946Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:14.946Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:14.946Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.946Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.946Z [DEBUG] TestAgent_RemoveServiceRemovesAllChecks/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:14.948Z [WARN]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Stopping server: protocol=DNS address=127.0.0.1:16601 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Stopping server: protocol=DNS address=127.0.0.1:16601 network=udp
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Stopping server: protocol=HTTP address=127.0.0.1:16602 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:14.950Z [INFO]  TestAgent_RemoveServiceRemovesAllChecks/normal: Endpoints down
> === RUN   TestAgent_IndexChurn
> === PAUSE TestAgent_IndexChurn
> === RUN   TestAgent_AddCheck
> === PAUSE TestAgent_AddCheck
> === RUN   TestAgent_AddCheck_StartPassing
> === PAUSE TestAgent_AddCheck_StartPassing
> === RUN   TestAgent_AddCheck_MinInterval
> === PAUSE TestAgent_AddCheck_MinInterval
> === RUN   TestAgent_AddCheck_MissingService
> === PAUSE TestAgent_AddCheck_MissingService
> === RUN   TestAgent_AddCheck_RestoreState
> === PAUSE TestAgent_AddCheck_RestoreState
> === RUN   TestAgent_AddCheck_ExecDisable
> === PAUSE TestAgent_AddCheck_ExecDisable
> === RUN   TestAgent_AddCheck_ExecRemoteDisable
> === PAUSE TestAgent_AddCheck_ExecRemoteDisable
> === RUN   TestAgent_AddCheck_GRPC
> === PAUSE TestAgent_AddCheck_GRPC
> === RUN   TestAgent_RestoreServiceWithAliasCheck
> --- SKIP: TestAgent_RestoreServiceWithAliasCheck (0.00s)
>     agent_test.go:1412: skipping slow test; set SLOWTEST=1 to run
> === RUN   TestAgent_AddCheck_Alias
> === PAUSE TestAgent_AddCheck_Alias
> === RUN   TestAgent_AddCheck_Alias_setToken
> === PAUSE TestAgent_AddCheck_Alias_setToken
> === RUN   TestAgent_AddCheck_Alias_userToken
> === PAUSE TestAgent_AddCheck_Alias_userToken
> === RUN   TestAgent_AddCheck_Alias_userAndSetToken
> === PAUSE TestAgent_AddCheck_Alias_userAndSetToken
> === RUN   TestAgent_RemoveCheck
> === PAUSE TestAgent_RemoveCheck
> === RUN   TestAgent_HTTPCheck_TLSSkipVerify
> === PAUSE TestAgent_HTTPCheck_TLSSkipVerify
> === RUN   TestAgent_HTTPCheck_EnableAgentTLSForChecks
> --- SKIP: TestAgent_HTTPCheck_EnableAgentTLSForChecks (0.00s)
>     agent_test.go:1774: DM-skipped
> === RUN   TestAgent_updateTTLCheck
> === PAUSE TestAgent_updateTTLCheck
> === RUN   TestAgent_PersistService
> === RUN   TestAgent_PersistService/normal
> === PAUSE TestAgent_PersistService/normal
> === RUN   TestAgent_PersistService/service_manager
> === PAUSE TestAgent_PersistService/service_manager
> === CONT  TestAgent_PersistService/normal
> === CONT  TestAgent_PersistService/service_manager
> --- PASS: TestAgent_PersistService (0.00s)
>     --- PASS: TestAgent_PersistService/service_manager (0.07s)
>         writer.go:29: 2020-02-23T02:46:14.981Z [DEBUG] TestAgent_PersistService/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:14.983Z [INFO]  TestAgent_PersistService/service_manager.client.serf.lan: serf: EventMemberJoin: Node-4c6036ec-476b-fb5b-8bff-d18ecd4b0352 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.984Z [INFO]  TestAgent_PersistService/service_manager: Started DNS server: address=127.0.0.1:16619 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.984Z [INFO]  TestAgent_PersistService/service_manager: Started DNS server: address=127.0.0.1:16619 network=udp
>         writer.go:29: 2020-02-23T02:46:14.985Z [INFO]  TestAgent_PersistService/service_manager: Started HTTP server: address=127.0.0.1:16620 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.985Z [INFO]  TestAgent_PersistService/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:14.985Z [WARN]  TestAgent_PersistService/service_manager.client.manager: No servers available
>         writer.go:29: 2020-02-23T02:46:14.985Z [ERROR] TestAgent_PersistService/service_manager.anti_entropy: failed to sync remote state: error="No known Consul servers"
>         writer.go:29: 2020-02-23T02:46:14.990Z [INFO]  TestAgent_PersistService/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:14.990Z [INFO]  TestAgent_PersistService/service_manager.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:14.990Z [WARN]  TestAgent_PersistService/service_manager.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:14.990Z [INFO]  TestAgent_PersistService/service_manager.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: consul client down
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16619 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16619 network=udp
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16620 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:14.992Z [INFO]  TestAgent_PersistService/service_manager: Endpoints down
>         writer.go:29: 2020-02-23T02:46:15.002Z [DEBUG] TestAgent_PersistService/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.002Z [INFO]  TestAgent_PersistService/service_manager.client.serf.lan: serf: EventMemberJoin: Node-1d0b8562-6ff2-681c-6d7b-6f6e4dc7203b 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.002Z [INFO]  TestAgent_PersistService/service_manager.client.serf.lan: serf: Attempting re-join to previously known node: Node-4c6036ec-476b-fb5b-8bff-d18ecd4b0352: 127.0.0.1:16622
>         writer.go:29: 2020-02-23T02:46:15.003Z [DEBUG] TestAgent_PersistService/service_manager.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16622: connect: connection refused
>         writer.go:29: 2020-02-23T02:46:15.003Z [WARN]  TestAgent_PersistService/service_manager.client.serf.lan: serf: Failed to re-join any previously known node
>         writer.go:29: 2020-02-23T02:46:15.003Z [DEBUG] TestAgent_PersistService/service_manager: restored service definition from file: service=redis file=/tmp/consul-test/TestAgent_PersistService_service_manager-agent609706653/services/86a1b907d54bf7010394bf316e183e67
>         writer.go:29: 2020-02-23T02:46:15.003Z [INFO]  TestAgent_PersistService/service_manager: Started DNS server: address=127.0.0.1:16625 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.003Z [INFO]  TestAgent_PersistService/service_manager: Started DNS server: address=127.0.0.1:16625 network=udp
>         writer.go:29: 2020-02-23T02:46:15.004Z [INFO]  TestAgent_PersistService/service_manager: Started HTTP server: address=127.0.0.1:16626 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.004Z [INFO]  TestAgent_PersistService/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.004Z [INFO]  TestAgent_PersistService/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.004Z [INFO]  TestAgent_PersistService/service_manager.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:15.004Z [WARN]  TestAgent_PersistService/service_manager.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.004Z [WARN]  TestAgent_PersistService/service_manager.client.manager: No servers available
>         writer.go:29: 2020-02-23T02:46:15.004Z [ERROR] TestAgent_PersistService/service_manager.anti_entropy: failed to sync remote state: error="No known Consul servers"
>         writer.go:29: 2020-02-23T02:46:15.004Z [INFO]  TestAgent_PersistService/service_manager.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: consul client down
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16625 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16625 network=udp
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16626 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.023Z [INFO]  TestAgent_PersistService/service_manager: Endpoints down
>     --- PASS: TestAgent_PersistService/normal (0.09s)
>         writer.go:29: 2020-02-23T02:46:14.984Z [DEBUG] TestAgent_PersistService/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:14.985Z [INFO]  TestAgent_PersistService/normal.client.serf.lan: serf: EventMemberJoin: Node-9b581c20-5f60-4e70-a031-5696d6027cb6 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:14.986Z [INFO]  TestAgent_PersistService/normal: Started DNS server: address=127.0.0.1:16613 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.986Z [INFO]  TestAgent_PersistService/normal: Started DNS server: address=127.0.0.1:16613 network=udp
>         writer.go:29: 2020-02-23T02:46:14.987Z [INFO]  TestAgent_PersistService/normal: Started HTTP server: address=127.0.0.1:16614 network=tcp
>         writer.go:29: 2020-02-23T02:46:14.987Z [INFO]  TestAgent_PersistService/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.003Z [INFO]  TestAgent_PersistService/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.003Z [INFO]  TestAgent_PersistService/normal.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:15.004Z [WARN]  TestAgent_PersistService/normal.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.004Z [INFO]  TestAgent_PersistService/normal.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: consul client down
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: Stopping server: protocol=DNS address=127.0.0.1:16613 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: Stopping server: protocol=DNS address=127.0.0.1:16613 network=udp
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: Stopping server: protocol=HTTP address=127.0.0.1:16614 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.006Z [INFO]  TestAgent_PersistService/normal: Endpoints down
>         writer.go:29: 2020-02-23T02:46:15.035Z [DEBUG] TestAgent_PersistService/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.036Z [INFO]  TestAgent_PersistService/normal.client.serf.lan: serf: EventMemberJoin: Node-ec909d8b-9bb5-1c87-22c1-e03b93eabee9 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.036Z [INFO]  TestAgent_PersistService/normal.client.serf.lan: serf: Attempting re-join to previously known node: Node-9b581c20-5f60-4e70-a031-5696d6027cb6: 127.0.0.1:16616
>         writer.go:29: 2020-02-23T02:46:15.036Z [DEBUG] TestAgent_PersistService/normal.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16616: connect: connection refused
>         writer.go:29: 2020-02-23T02:46:15.036Z [WARN]  TestAgent_PersistService/normal.client.serf.lan: serf: Failed to re-join any previously known node
>         writer.go:29: 2020-02-23T02:46:15.036Z [DEBUG] TestAgent_PersistService/normal: restored service definition from file: service=redis file=/tmp/consul-test/TestAgent_PersistService_normal-agent969780406/services/86a1b907d54bf7010394bf316e183e67
>         writer.go:29: 2020-02-23T02:46:15.037Z [INFO]  TestAgent_PersistService/normal: Started DNS server: address=127.0.0.1:16631 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.037Z [INFO]  TestAgent_PersistService/normal: Started DNS server: address=127.0.0.1:16631 network=udp
>         writer.go:29: 2020-02-23T02:46:15.037Z [INFO]  TestAgent_PersistService/normal: Started HTTP server: address=127.0.0.1:16632 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.037Z [INFO]  TestAgent_PersistService/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.037Z [WARN]  TestAgent_PersistService/normal.client.manager: No servers available
>         writer.go:29: 2020-02-23T02:46:15.037Z [ERROR] TestAgent_PersistService/normal.anti_entropy: failed to sync remote state: error="No known Consul servers"
>         writer.go:29: 2020-02-23T02:46:15.038Z [INFO]  TestAgent_PersistService/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.038Z [INFO]  TestAgent_PersistService/normal.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:15.038Z [WARN]  TestAgent_PersistService/normal.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.038Z [INFO]  TestAgent_PersistService/normal.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: consul client down
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: Stopping server: protocol=DNS address=127.0.0.1:16631 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: Stopping server: protocol=DNS address=127.0.0.1:16631 network=udp
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: Stopping server: protocol=HTTP address=127.0.0.1:16632 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.040Z [INFO]  TestAgent_PersistService/normal: Endpoints down
> === RUN   TestAgent_persistedService_compat
> === RUN   TestAgent_persistedService_compat/normal
> === PAUSE TestAgent_persistedService_compat/normal
> === RUN   TestAgent_persistedService_compat/service_manager
> === PAUSE TestAgent_persistedService_compat/service_manager
> === CONT  TestAgent_persistedService_compat/normal
> === CONT  TestAgent_persistedService_compat/service_manager
> --- PASS: TestAgent_persistedService_compat (0.00s)
>     --- PASS: TestAgent_persistedService_compat/service_manager (0.48s)
>         writer.go:29: 2020-02-23T02:46:15.056Z [WARN]  TestAgent_persistedService_compat/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:15.056Z [DEBUG] TestAgent_persistedService_compat/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.075Z [DEBUG] TestAgent_persistedService_compat/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:15.086Z [INFO]  TestAgent_persistedService_compat/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2233afaa-4e41-d037-0656-82aa5ff6603e Address:127.0.0.1:16648}]"
>         writer.go:29: 2020-02-23T02:46:15.087Z [INFO]  TestAgent_persistedService_compat/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16648 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:15.087Z [INFO]  TestAgent_persistedService_compat/service_manager.server.serf.wan: serf: EventMemberJoin: Node-2233afaa-4e41-d037-0656-82aa5ff6603e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/service_manager.server.serf.lan: serf: EventMemberJoin: Node-2233afaa-4e41-d037-0656-82aa5ff6603e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/service_manager: Started DNS server: address=127.0.0.1:16643 network=udp
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/service_manager.server: Adding LAN server: server="Node-2233afaa-4e41-d037-0656-82aa5ff6603e (Addr: tcp/127.0.0.1:16648) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/service_manager.server: Handled event for server in area: event=member-join server=Node-2233afaa-4e41-d037-0656-82aa5ff6603e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/service_manager: Started DNS server: address=127.0.0.1:16643 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.089Z [INFO]  TestAgent_persistedService_compat/service_manager: Started HTTP server: address=127.0.0.1:16644 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.089Z [INFO]  TestAgent_persistedService_compat/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.124Z [WARN]  TestAgent_persistedService_compat/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:15.124Z [INFO]  TestAgent_persistedService_compat/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16648 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:15.127Z [DEBUG] TestAgent_persistedService_compat/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:15.127Z [DEBUG] TestAgent_persistedService_compat/service_manager.server.raft: vote granted: from=2233afaa-4e41-d037-0656-82aa5ff6603e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:15.127Z [INFO]  TestAgent_persistedService_compat/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:15.127Z [INFO]  TestAgent_persistedService_compat/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16648 [Leader]"
>         writer.go:29: 2020-02-23T02:46:15.127Z [INFO]  TestAgent_persistedService_compat/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:15.127Z [INFO]  TestAgent_persistedService_compat/service_manager.server: New leader elected: payload=Node-2233afaa-4e41-d037-0656-82aa5ff6603e
>         writer.go:29: 2020-02-23T02:46:15.135Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.143Z [DEBUG] TestAgent_persistedService_compat/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-2233afaa-4e41-d037-0656-82aa5ff6603e
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/service_manager.server: member joined, marking health alive: member=Node-2233afaa-4e41-d037-0656-82aa5ff6603e
>         writer.go:29: 2020-02-23T02:46:15.400Z [DEBUG] TestAgent_persistedService_compat/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:15.402Z [INFO]  TestAgent_persistedService_compat/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:15.464Z [DEBUG] TestAgent_persistedService_compat/service_manager: restored service definition from file: service=redis file=/tmp/TestAgent_persistedService_compat_service_manager-agent609115863/services/86a1b907d54bf7010394bf316e183e67
>         writer.go:29: 2020-02-23T02:46:15.464Z [INFO]  TestAgent_persistedService_compat/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.464Z [INFO]  TestAgent_persistedService_compat/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:15.464Z [DEBUG] TestAgent_persistedService_compat/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.464Z [WARN]  TestAgent_persistedService_compat/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.464Z [DEBUG] TestAgent_persistedService_compat/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.497Z [WARN]  TestAgent_persistedService_compat/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.529Z [INFO]  TestAgent_persistedService_compat/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.529Z [INFO]  TestAgent_persistedService_compat/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:15.529Z [INFO]  TestAgent_persistedService_compat/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.529Z [INFO]  TestAgent_persistedService_compat/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16643 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.529Z [INFO]  TestAgent_persistedService_compat/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16643 network=udp
>         writer.go:29: 2020-02-23T02:46:15.530Z [INFO]  TestAgent_persistedService_compat/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16644 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.530Z [INFO]  TestAgent_persistedService_compat/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.530Z [INFO]  TestAgent_persistedService_compat/service_manager: Endpoints down
>     --- PASS: TestAgent_persistedService_compat/normal (0.51s)
>         writer.go:29: 2020-02-23T02:46:15.048Z [WARN]  TestAgent_persistedService_compat/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:15.048Z [DEBUG] TestAgent_persistedService_compat/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.048Z [DEBUG] TestAgent_persistedService_compat/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:15.086Z [INFO]  TestAgent_persistedService_compat/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc Address:127.0.0.1:16642}]"
>         writer.go:29: 2020-02-23T02:46:15.086Z [INFO]  TestAgent_persistedService_compat/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16642 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:15.087Z [INFO]  TestAgent_persistedService_compat/normal.server.serf.wan: serf: EventMemberJoin: Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/normal.server.serf.lan: serf: EventMemberJoin: Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/normal.server: Adding LAN server: server="Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc (Addr: tcp/127.0.0.1:16642) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/normal.server: Handled event for server in area: event=member-join server=Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/normal: Started DNS server: address=127.0.0.1:16637 network=udp
>         writer.go:29: 2020-02-23T02:46:15.088Z [INFO]  TestAgent_persistedService_compat/normal: Started DNS server: address=127.0.0.1:16637 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.089Z [INFO]  TestAgent_persistedService_compat/normal: Started HTTP server: address=127.0.0.1:16638 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.089Z [INFO]  TestAgent_persistedService_compat/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.138Z [WARN]  TestAgent_persistedService_compat/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:15.138Z [INFO]  TestAgent_persistedService_compat/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16642 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:15.142Z [DEBUG] TestAgent_persistedService_compat/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:15.142Z [DEBUG] TestAgent_persistedService_compat/normal.server.raft: vote granted: from=7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16642 [Leader]"
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:15.143Z [INFO]  TestAgent_persistedService_compat/normal.server: New leader elected: payload=Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc
>         writer.go:29: 2020-02-23T02:46:15.200Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:15.208Z [INFO]  TestAgent_persistedService_compat/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:15.208Z [INFO]  TestAgent_persistedService_compat/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.208Z [DEBUG] TestAgent_persistedService_compat/normal.server: Skipping self join check for node since the cluster is too small: node=Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc
>         writer.go:29: 2020-02-23T02:46:15.208Z [INFO]  TestAgent_persistedService_compat/normal.server: member joined, marking health alive: member=Node-7c5fe0e0-03d5-b834-8956-6c8e5b14fdcc
>         writer.go:29: 2020-02-23T02:46:15.373Z [DEBUG] TestAgent_persistedService_compat/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:15.376Z [INFO]  TestAgent_persistedService_compat/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:15.521Z [DEBUG] TestAgent_persistedService_compat/normal: restored service definition from file: service=redis file=/tmp/TestAgent_persistedService_compat_normal-agent481749848/services/86a1b907d54bf7010394bf316e183e67
>         writer.go:29: 2020-02-23T02:46:15.521Z [INFO]  TestAgent_persistedService_compat/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.521Z [INFO]  TestAgent_persistedService_compat/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:15.521Z [DEBUG] TestAgent_persistedService_compat/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.522Z [WARN]  TestAgent_persistedService_compat/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.522Z [DEBUG] TestAgent_persistedService_compat/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.550Z [WARN]  TestAgent_persistedService_compat/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: Stopping server: protocol=DNS address=127.0.0.1:16637 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: Stopping server: protocol=DNS address=127.0.0.1:16637 network=udp
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: Stopping server: protocol=HTTP address=127.0.0.1:16638 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.552Z [INFO]  TestAgent_persistedService_compat/normal: Endpoints down
> === RUN   TestAgent_PurgeService
> === RUN   TestAgent_PurgeService/normal
> === PAUSE TestAgent_PurgeService/normal
> === RUN   TestAgent_PurgeService/service_manager
> === PAUSE TestAgent_PurgeService/service_manager
> === CONT  TestAgent_PurgeService/normal
> === CONT  TestAgent_PurgeService/service_manager
> --- PASS: TestAgent_PurgeService (0.00s)
>     --- PASS: TestAgent_PurgeService/service_manager (0.34s)
>         writer.go:29: 2020-02-23T02:46:15.562Z [WARN]  TestAgent_PurgeService/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:15.562Z [DEBUG] TestAgent_PurgeService/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.563Z [DEBUG] TestAgent_PurgeService/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:15.573Z [INFO]  TestAgent_PurgeService/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9 Address:127.0.0.1:16660}]"
>         writer.go:29: 2020-02-23T02:46:15.573Z [INFO]  TestAgent_PurgeService/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16660 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:15.573Z [INFO]  TestAgent_PurgeService/service_manager.server.serf.wan: serf: EventMemberJoin: Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.574Z [INFO]  TestAgent_PurgeService/service_manager.server.serf.lan: serf: EventMemberJoin: Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.574Z [INFO]  TestAgent_PurgeService/service_manager.server: Adding LAN server: server="Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9 (Addr: tcp/127.0.0.1:16660) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:15.574Z [INFO]  TestAgent_PurgeService/service_manager.server: Handled event for server in area: event=member-join server=Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:15.574Z [INFO]  TestAgent_PurgeService/service_manager: Started DNS server: address=127.0.0.1:16655 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.575Z [INFO]  TestAgent_PurgeService/service_manager: Started DNS server: address=127.0.0.1:16655 network=udp
>         writer.go:29: 2020-02-23T02:46:15.577Z [INFO]  TestAgent_PurgeService/service_manager: Started HTTP server: address=127.0.0.1:16656 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.577Z [INFO]  TestAgent_PurgeService/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.614Z [WARN]  TestAgent_PurgeService/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:15.614Z [INFO]  TestAgent_PurgeService/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16660 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:15.618Z [DEBUG] TestAgent_PurgeService/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:15.618Z [DEBUG] TestAgent_PurgeService/service_manager.server.raft: vote granted: from=4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:15.618Z [INFO]  TestAgent_PurgeService/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:15.618Z [INFO]  TestAgent_PurgeService/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16660 [Leader]"
>         writer.go:29: 2020-02-23T02:46:15.618Z [INFO]  TestAgent_PurgeService/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:15.618Z [INFO]  TestAgent_PurgeService/service_manager.server: New leader elected: payload=Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9
>         writer.go:29: 2020-02-23T02:46:15.626Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:15.635Z [INFO]  TestAgent_PurgeService/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:15.635Z [INFO]  TestAgent_PurgeService/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.635Z [DEBUG] TestAgent_PurgeService/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9
>         writer.go:29: 2020-02-23T02:46:15.635Z [INFO]  TestAgent_PurgeService/service_manager.server: member joined, marking health alive: member=Node-4a8a4bd9-d24a-a5a8-66d4-5b6bacc74fa9
>         writer.go:29: 2020-02-23T02:46:15.890Z [DEBUG] TestAgent_PurgeService/service_manager: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:15.892Z [DEBUG] TestAgent_PurgeService/service_manager: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:15.892Z [INFO]  TestAgent_PurgeService/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.892Z [INFO]  TestAgent_PurgeService/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:15.892Z [DEBUG] TestAgent_PurgeService/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.892Z [WARN]  TestAgent_PurgeService/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.892Z [ERROR] TestAgent_PurgeService/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:15.892Z [DEBUG] TestAgent_PurgeService/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.894Z [WARN]  TestAgent_PurgeService/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16655 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16655 network=udp
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16656 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.896Z [INFO]  TestAgent_PurgeService/service_manager: Endpoints down
>     --- PASS: TestAgent_PurgeService/normal (0.41s)
>         writer.go:29: 2020-02-23T02:46:15.563Z [WARN]  TestAgent_PurgeService/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:15.564Z [DEBUG] TestAgent_PurgeService/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.564Z [DEBUG] TestAgent_PurgeService/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:15.575Z [INFO]  TestAgent_PurgeService/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b7607890-9c3b-aa80-816d-7e12659bb4bb Address:127.0.0.1:16654}]"
>         writer.go:29: 2020-02-23T02:46:15.575Z [INFO]  TestAgent_PurgeService/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16654 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:15.576Z [INFO]  TestAgent_PurgeService/normal.server.serf.wan: serf: EventMemberJoin: Node-b7607890-9c3b-aa80-816d-7e12659bb4bb.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.576Z [INFO]  TestAgent_PurgeService/normal.server.serf.lan: serf: EventMemberJoin: Node-b7607890-9c3b-aa80-816d-7e12659bb4bb 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.576Z [INFO]  TestAgent_PurgeService/normal.server: Handled event for server in area: event=member-join server=Node-b7607890-9c3b-aa80-816d-7e12659bb4bb.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:15.576Z [INFO]  TestAgent_PurgeService/normal.server: Adding LAN server: server="Node-b7607890-9c3b-aa80-816d-7e12659bb4bb (Addr: tcp/127.0.0.1:16654) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:15.577Z [INFO]  TestAgent_PurgeService/normal: Started DNS server: address=127.0.0.1:16649 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.577Z [INFO]  TestAgent_PurgeService/normal: Started DNS server: address=127.0.0.1:16649 network=udp
>         writer.go:29: 2020-02-23T02:46:15.577Z [INFO]  TestAgent_PurgeService/normal: Started HTTP server: address=127.0.0.1:16650 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.577Z [INFO]  TestAgent_PurgeService/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.618Z [WARN]  TestAgent_PurgeService/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:15.618Z [INFO]  TestAgent_PurgeService/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16654 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:15.622Z [DEBUG] TestAgent_PurgeService/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:15.622Z [DEBUG] TestAgent_PurgeService/normal.server.raft: vote granted: from=b7607890-9c3b-aa80-816d-7e12659bb4bb term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:15.622Z [INFO]  TestAgent_PurgeService/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:15.622Z [INFO]  TestAgent_PurgeService/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16654 [Leader]"
>         writer.go:29: 2020-02-23T02:46:15.622Z [INFO]  TestAgent_PurgeService/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:15.622Z [INFO]  TestAgent_PurgeService/normal.server: New leader elected: payload=Node-b7607890-9c3b-aa80-816d-7e12659bb4bb
>         writer.go:29: 2020-02-23T02:46:15.631Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:15.640Z [INFO]  TestAgent_PurgeService/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:15.640Z [INFO]  TestAgent_PurgeService/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.640Z [DEBUG] TestAgent_PurgeService/normal.server: Skipping self join check for node since the cluster is too small: node=Node-b7607890-9c3b-aa80-816d-7e12659bb4bb
>         writer.go:29: 2020-02-23T02:46:15.640Z [INFO]  TestAgent_PurgeService/normal.server: member joined, marking health alive: member=Node-b7607890-9c3b-aa80-816d-7e12659bb4bb
>         writer.go:29: 2020-02-23T02:46:15.904Z [DEBUG] TestAgent_PurgeService/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:15.907Z [INFO]  TestAgent_PurgeService/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:15.953Z [DEBUG] TestAgent_PurgeService/normal: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:15.955Z [DEBUG] TestAgent_PurgeService/normal: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:15.955Z [INFO]  TestAgent_PurgeService/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.955Z [INFO]  TestAgent_PurgeService/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:15.955Z [DEBUG] TestAgent_PurgeService/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.955Z [WARN]  TestAgent_PurgeService/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.955Z [DEBUG] TestAgent_PurgeService/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:15.956Z [WARN]  TestAgent_PurgeService/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: Stopping server: protocol=DNS address=127.0.0.1:16649 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: Stopping server: protocol=DNS address=127.0.0.1:16649 network=udp
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: Stopping server: protocol=HTTP address=127.0.0.1:16650 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.958Z [INFO]  TestAgent_PurgeService/normal: Endpoints down
> === RUN   TestAgent_PurgeServiceOnDuplicate
> === RUN   TestAgent_PurgeServiceOnDuplicate/normal
> === PAUSE TestAgent_PurgeServiceOnDuplicate/normal
> === RUN   TestAgent_PurgeServiceOnDuplicate/service_manager
> === PAUSE TestAgent_PurgeServiceOnDuplicate/service_manager
> === CONT  TestAgent_PurgeServiceOnDuplicate/normal
> === CONT  TestAgent_PurgeServiceOnDuplicate/service_manager
> --- PASS: TestAgent_PurgeServiceOnDuplicate (0.00s)
>     --- PASS: TestAgent_PurgeServiceOnDuplicate/service_manager (0.04s)
>         writer.go:29: 2020-02-23T02:46:15.972Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.974Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager.client.serf.lan: serf: EventMemberJoin: Node-915106b3-76cc-e8c3-22bd-60db48bc4321 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.974Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Started DNS server: address=127.0.0.1:16667 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.974Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Started DNS server: address=127.0.0.1:16667 network=udp
>         writer.go:29: 2020-02-23T02:46:15.976Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Started HTTP server: address=127.0.0.1:16668 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.976Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:15.980Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:15.980Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:15.980Z [WARN]  TestAgent_PurgeServiceOnDuplicate/service_manager.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:15.980Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: consul client down
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16667 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16667 network=udp
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16668 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:15.982Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager: Endpoints down
>         writer.go:29: 2020-02-23T02:46:15.998Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/service_manager-a2.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.999Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.serf.lan: serf: EventMemberJoin: Node-617224e2-2439-fe1b-6c4a-426e9b06529b 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.999Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.serf.lan: serf: Attempting re-join to previously known node: Node-915106b3-76cc-e8c3-22bd-60db48bc4321: 127.0.0.1:16670
>         writer.go:29: 2020-02-23T02:46:15.999Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16670: connect: connection refused
>         writer.go:29: 2020-02-23T02:46:15.999Z [WARN]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.serf.lan: serf: Failed to re-join any previously known node
>         writer.go:29: 2020-02-23T02:46:16.000Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/service_manager-a2: service exists, not restoring from file: service=redis file=/tmp/consul-test/TestAgent_PurgeServiceOnDuplicate_service_manager-agent275204603/services/86a1b907d54bf7010394bf316e183e67
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Started DNS server: address=127.0.0.1:16673 network=udp
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Started DNS server: address=127.0.0.1:16673 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Started HTTP server: address=127.0.0.1:16674 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.001Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.001Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:16.001Z [WARN]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.001Z [WARN]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.manager: No servers available
>         writer.go:29: 2020-02-23T02:46:16.001Z [ERROR] TestAgent_PurgeServiceOnDuplicate/service_manager-a2.anti_entropy: failed to sync remote state: error="No known Consul servers"
>         writer.go:29: 2020-02-23T02:46:16.001Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: consul client down
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Stopping server: protocol=DNS address=127.0.0.1:16673 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Stopping server: protocol=DNS address=127.0.0.1:16673 network=udp
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Stopping server: protocol=HTTP address=127.0.0.1:16674 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/service_manager-a2: Endpoints down
>     --- PASS: TestAgent_PurgeServiceOnDuplicate/normal (0.06s)
>         writer.go:29: 2020-02-23T02:46:15.974Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:15.975Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal.client.serf.lan: serf: EventMemberJoin: Node-383c79c4-7822-e3eb-3e38-10b2d041143d 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:15.979Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Started DNS server: address=127.0.0.1:16661 network=udp
>         writer.go:29: 2020-02-23T02:46:15.979Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Started DNS server: address=127.0.0.1:16661 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.980Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Started HTTP server: address=127.0.0.1:16662 network=tcp
>         writer.go:29: 2020-02-23T02:46:15.980Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:16.000Z [WARN]  TestAgent_PurgeServiceOnDuplicate/normal.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.000Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: consul client down
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Stopping server: protocol=DNS address=127.0.0.1:16661 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Stopping server: protocol=DNS address=127.0.0.1:16661 network=udp
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Stopping server: protocol=HTTP address=127.0.0.1:16662 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.002Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal: Endpoints down
>         writer.go:29: 2020-02-23T02:46:16.011Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/normal-a2.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.011Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client.serf.lan: serf: EventMemberJoin: Node-38678ecc-570d-ca17-916a-9b71efbfc2ff 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.011Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client.serf.lan: serf: Attempting re-join to previously known node: Node-383c79c4-7822-e3eb-3e38-10b2d041143d: 127.0.0.1:16664
>         writer.go:29: 2020-02-23T02:46:16.011Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/normal-a2.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16664: connect: connection refused
>         writer.go:29: 2020-02-23T02:46:16.011Z [WARN]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client.serf.lan: serf: Failed to re-join any previously known node
>         writer.go:29: 2020-02-23T02:46:16.012Z [DEBUG] TestAgent_PurgeServiceOnDuplicate/normal-a2: service exists, not restoring from file: service=redis file=/tmp/consul-test/TestAgent_PurgeServiceOnDuplicate_normal-agent207635980/services/86a1b907d54bf7010394bf316e183e67
>         writer.go:29: 2020-02-23T02:46:16.012Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Started DNS server: address=127.0.0.1:16679 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.012Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Started DNS server: address=127.0.0.1:16679 network=udp
>         writer.go:29: 2020-02-23T02:46:16.012Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Started HTTP server: address=127.0.0.1:16680 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.012Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.012Z [WARN]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client.manager: No servers available
>         writer.go:29: 2020-02-23T02:46:16.012Z [ERROR] TestAgent_PurgeServiceOnDuplicate/normal-a2.anti_entropy: failed to sync remote state: error="No known Consul servers"
>         writer.go:29: 2020-02-23T02:46:16.013Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.013Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client: shutting down client
>         writer.go:29: 2020-02-23T02:46:16.013Z [WARN]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.013Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2.client.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: consul client down
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Stopping server: protocol=DNS address=127.0.0.1:16679 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Stopping server: protocol=DNS address=127.0.0.1:16679 network=udp
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Stopping server: protocol=HTTP address=127.0.0.1:16680 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.015Z [INFO]  TestAgent_PurgeServiceOnDuplicate/normal-a2: Endpoints down
> === RUN   TestAgent_PersistCheck
> === PAUSE TestAgent_PersistCheck
> === RUN   TestAgent_PurgeCheck
> --- SKIP: TestAgent_PurgeCheck (0.00s)
>     agent_test.go:2236: DM-skipped
> === RUN   TestAgent_PurgeCheckOnDuplicate
> === PAUSE TestAgent_PurgeCheckOnDuplicate
> === RUN   TestAgent_loadChecks_token
> === PAUSE TestAgent_loadChecks_token
> === RUN   TestAgent_unloadChecks
> === PAUSE TestAgent_unloadChecks
> === RUN   TestAgent_loadServices_token
> === RUN   TestAgent_loadServices_token/normal
> === PAUSE TestAgent_loadServices_token/normal
> === RUN   TestAgent_loadServices_token/service_manager
> === PAUSE TestAgent_loadServices_token/service_manager
> === CONT  TestAgent_loadServices_token/normal
> === CONT  TestAgent_loadServices_token/service_manager
> --- PASS: TestAgent_loadServices_token (0.00s)
>     --- PASS: TestAgent_loadServices_token/service_manager (0.22s)
>         writer.go:29: 2020-02-23T02:46:16.025Z [WARN]  TestAgent_loadServices_token/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.025Z [DEBUG] TestAgent_loadServices_token/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.025Z [DEBUG] TestAgent_loadServices_token/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.040Z [INFO]  TestAgent_loadServices_token/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d0cdb784-8d0c-b7ee-665b-8498077e697c Address:127.0.0.1:16690}]"
>         writer.go:29: 2020-02-23T02:46:16.041Z [INFO]  TestAgent_loadServices_token/service_manager.server.serf.wan: serf: EventMemberJoin: Node-d0cdb784-8d0c-b7ee-665b-8498077e697c.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.041Z [INFO]  TestAgent_loadServices_token/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16690 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.041Z [INFO]  TestAgent_loadServices_token/service_manager.server.serf.lan: serf: EventMemberJoin: Node-d0cdb784-8d0c-b7ee-665b-8498077e697c 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.042Z [INFO]  TestAgent_loadServices_token/service_manager.server: Adding LAN server: server="Node-d0cdb784-8d0c-b7ee-665b-8498077e697c (Addr: tcp/127.0.0.1:16690) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.042Z [INFO]  TestAgent_loadServices_token/service_manager.server: Handled event for server in area: event=member-join server=Node-d0cdb784-8d0c-b7ee-665b-8498077e697c.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.042Z [INFO]  TestAgent_loadServices_token/service_manager: Started DNS server: address=127.0.0.1:16685 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.042Z [INFO]  TestAgent_loadServices_token/service_manager: Started DNS server: address=127.0.0.1:16685 network=udp
>         writer.go:29: 2020-02-23T02:46:16.042Z [INFO]  TestAgent_loadServices_token/service_manager: Started HTTP server: address=127.0.0.1:16686 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.042Z [INFO]  TestAgent_loadServices_token/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.077Z [WARN]  TestAgent_loadServices_token/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:16.077Z [INFO]  TestAgent_loadServices_token/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16690 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:16.080Z [DEBUG] TestAgent_loadServices_token/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:16.081Z [DEBUG] TestAgent_loadServices_token/service_manager.server.raft: vote granted: from=d0cdb784-8d0c-b7ee-665b-8498077e697c term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:16.081Z [INFO]  TestAgent_loadServices_token/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:16.081Z [INFO]  TestAgent_loadServices_token/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16690 [Leader]"
>         writer.go:29: 2020-02-23T02:46:16.081Z [INFO]  TestAgent_loadServices_token/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:16.081Z [INFO]  TestAgent_loadServices_token/service_manager.server: New leader elected: payload=Node-d0cdb784-8d0c-b7ee-665b-8498077e697c
>         writer.go:29: 2020-02-23T02:46:16.089Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:16.097Z [INFO]  TestAgent_loadServices_token/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:16.097Z [INFO]  TestAgent_loadServices_token/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.097Z [DEBUG] TestAgent_loadServices_token/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-d0cdb784-8d0c-b7ee-665b-8498077e697c
>         writer.go:29: 2020-02-23T02:46:16.097Z [INFO]  TestAgent_loadServices_token/service_manager.server: member joined, marking health alive: member=Node-d0cdb784-8d0c-b7ee-665b-8498077e697c
>         writer.go:29: 2020-02-23T02:46:16.231Z [INFO]  TestAgent_loadServices_token/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.232Z [INFO]  TestAgent_loadServices_token/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:16.232Z [DEBUG] TestAgent_loadServices_token/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.232Z [WARN]  TestAgent_loadServices_token/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.232Z [ERROR] TestAgent_loadServices_token/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:16.232Z [DEBUG] TestAgent_loadServices_token/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.233Z [WARN]  TestAgent_loadServices_token/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16685 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16685 network=udp
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16686 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.235Z [INFO]  TestAgent_loadServices_token/service_manager: Endpoints down
>     --- PASS: TestAgent_loadServices_token/normal (0.30s)
>         writer.go:29: 2020-02-23T02:46:16.032Z [WARN]  TestAgent_loadServices_token/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.032Z [DEBUG] TestAgent_loadServices_token/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.032Z [DEBUG] TestAgent_loadServices_token/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.044Z [INFO]  TestAgent_loadServices_token/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b2832f9f-8f8f-b2c0-757f-ece103136f22 Address:127.0.0.1:16696}]"
>         writer.go:29: 2020-02-23T02:46:16.044Z [INFO]  TestAgent_loadServices_token/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16696 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.045Z [INFO]  TestAgent_loadServices_token/normal.server.serf.wan: serf: EventMemberJoin: Node-b2832f9f-8f8f-b2c0-757f-ece103136f22.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.046Z [INFO]  TestAgent_loadServices_token/normal.server.serf.lan: serf: EventMemberJoin: Node-b2832f9f-8f8f-b2c0-757f-ece103136f22 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.046Z [INFO]  TestAgent_loadServices_token/normal.server: Adding LAN server: server="Node-b2832f9f-8f8f-b2c0-757f-ece103136f22 (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.046Z [INFO]  TestAgent_loadServices_token/normal.server: Handled event for server in area: event=member-join server=Node-b2832f9f-8f8f-b2c0-757f-ece103136f22.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.046Z [INFO]  TestAgent_loadServices_token/normal: Started DNS server: address=127.0.0.1:16691 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.046Z [INFO]  TestAgent_loadServices_token/normal: Started DNS server: address=127.0.0.1:16691 network=udp
>         writer.go:29: 2020-02-23T02:46:16.047Z [INFO]  TestAgent_loadServices_token/normal: Started HTTP server: address=127.0.0.1:16692 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.047Z [INFO]  TestAgent_loadServices_token/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.114Z [WARN]  TestAgent_loadServices_token/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:16.114Z [INFO]  TestAgent_loadServices_token/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16696 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:16.117Z [DEBUG] TestAgent_loadServices_token/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:16.117Z [DEBUG] TestAgent_loadServices_token/normal.server.raft: vote granted: from=b2832f9f-8f8f-b2c0-757f-ece103136f22 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:16.117Z [INFO]  TestAgent_loadServices_token/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:16.117Z [INFO]  TestAgent_loadServices_token/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16696 [Leader]"
>         writer.go:29: 2020-02-23T02:46:16.118Z [INFO]  TestAgent_loadServices_token/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:16.118Z [INFO]  TestAgent_loadServices_token/normal.server: New leader elected: payload=Node-b2832f9f-8f8f-b2c0-757f-ece103136f22
>         writer.go:29: 2020-02-23T02:46:16.124Z [INFO]  TestAgent_loadServices_token/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:16.129Z [INFO]  TestAgent_loadServices_token/normal: Synced service: service=rabbitmq
>         writer.go:29: 2020-02-23T02:46:16.129Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:16.135Z [INFO]  TestAgent_loadServices_token/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:16.135Z [INFO]  TestAgent_loadServices_token/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.135Z [DEBUG] TestAgent_loadServices_token/normal.server: Skipping self join check for node since the cluster is too small: node=Node-b2832f9f-8f8f-b2c0-757f-ece103136f22
>         writer.go:29: 2020-02-23T02:46:16.135Z [INFO]  TestAgent_loadServices_token/normal.server: member joined, marking health alive: member=Node-b2832f9f-8f8f-b2c0-757f-ece103136f22
>         writer.go:29: 2020-02-23T02:46:16.310Z [INFO]  TestAgent_loadServices_token/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.310Z [INFO]  TestAgent_loadServices_token/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:16.310Z [DEBUG] TestAgent_loadServices_token/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.310Z [WARN]  TestAgent_loadServices_token/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.310Z [DEBUG] TestAgent_loadServices_token/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.312Z [WARN]  TestAgent_loadServices_token/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: Stopping server: protocol=DNS address=127.0.0.1:16691 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: Stopping server: protocol=DNS address=127.0.0.1:16691 network=udp
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: Stopping server: protocol=HTTP address=127.0.0.1:16692 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.314Z [INFO]  TestAgent_loadServices_token/normal: Endpoints down
> === RUN   TestAgent_loadServices_sidecar
> === RUN   TestAgent_loadServices_sidecar/normal
> === PAUSE TestAgent_loadServices_sidecar/normal
> === RUN   TestAgent_loadServices_sidecar/service_manager
> === PAUSE TestAgent_loadServices_sidecar/service_manager
> === CONT  TestAgent_loadServices_sidecar/normal
> === CONT  TestAgent_loadServices_sidecar/service_manager
> --- PASS: TestAgent_loadServices_sidecar (0.00s)
>     --- PASS: TestAgent_loadServices_sidecar/service_manager (0.15s)
>         writer.go:29: 2020-02-23T02:46:16.330Z [WARN]  TestAgent_loadServices_sidecar/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.330Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.331Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.351Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4d2789de-789a-d90a-a359-dd4212aa1dff Address:127.0.0.1:16708}]"
>         writer.go:29: 2020-02-23T02:46:16.352Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.serf.wan: serf: EventMemberJoin: Node-4d2789de-789a-d90a-a359-dd4212aa1dff.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.352Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.serf.lan: serf: EventMemberJoin: Node-4d2789de-789a-d90a-a359-dd4212aa1dff 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.353Z [DEBUG] TestAgent_loadServices_sidecar/service_manager: added local registration for service: service=rabbitmq-sidecar-proxy
>         writer.go:29: 2020-02-23T02:46:16.353Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Started DNS server: address=127.0.0.1:16703 network=udp
>         writer.go:29: 2020-02-23T02:46:16.353Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16708 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.353Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server: Adding LAN server: server="Node-4d2789de-789a-d90a-a359-dd4212aa1dff (Addr: tcp/127.0.0.1:16708) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.353Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server: Handled event for server in area: event=member-join server=Node-4d2789de-789a-d90a-a359-dd4212aa1dff.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.354Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Started DNS server: address=127.0.0.1:16703 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.354Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Started HTTP server: address=127.0.0.1:16704 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.354Z [INFO]  TestAgent_loadServices_sidecar/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.394Z [WARN]  TestAgent_loadServices_sidecar/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:16.394Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16708 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:16.398Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:16.398Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.server.raft: vote granted: from=4d2789de-789a-d90a-a359-dd4212aa1dff term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:16.398Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:16.398Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16708 [Leader]"
>         writer.go:29: 2020-02-23T02:46:16.398Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:16.398Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server: New leader elected: payload=Node-4d2789de-789a-d90a-a359-dd4212aa1dff
>         writer.go:29: 2020-02-23T02:46:16.406Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:16.414Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:16.414Z [INFO]  TestAgent_loadServices_sidecar/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.414Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-4d2789de-789a-d90a-a359-dd4212aa1dff
>         writer.go:29: 2020-02-23T02:46:16.414Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server: member joined, marking health alive: member=Node-4d2789de-789a-d90a-a359-dd4212aa1dff
>         writer.go:29: 2020-02-23T02:46:16.468Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.468Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:16.468Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.468Z [WARN]  TestAgent_loadServices_sidecar/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.468Z [ERROR] TestAgent_loadServices_sidecar/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:16.468Z [DEBUG] TestAgent_loadServices_sidecar/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.470Z [WARN]  TestAgent_loadServices_sidecar/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16703 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16703 network=udp
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16704 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.472Z [INFO]  TestAgent_loadServices_sidecar/service_manager: Endpoints down
>     --- PASS: TestAgent_loadServices_sidecar/normal (0.36s)
>         writer.go:29: 2020-02-23T02:46:16.330Z [WARN]  TestAgent_loadServices_sidecar/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.331Z [DEBUG] TestAgent_loadServices_sidecar/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.332Z [DEBUG] TestAgent_loadServices_sidecar/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.372Z [INFO]  TestAgent_loadServices_sidecar/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6dd11748-f690-a2af-b289-41b22bd192a2 Address:127.0.0.1:16702}]"
>         writer.go:29: 2020-02-23T02:46:16.373Z [INFO]  TestAgent_loadServices_sidecar/normal.server.serf.wan: serf: EventMemberJoin: Node-6dd11748-f690-a2af-b289-41b22bd192a2.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.373Z [INFO]  TestAgent_loadServices_sidecar/normal.server.serf.lan: serf: EventMemberJoin: Node-6dd11748-f690-a2af-b289-41b22bd192a2 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal: Started DNS server: address=127.0.0.1:16697 network=udp
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16702 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal.server: Adding LAN server: server="Node-6dd11748-f690-a2af-b289-41b22bd192a2 (Addr: tcp/127.0.0.1:16702) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal.server: Handled event for server in area: event=member-join server=Node-6dd11748-f690-a2af-b289-41b22bd192a2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal: Started DNS server: address=127.0.0.1:16697 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal: Started HTTP server: address=127.0.0.1:16698 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.374Z [INFO]  TestAgent_loadServices_sidecar/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.426Z [WARN]  TestAgent_loadServices_sidecar/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:16.426Z [INFO]  TestAgent_loadServices_sidecar/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16702 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:16.429Z [DEBUG] TestAgent_loadServices_sidecar/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:16.429Z [DEBUG] TestAgent_loadServices_sidecar/normal.server.raft: vote granted: from=6dd11748-f690-a2af-b289-41b22bd192a2 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:16.429Z [INFO]  TestAgent_loadServices_sidecar/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:16.429Z [INFO]  TestAgent_loadServices_sidecar/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16702 [Leader]"
>         writer.go:29: 2020-02-23T02:46:16.429Z [INFO]  TestAgent_loadServices_sidecar/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:16.429Z [INFO]  TestAgent_loadServices_sidecar/normal.server: New leader elected: payload=Node-6dd11748-f690-a2af-b289-41b22bd192a2
>         writer.go:29: 2020-02-23T02:46:16.446Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:16.453Z [INFO]  TestAgent_loadServices_sidecar/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:16.453Z [INFO]  TestAgent_loadServices_sidecar/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.453Z [DEBUG] TestAgent_loadServices_sidecar/normal.server: Skipping self join check for node since the cluster is too small: node=Node-6dd11748-f690-a2af-b289-41b22bd192a2
>         writer.go:29: 2020-02-23T02:46:16.453Z [INFO]  TestAgent_loadServices_sidecar/normal.server: member joined, marking health alive: member=Node-6dd11748-f690-a2af-b289-41b22bd192a2
>         writer.go:29: 2020-02-23T02:46:16.643Z [DEBUG] TestAgent_loadServices_sidecar/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:16.646Z [INFO]  TestAgent_loadServices_sidecar/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:16.649Z [INFO]  TestAgent_loadServices_sidecar/normal: Synced service: service=rabbitmq-sidecar-proxy
>         writer.go:29: 2020-02-23T02:46:16.652Z [INFO]  TestAgent_loadServices_sidecar/normal: Synced service: service=rabbitmq
>         writer.go:29: 2020-02-23T02:46:16.652Z [DEBUG] TestAgent_loadServices_sidecar/normal: Check in sync: check=service:rabbitmq-sidecar-proxy:1
>         writer.go:29: 2020-02-23T02:46:16.652Z [DEBUG] TestAgent_loadServices_sidecar/normal: Check in sync: check=service:rabbitmq-sidecar-proxy:2
>         writer.go:29: 2020-02-23T02:46:16.673Z [INFO]  TestAgent_loadServices_sidecar/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.673Z [INFO]  TestAgent_loadServices_sidecar/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:16.673Z [DEBUG] TestAgent_loadServices_sidecar/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.673Z [WARN]  TestAgent_loadServices_sidecar/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.673Z [DEBUG] TestAgent_loadServices_sidecar/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.675Z [WARN]  TestAgent_loadServices_sidecar/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: Stopping server: protocol=DNS address=127.0.0.1:16697 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: Stopping server: protocol=DNS address=127.0.0.1:16697 network=udp
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: Stopping server: protocol=HTTP address=127.0.0.1:16698 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.677Z [INFO]  TestAgent_loadServices_sidecar/normal: Endpoints down
> === RUN   TestAgent_loadServices_sidecarSeparateToken
> === RUN   TestAgent_loadServices_sidecarSeparateToken/normal
> === PAUSE TestAgent_loadServices_sidecarSeparateToken/normal
> === RUN   TestAgent_loadServices_sidecarSeparateToken/service_manager
> === PAUSE TestAgent_loadServices_sidecarSeparateToken/service_manager
> === CONT  TestAgent_loadServices_sidecarSeparateToken/normal
> === CONT  TestAgent_loadServices_sidecarSeparateToken/service_manager
> --- PASS: TestAgent_loadServices_sidecarSeparateToken (0.00s)
>     --- PASS: TestAgent_loadServices_sidecarSeparateToken/normal (0.18s)
>         writer.go:29: 2020-02-23T02:46:16.693Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.693Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.694Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.705Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:85be1847-b87f-a3f1-6d64-248cd5b0c41a Address:127.0.0.1:16714}]"
>         writer.go:29: 2020-02-23T02:46:16.705Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16714 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.706Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.serf.wan: serf: EventMemberJoin: Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.707Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.serf.lan: serf: EventMemberJoin: Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.707Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server: Handled event for server in area: event=member-join server=Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.707Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server: Adding LAN server: server="Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a (Addr: tcp/127.0.0.1:16714) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.708Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Started DNS server: address=127.0.0.1:16709 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Started DNS server: address=127.0.0.1:16709 network=udp
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Started HTTP server: address=127.0.0.1:16710 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.740Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:16.740Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16714 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:16.743Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:16.744Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: vote granted: from=85be1847-b87f-a3f1-6d64-248cd5b0c41a term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:16.744Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:16.744Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16714 [Leader]"
>         writer.go:29: 2020-02-23T02:46:16.744Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:16.744Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server: New leader elected: payload=Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a
>         writer.go:29: 2020-02-23T02:46:16.751Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:16.759Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:16.759Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.759Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.server: Skipping self join check for node since the cluster is too small: node=Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a
>         writer.go:29: 2020-02-23T02:46:16.759Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server: member joined, marking health alive: member=Node-85be1847-b87f-a3f1-6d64-248cd5b0c41a
>         writer.go:29: 2020-02-23T02:46:16.856Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.856Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:16.856Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.856Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.856Z [ERROR] TestAgent_loadServices_sidecarSeparateToken/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:16.856Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.858Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.859Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.859Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:16.860Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.860Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Stopping server: protocol=DNS address=127.0.0.1:16709 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.860Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Stopping server: protocol=DNS address=127.0.0.1:16709 network=udp
>         writer.go:29: 2020-02-23T02:46:16.860Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Stopping server: protocol=HTTP address=127.0.0.1:16710 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.860Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.860Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/normal: Endpoints down
>     --- PASS: TestAgent_loadServices_sidecarSeparateToken/service_manager (0.25s)
>         writer.go:29: 2020-02-23T02:46:16.694Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.695Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.695Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.705Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1 Address:127.0.0.1:16720}]"
>         writer.go:29: 2020-02-23T02:46:16.705Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16720 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.706Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.serf.wan: serf: EventMemberJoin: Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.707Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.serf.lan: serf: EventMemberJoin: Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.707Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server: Adding LAN server: server="Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1 (Addr: tcp/127.0.0.1:16720) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.707Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server: Handled event for server in area: event=member-join server=Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.708Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager: added local registration for service: service=rabbitmq-sidecar-proxy
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Started DNS server: address=127.0.0.1:16715 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Started DNS server: address=127.0.0.1:16715 network=udp
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Started HTTP server: address=127.0.0.1:16716 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.709Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.762Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:16.762Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16720 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:16.765Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:16.765Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: vote granted: from=b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:16.765Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:16.765Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16720 [Leader]"
>         writer.go:29: 2020-02-23T02:46:16.765Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:16.765Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server: New leader elected: payload=Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1
>         writer.go:29: 2020-02-23T02:46:16.772Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:16.780Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:16.780Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.780Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1
>         writer.go:29: 2020-02-23T02:46:16.780Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server: member joined, marking health alive: member=Node-b4e2f9c0-d74d-deb2-06b5-69a6c38cf6c1
>         writer.go:29: 2020-02-23T02:46:16.920Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:16.920Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:16.921Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.921Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.921Z [ERROR] TestAgent_loadServices_sidecarSeparateToken/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:16.921Z [DEBUG] TestAgent_loadServices_sidecarSeparateToken/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:16.923Z [WARN]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16715 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16715 network=udp
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16716 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:16.924Z [INFO]  TestAgent_loadServices_sidecarSeparateToken/service_manager: Endpoints down
> === RUN   TestAgent_loadServices_sidecarInheritMeta
> === RUN   TestAgent_loadServices_sidecarInheritMeta/normal
> === PAUSE TestAgent_loadServices_sidecarInheritMeta/normal
> === RUN   TestAgent_loadServices_sidecarInheritMeta/service_manager
> === PAUSE TestAgent_loadServices_sidecarInheritMeta/service_manager
> === CONT  TestAgent_loadServices_sidecarInheritMeta/normal
> === CONT  TestAgent_loadServices_sidecarInheritMeta/service_manager
> --- PASS: TestAgent_loadServices_sidecarInheritMeta (0.00s)
>     --- PASS: TestAgent_loadServices_sidecarInheritMeta/normal (0.14s)
>         writer.go:29: 2020-02-23T02:46:16.945Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.945Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.945Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.967Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0c56ddb2-94ea-7f93-b70f-9edf61c38e4c Address:127.0.0.1:16726}]"
>         writer.go:29: 2020-02-23T02:46:16.969Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.serf.wan: serf: EventMemberJoin: Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.975Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16726 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.983Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.serf.lan: serf: EventMemberJoin: Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.983Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server: Handled event for server in area: event=member-join server=Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.986Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Started DNS server: address=127.0.0.1:16721 network=udp
>         writer.go:29: 2020-02-23T02:46:16.987Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Started DNS server: address=127.0.0.1:16721 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.988Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Started HTTP server: address=127.0.0.1:16722 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.988Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:16.988Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server: Adding LAN server: server="Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c (Addr: tcp/127.0.0.1:16726) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:17.032Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:17.032Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16726 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:17.035Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:17.035Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: vote granted: from=0c56ddb2-94ea-7f93-b70f-9edf61c38e4c term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:17.035Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:17.035Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16726 [Leader]"
>         writer.go:29: 2020-02-23T02:46:17.035Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:17.035Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server: New leader elected: payload=Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c
>         writer.go:29: 2020-02-23T02:46:17.042Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:17.050Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:17.050Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.050Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.server: Skipping self join check for node since the cluster is too small: node=Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c
>         writer.go:29: 2020-02-23T02:46:17.050Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server: member joined, marking health alive: member=Node-0c56ddb2-94ea-7f93-b70f-9edf61c38e4c
>         writer.go:29: 2020-02-23T02:46:17.058Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:17.058Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:17.058Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.058Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.058Z [ERROR] TestAgent_loadServices_sidecarInheritMeta/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:17.058Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.060Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Stopping server: protocol=DNS address=127.0.0.1:16721 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Stopping server: protocol=DNS address=127.0.0.1:16721 network=udp
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Stopping server: protocol=HTTP address=127.0.0.1:16722 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:17.062Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/normal: Endpoints down
>     --- PASS: TestAgent_loadServices_sidecarInheritMeta/service_manager (0.18s)
>         writer.go:29: 2020-02-23T02:46:16.943Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:16.944Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:16.944Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:16.956Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:87d8d2c6-fbc7-4b87-d6f7-06441e3b7502 Address:127.0.0.1:16732}]"
>         writer.go:29: 2020-02-23T02:46:16.956Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16732 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:16.961Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.serf.wan: serf: EventMemberJoin: Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.962Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.serf.lan: serf: EventMemberJoin: Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:16.963Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager: added local registration for service: service=rabbitmq-sidecar-proxy
>         writer.go:29: 2020-02-23T02:46:16.964Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Started DNS server: address=127.0.0.1:16727 network=udp
>         writer.go:29: 2020-02-23T02:46:16.967Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server: Adding LAN server: server="Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502 (Addr: tcp/127.0.0.1:16732) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:16.972Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server: Handled event for server in area: event=member-join server=Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:16.977Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Started DNS server: address=127.0.0.1:16727 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.978Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Started HTTP server: address=127.0.0.1:16728 network=tcp
>         writer.go:29: 2020-02-23T02:46:16.978Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:17.004Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:17.004Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16732 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:17.007Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:17.007Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: vote granted: from=87d8d2c6-fbc7-4b87-d6f7-06441e3b7502 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:17.007Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:17.007Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16732 [Leader]"
>         writer.go:29: 2020-02-23T02:46:17.008Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:17.008Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server: New leader elected: payload=Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502
>         writer.go:29: 2020-02-23T02:46:17.016Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:17.025Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:17.025Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.025Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502
>         writer.go:29: 2020-02-23T02:46:17.025Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server: member joined, marking health alive: member=Node-87d8d2c6-fbc7-4b87-d6f7-06441e3b7502
>         writer.go:29: 2020-02-23T02:46:17.102Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:17.102Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:17.102Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.102Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.102Z [ERROR] TestAgent_loadServices_sidecarInheritMeta/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:17.102Z [DEBUG] TestAgent_loadServices_sidecarInheritMeta/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.104Z [WARN]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16727 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16727 network=udp
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16728 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:17.106Z [INFO]  TestAgent_loadServices_sidecarInheritMeta/service_manager: Endpoints down
> === RUN   TestAgent_loadServices_sidecarOverrideMeta
> === RUN   TestAgent_loadServices_sidecarOverrideMeta/normal
> === PAUSE TestAgent_loadServices_sidecarOverrideMeta/normal
> === RUN   TestAgent_loadServices_sidecarOverrideMeta/service_manager
> === PAUSE TestAgent_loadServices_sidecarOverrideMeta/service_manager
> === CONT  TestAgent_loadServices_sidecarOverrideMeta/normal
> === CONT  TestAgent_loadServices_sidecarOverrideMeta/service_manager
> --- PASS: TestAgent_loadServices_sidecarOverrideMeta (0.00s)
>     --- PASS: TestAgent_loadServices_sidecarOverrideMeta/normal (0.39s)
>         writer.go:29: 2020-02-23T02:46:17.130Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:17.130Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:17.131Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:17.149Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fa9dc144-2fe2-d6bd-0cd1-b0173255fd75 Address:127.0.0.1:16738}]"
>         writer.go:29: 2020-02-23T02:46:17.150Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.serf.wan: serf: EventMemberJoin: Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.150Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.serf.lan: serf: EventMemberJoin: Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.150Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Started DNS server: address=127.0.0.1:16733 network=udp
>         writer.go:29: 2020-02-23T02:46:17.150Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16738 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:17.151Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server: Adding LAN server: server="Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75 (Addr: tcp/127.0.0.1:16738) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:17.151Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server: Handled event for server in area: event=member-join server=Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:17.151Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Started DNS server: address=127.0.0.1:16733 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.152Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Started HTTP server: address=127.0.0.1:16734 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.152Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:17.220Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:17.220Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16738 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:17.225Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:17.225Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: vote granted: from=fa9dc144-2fe2-d6bd-0cd1-b0173255fd75 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:17.225Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:17.225Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16738 [Leader]"
>         writer.go:29: 2020-02-23T02:46:17.225Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:17.225Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server: New leader elected: payload=Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75
>         writer.go:29: 2020-02-23T02:46:17.234Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:17.243Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:17.243Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.243Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.server: Skipping self join check for node since the cluster is too small: node=Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75
>         writer.go:29: 2020-02-23T02:46:17.243Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server: member joined, marking health alive: member=Node-fa9dc144-2fe2-d6bd-0cd1-b0173255fd75
>         writer.go:29: 2020-02-23T02:46:17.496Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:17.496Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:17.496Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.496Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.497Z [ERROR] TestAgent_loadServices_sidecarOverrideMeta/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:17.497Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.498Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Stopping server: protocol=DNS address=127.0.0.1:16733 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Stopping server: protocol=DNS address=127.0.0.1:16733 network=udp
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Stopping server: protocol=HTTP address=127.0.0.1:16734 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:17.500Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/normal: Endpoints down
>     --- PASS: TestAgent_loadServices_sidecarOverrideMeta/service_manager (0.44s)
>         writer.go:29: 2020-02-23T02:46:17.129Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:17.129Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:17.130Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:17.141Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6c00df6c-f3c6-8df9-5553-f5408829c541 Address:127.0.0.1:16744}]"
>         writer.go:29: 2020-02-23T02:46:17.142Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.serf.wan: serf: EventMemberJoin: Node-6c00df6c-f3c6-8df9-5553-f5408829c541.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.142Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.serf.lan: serf: EventMemberJoin: Node-6c00df6c-f3c6-8df9-5553-f5408829c541 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.142Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager: added local registration for service: service=rabbitmq-sidecar-proxy
>         writer.go:29: 2020-02-23T02:46:17.143Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16744 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:17.143Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: Adding LAN server: server="Node-6c00df6c-f3c6-8df9-5553-f5408829c541 (Addr: tcp/127.0.0.1:16744) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:17.143Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: Handled event for server in area: event=member-join server=Node-6c00df6c-f3c6-8df9-5553-f5408829c541.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:17.148Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Started DNS server: address=127.0.0.1:16739 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.148Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Started DNS server: address=127.0.0.1:16739 network=udp
>         writer.go:29: 2020-02-23T02:46:17.149Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Started HTTP server: address=127.0.0.1:16740 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.149Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:17.206Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:17.206Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16744 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:17.210Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:17.210Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: vote granted: from=6c00df6c-f3c6-8df9-5553-f5408829c541 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:17.210Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:17.210Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16744 [Leader]"
>         writer.go:29: 2020-02-23T02:46:17.210Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:17.210Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: New leader elected: payload=Node-6c00df6c-f3c6-8df9-5553-f5408829c541
>         writer.go:29: 2020-02-23T02:46:17.218Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:17.225Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:17.225Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.225Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-6c00df6c-f3c6-8df9-5553-f5408829c541
>         writer.go:29: 2020-02-23T02:46:17.226Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: member joined, marking health alive: member=Node-6c00df6c-f3c6-8df9-5553-f5408829c541
>         writer.go:29: 2020-02-23T02:46:17.507Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Check socket connection failed: check=service:rabbitmq-sidecar-proxy:1 error="dial tcp 127.0.0.1:21000: connect: connection refused"
>         writer.go:29: 2020-02-23T02:46:17.507Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Check is now critical: check=service:rabbitmq-sidecar-proxy:1
>         writer.go:29: 2020-02-23T02:46:17.542Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:17.543Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Synced node info
>         writer.go:29: 2020-02-23T02:46:17.545Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Synced service: service=rabbitmq
>         writer.go:29: 2020-02-23T02:46:17.548Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Synced service: service=rabbitmq-sidecar-proxy
>         writer.go:29: 2020-02-23T02:46:17.548Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager: Check in sync: check=service:rabbitmq-sidecar-proxy:1
>         writer.go:29: 2020-02-23T02:46:17.548Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager: Check in sync: check=service:rabbitmq-sidecar-proxy:2
>         writer.go:29: 2020-02-23T02:46:17.548Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:17.548Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:17.548Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.548Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.548Z [DEBUG] TestAgent_loadServices_sidecarOverrideMeta/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.550Z [WARN]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.551Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:17.551Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:17.552Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:17.552Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16739 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.552Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16739 network=udp
>         writer.go:29: 2020-02-23T02:46:17.552Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16740 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.552Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:17.552Z [INFO]  TestAgent_loadServices_sidecarOverrideMeta/service_manager: Endpoints down
> === RUN   TestAgent_unloadServices
> === RUN   TestAgent_unloadServices/normal
> === PAUSE TestAgent_unloadServices/normal
> === RUN   TestAgent_unloadServices/service_manager
> === PAUSE TestAgent_unloadServices/service_manager
> === CONT  TestAgent_unloadServices/normal
> === CONT  TestAgent_unloadServices/service_manager
> --- PASS: TestAgent_unloadServices (0.00s)
>     --- PASS: TestAgent_unloadServices/normal (0.22s)
>         writer.go:29: 2020-02-23T02:46:17.568Z [WARN]  TestAgent_unloadServices/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:17.568Z [DEBUG] TestAgent_unloadServices/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:17.568Z [DEBUG] TestAgent_unloadServices/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:17.597Z [INFO]  TestAgent_unloadServices/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:de797186-b433-5ba7-1feb-f4560298957c Address:127.0.0.1:16750}]"
>         writer.go:29: 2020-02-23T02:46:17.598Z [INFO]  TestAgent_unloadServices/normal.server.serf.wan: serf: EventMemberJoin: Node-de797186-b433-5ba7-1feb-f4560298957c.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.598Z [INFO]  TestAgent_unloadServices/normal.server.serf.lan: serf: EventMemberJoin: Node-de797186-b433-5ba7-1feb-f4560298957c 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.598Z [INFO]  TestAgent_unloadServices/normal: Started DNS server: address=127.0.0.1:16745 network=udp
>         writer.go:29: 2020-02-23T02:46:17.598Z [INFO]  TestAgent_unloadServices/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16750 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:17.599Z [INFO]  TestAgent_unloadServices/normal.server: Adding LAN server: server="Node-de797186-b433-5ba7-1feb-f4560298957c (Addr: tcp/127.0.0.1:16750) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:17.599Z [INFO]  TestAgent_unloadServices/normal.server: Handled event for server in area: event=member-join server=Node-de797186-b433-5ba7-1feb-f4560298957c.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:17.599Z [INFO]  TestAgent_unloadServices/normal: Started DNS server: address=127.0.0.1:16745 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.599Z [INFO]  TestAgent_unloadServices/normal: Started HTTP server: address=127.0.0.1:16746 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.599Z [INFO]  TestAgent_unloadServices/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:17.647Z [WARN]  TestAgent_unloadServices/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:17.647Z [INFO]  TestAgent_unloadServices/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16750 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:17.652Z [DEBUG] TestAgent_unloadServices/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:17.652Z [DEBUG] TestAgent_unloadServices/normal.server.raft: vote granted: from=de797186-b433-5ba7-1feb-f4560298957c term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:17.652Z [INFO]  TestAgent_unloadServices/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:17.652Z [INFO]  TestAgent_unloadServices/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16750 [Leader]"
>         writer.go:29: 2020-02-23T02:46:17.652Z [INFO]  TestAgent_unloadServices/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:17.652Z [INFO]  TestAgent_unloadServices/normal.server: New leader elected: payload=Node-de797186-b433-5ba7-1feb-f4560298957c
>         writer.go:29: 2020-02-23T02:46:17.659Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:17.666Z [INFO]  TestAgent_unloadServices/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:17.666Z [INFO]  TestAgent_unloadServices/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.666Z [DEBUG] TestAgent_unloadServices/normal.server: Skipping self join check for node since the cluster is too small: node=Node-de797186-b433-5ba7-1feb-f4560298957c
>         writer.go:29: 2020-02-23T02:46:17.666Z [INFO]  TestAgent_unloadServices/normal.server: member joined, marking health alive: member=Node-de797186-b433-5ba7-1feb-f4560298957c
>         writer.go:29: 2020-02-23T02:46:17.671Z [DEBUG] TestAgent_unloadServices/normal: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:17.673Z [INFO]  TestAgent_unloadServices/normal: Synced node info
>         writer.go:29: 2020-02-23T02:46:17.673Z [DEBUG] TestAgent_unloadServices/normal: Node info in sync
>         writer.go:29: 2020-02-23T02:46:17.764Z [DEBUG] TestAgent_unloadServices/normal: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:17.764Z [INFO]  TestAgent_unloadServices/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:17.764Z [INFO]  TestAgent_unloadServices/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:17.764Z [DEBUG] TestAgent_unloadServices/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.764Z [WARN]  TestAgent_unloadServices/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.764Z [DEBUG] TestAgent_unloadServices/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.765Z [WARN]  TestAgent_unloadServices/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: Stopping server: protocol=DNS address=127.0.0.1:16745 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: Stopping server: protocol=DNS address=127.0.0.1:16745 network=udp
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: Stopping server: protocol=HTTP address=127.0.0.1:16746 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:17.768Z [INFO]  TestAgent_unloadServices/normal: Endpoints down
>     --- PASS: TestAgent_unloadServices/service_manager (0.21s)
>         writer.go:29: 2020-02-23T02:46:17.565Z [WARN]  TestAgent_unloadServices/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:17.565Z [DEBUG] TestAgent_unloadServices/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:17.565Z [DEBUG] TestAgent_unloadServices/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:17.576Z [INFO]  TestAgent_unloadServices/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4e7bf7af-c144-20dc-5a95-033e9acfdfac Address:127.0.0.1:16756}]"
>         writer.go:29: 2020-02-23T02:46:17.576Z [INFO]  TestAgent_unloadServices/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16756 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:17.576Z [INFO]  TestAgent_unloadServices/service_manager.server.serf.wan: serf: EventMemberJoin: Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.577Z [INFO]  TestAgent_unloadServices/service_manager.server.serf.lan: serf: EventMemberJoin: Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:17.577Z [INFO]  TestAgent_unloadServices/service_manager.server: Handled event for server in area: event=member-join server=Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:17.577Z [INFO]  TestAgent_unloadServices/service_manager.server: Adding LAN server: server="Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac (Addr: tcp/127.0.0.1:16756) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:17.578Z [INFO]  TestAgent_unloadServices/service_manager: Started DNS server: address=127.0.0.1:16751 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.578Z [INFO]  TestAgent_unloadServices/service_manager: Started DNS server: address=127.0.0.1:16751 network=udp
>         writer.go:29: 2020-02-23T02:46:17.603Z [INFO]  TestAgent_unloadServices/service_manager: Started HTTP server: address=127.0.0.1:16752 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.603Z [INFO]  TestAgent_unloadServices/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:17.629Z [WARN]  TestAgent_unloadServices/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:17.630Z [INFO]  TestAgent_unloadServices/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16756 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:17.633Z [DEBUG] TestAgent_unloadServices/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:17.633Z [DEBUG] TestAgent_unloadServices/service_manager.server.raft: vote granted: from=4e7bf7af-c144-20dc-5a95-033e9acfdfac term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:17.633Z [INFO]  TestAgent_unloadServices/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:17.633Z [INFO]  TestAgent_unloadServices/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16756 [Leader]"
>         writer.go:29: 2020-02-23T02:46:17.633Z [INFO]  TestAgent_unloadServices/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:17.633Z [INFO]  TestAgent_unloadServices/service_manager.server: New leader elected: payload=Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac
>         writer.go:29: 2020-02-23T02:46:17.640Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:17.648Z [INFO]  TestAgent_unloadServices/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:17.648Z [INFO]  TestAgent_unloadServices/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.648Z [DEBUG] TestAgent_unloadServices/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac
>         writer.go:29: 2020-02-23T02:46:17.648Z [INFO]  TestAgent_unloadServices/service_manager.server: member joined, marking health alive: member=Node-4e7bf7af-c144-20dc-5a95-033e9acfdfac
>         writer.go:29: 2020-02-23T02:46:17.764Z [DEBUG] TestAgent_unloadServices/service_manager: removed service: service=redis
>         writer.go:29: 2020-02-23T02:46:17.764Z [INFO]  TestAgent_unloadServices/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:17.764Z [INFO]  TestAgent_unloadServices/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:17.764Z [DEBUG] TestAgent_unloadServices/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.765Z [WARN]  TestAgent_unloadServices/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.765Z [ERROR] TestAgent_unloadServices/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:17.765Z [DEBUG] TestAgent_unloadServices/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:17.767Z [WARN]  TestAgent_unloadServices/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16751 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16751 network=udp
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16752 network=tcp
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:17.769Z [INFO]  TestAgent_unloadServices/service_manager: Endpoints down
> === RUN   TestAgent_Service_MaintenanceMode
> === PAUSE TestAgent_Service_MaintenanceMode
> === RUN   TestAgent_Service_Reap
> --- PASS: TestAgent_Service_Reap (0.85s)
>     writer.go:29: 2020-02-23T02:46:17.777Z [WARN]  TestAgent_Service_Reap: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:17.777Z [DEBUG] TestAgent_Service_Reap.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:17.778Z [DEBUG] TestAgent_Service_Reap.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:17.791Z [INFO]  TestAgent_Service_Reap.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a2fd4dca-796a-5bad-af31-41f635dc255c Address:127.0.0.1:16762}]"
>     writer.go:29: 2020-02-23T02:46:17.791Z [INFO]  TestAgent_Service_Reap.server.serf.wan: serf: EventMemberJoin: Node-a2fd4dca-796a-5bad-af31-41f635dc255c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:17.791Z [INFO]  TestAgent_Service_Reap.server.serf.lan: serf: EventMemberJoin: Node-a2fd4dca-796a-5bad-af31-41f635dc255c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap: Started DNS server: address=127.0.0.1:16757 network=udp
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap.server.raft: entering follower state: follower="Node at 127.0.0.1:16762 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap.server: Adding LAN server: server="Node-a2fd4dca-796a-5bad-af31-41f635dc255c (Addr: tcp/127.0.0.1:16762) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap.server: Handled event for server in area: event=member-join server=Node-a2fd4dca-796a-5bad-af31-41f635dc255c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap: Started DNS server: address=127.0.0.1:16757 network=tcp
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap: Started HTTP server: address=127.0.0.1:16758 network=tcp
>     writer.go:29: 2020-02-23T02:46:17.792Z [INFO]  TestAgent_Service_Reap: started state syncer
>     writer.go:29: 2020-02-23T02:46:17.854Z [WARN]  TestAgent_Service_Reap.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:17.854Z [INFO]  TestAgent_Service_Reap.server.raft: entering candidate state: node="Node at 127.0.0.1:16762 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:17.857Z [DEBUG] TestAgent_Service_Reap.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:17.857Z [DEBUG] TestAgent_Service_Reap.server.raft: vote granted: from=a2fd4dca-796a-5bad-af31-41f635dc255c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:17.857Z [INFO]  TestAgent_Service_Reap.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:17.857Z [INFO]  TestAgent_Service_Reap.server.raft: entering leader state: leader="Node at 127.0.0.1:16762 [Leader]"
>     writer.go:29: 2020-02-23T02:46:17.857Z [INFO]  TestAgent_Service_Reap.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:17.857Z [INFO]  TestAgent_Service_Reap.server: New leader elected: payload=Node-a2fd4dca-796a-5bad-af31-41f635dc255c
>     writer.go:29: 2020-02-23T02:46:17.865Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:17.873Z [INFO]  TestAgent_Service_Reap.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:17.873Z [INFO]  TestAgent_Service_Reap.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:17.873Z [DEBUG] TestAgent_Service_Reap.server: Skipping self join check for node since the cluster is too small: node=Node-a2fd4dca-796a-5bad-af31-41f635dc255c
>     writer.go:29: 2020-02-23T02:46:17.873Z [INFO]  TestAgent_Service_Reap.server: member joined, marking health alive: member=Node-a2fd4dca-796a-5bad-af31-41f635dc255c
>     writer.go:29: 2020-02-23T02:46:17.888Z [DEBUG] TestAgent_Service_Reap: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:17.890Z [INFO]  TestAgent_Service_Reap: Synced node info
>     writer.go:29: 2020-02-23T02:46:17.977Z [WARN]  TestAgent_Service_Reap: Check missed TTL, is now critical: check=service:redis
>     writer.go:29: 2020-02-23T02:46:18.052Z [DEBUG] TestAgent_Service_Reap: Check status updated: check=service:redis status=passing
>     writer.go:29: 2020-02-23T02:46:18.077Z [WARN]  TestAgent_Service_Reap: Check missed TTL, is now critical: check=service:redis
>     writer.go:29: 2020-02-23T02:46:18.293Z [DEBUG] TestAgent_Service_Reap: removed check: check=service:redis
>     writer.go:29: 2020-02-23T02:46:18.293Z [DEBUG] TestAgent_Service_Reap: removed service: service=redis
>     writer.go:29: 2020-02-23T02:46:18.293Z [INFO]  TestAgent_Service_Reap: Check for service has been critical for too long; deregistered service: service=redis check=service:redis
>     writer.go:29: 2020-02-23T02:46:18.553Z [INFO]  TestAgent_Service_Reap: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:18.553Z [INFO]  TestAgent_Service_Reap.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:18.553Z [DEBUG] TestAgent_Service_Reap.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:18.553Z [WARN]  TestAgent_Service_Reap.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:18.553Z [DEBUG] TestAgent_Service_Reap.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:18.582Z [WARN]  TestAgent_Service_Reap.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:18.623Z [INFO]  TestAgent_Service_Reap.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:18.623Z [INFO]  TestAgent_Service_Reap: consul server down
>     writer.go:29: 2020-02-23T02:46:18.623Z [INFO]  TestAgent_Service_Reap: shutdown complete
>     writer.go:29: 2020-02-23T02:46:18.623Z [INFO]  TestAgent_Service_Reap: Stopping server: protocol=DNS address=127.0.0.1:16757 network=tcp
>     writer.go:29: 2020-02-23T02:46:18.623Z [INFO]  TestAgent_Service_Reap: Stopping server: protocol=DNS address=127.0.0.1:16757 network=udp
>     writer.go:29: 2020-02-23T02:46:18.623Z [INFO]  TestAgent_Service_Reap: Stopping server: protocol=HTTP address=127.0.0.1:16758 network=tcp
>     writer.go:29: 2020-02-23T02:46:18.624Z [INFO]  TestAgent_Service_Reap: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:18.624Z [INFO]  TestAgent_Service_Reap: Endpoints down
> === RUN   TestAgent_Service_NoReap
> --- PASS: TestAgent_Service_NoReap (0.87s)
>     writer.go:29: 2020-02-23T02:46:18.633Z [WARN]  TestAgent_Service_NoReap: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:18.633Z [DEBUG] TestAgent_Service_NoReap.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:18.634Z [DEBUG] TestAgent_Service_NoReap.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:18.692Z [INFO]  TestAgent_Service_NoReap.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e1d5f324-671f-2348-4199-4889dafe2b1d Address:127.0.0.1:16768}]"
>     writer.go:29: 2020-02-23T02:46:18.693Z [INFO]  TestAgent_Service_NoReap.server.serf.wan: serf: EventMemberJoin: Node-e1d5f324-671f-2348-4199-4889dafe2b1d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:18.693Z [INFO]  TestAgent_Service_NoReap.server.serf.lan: serf: EventMemberJoin: Node-e1d5f324-671f-2348-4199-4889dafe2b1d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:18.693Z [INFO]  TestAgent_Service_NoReap: Started DNS server: address=127.0.0.1:16763 network=udp
>     writer.go:29: 2020-02-23T02:46:18.693Z [INFO]  TestAgent_Service_NoReap.server.raft: entering follower state: follower="Node at 127.0.0.1:16768 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:18.693Z [INFO]  TestAgent_Service_NoReap.server: Adding LAN server: server="Node-e1d5f324-671f-2348-4199-4889dafe2b1d (Addr: tcp/127.0.0.1:16768) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:18.693Z [INFO]  TestAgent_Service_NoReap.server: Handled event for server in area: event=member-join server=Node-e1d5f324-671f-2348-4199-4889dafe2b1d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:18.694Z [INFO]  TestAgent_Service_NoReap: Started DNS server: address=127.0.0.1:16763 network=tcp
>     writer.go:29: 2020-02-23T02:46:18.694Z [INFO]  TestAgent_Service_NoReap: Started HTTP server: address=127.0.0.1:16764 network=tcp
>     writer.go:29: 2020-02-23T02:46:18.694Z [INFO]  TestAgent_Service_NoReap: started state syncer
>     writer.go:29: 2020-02-23T02:46:18.738Z [WARN]  TestAgent_Service_NoReap.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:18.738Z [INFO]  TestAgent_Service_NoReap.server.raft: entering candidate state: node="Node at 127.0.0.1:16768 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:18.741Z [DEBUG] TestAgent_Service_NoReap.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:18.741Z [DEBUG] TestAgent_Service_NoReap.server.raft: vote granted: from=e1d5f324-671f-2348-4199-4889dafe2b1d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:18.741Z [INFO]  TestAgent_Service_NoReap.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:18.741Z [INFO]  TestAgent_Service_NoReap.server.raft: entering leader state: leader="Node at 127.0.0.1:16768 [Leader]"
>     writer.go:29: 2020-02-23T02:46:18.741Z [INFO]  TestAgent_Service_NoReap.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:18.741Z [INFO]  TestAgent_Service_NoReap.server: New leader elected: payload=Node-e1d5f324-671f-2348-4199-4889dafe2b1d
>     writer.go:29: 2020-02-23T02:46:18.749Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:18.757Z [INFO]  TestAgent_Service_NoReap.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:18.757Z [INFO]  TestAgent_Service_NoReap.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:18.757Z [DEBUG] TestAgent_Service_NoReap.server: Skipping self join check for node since the cluster is too small: node=Node-e1d5f324-671f-2348-4199-4889dafe2b1d
>     writer.go:29: 2020-02-23T02:46:18.757Z [INFO]  TestAgent_Service_NoReap.server: member joined, marking health alive: member=Node-e1d5f324-671f-2348-4199-4889dafe2b1d
>     writer.go:29: 2020-02-23T02:46:19.079Z [DEBUG] TestAgent_Service_NoReap: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:19.093Z [WARN]  TestAgent_Service_NoReap: Check missed TTL, is now critical: check=service:redis
>     writer.go:29: 2020-02-23T02:46:19.205Z [INFO]  TestAgent_Service_NoReap: Synced node info
>     writer.go:29: 2020-02-23T02:46:19.211Z [INFO]  TestAgent_Service_NoReap: Synced service: service=redis
>     writer.go:29: 2020-02-23T02:46:19.211Z [DEBUG] TestAgent_Service_NoReap: Check in sync: check=service:redis
>     writer.go:29: 2020-02-23T02:46:19.468Z [INFO]  TestAgent_Service_NoReap: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:19.468Z [INFO]  TestAgent_Service_NoReap.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:19.468Z [DEBUG] TestAgent_Service_NoReap.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:19.468Z [WARN]  TestAgent_Service_NoReap.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:19.469Z [DEBUG] TestAgent_Service_NoReap.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:19.480Z [WARN]  TestAgent_Service_NoReap.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: consul server down
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: shutdown complete
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: Stopping server: protocol=DNS address=127.0.0.1:16763 network=tcp
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: Stopping server: protocol=DNS address=127.0.0.1:16763 network=udp
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: Stopping server: protocol=HTTP address=127.0.0.1:16764 network=tcp
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:19.496Z [INFO]  TestAgent_Service_NoReap: Endpoints down
> === RUN   TestAgent_AddService_restoresSnapshot
> === RUN   TestAgent_AddService_restoresSnapshot/normal
> === PAUSE TestAgent_AddService_restoresSnapshot/normal
> === RUN   TestAgent_AddService_restoresSnapshot/service_manager
> === PAUSE TestAgent_AddService_restoresSnapshot/service_manager
> === CONT  TestAgent_AddService_restoresSnapshot/normal
> === CONT  TestAgent_AddService_restoresSnapshot/service_manager
> --- PASS: TestAgent_AddService_restoresSnapshot (0.00s)
>     --- PASS: TestAgent_AddService_restoresSnapshot/normal (0.48s)
>         writer.go:29: 2020-02-23T02:46:19.513Z [WARN]  TestAgent_AddService_restoresSnapshot/normal: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:19.513Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:19.514Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:19.558Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e2ba07ed-9322-57e1-ec3a-24b5beecc6d6 Address:127.0.0.1:16774}]"
>         writer.go:29: 2020-02-23T02:46:19.558Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.serf.wan: serf: EventMemberJoin: Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:19.559Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.serf.lan: serf: EventMemberJoin: Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:19.559Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Started DNS server: address=127.0.0.1:16769 network=udp
>         writer.go:29: 2020-02-23T02:46:19.559Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.raft: entering follower state: follower="Node at 127.0.0.1:16774 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:19.559Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server: Adding LAN server: server="Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6 (Addr: tcp/127.0.0.1:16774) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:19.559Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server: Handled event for server in area: event=member-join server=Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:19.559Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Started DNS server: address=127.0.0.1:16769 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.560Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Started HTTP server: address=127.0.0.1:16770 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.560Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: started state syncer
>         writer.go:29: 2020-02-23T02:46:19.610Z [WARN]  TestAgent_AddService_restoresSnapshot/normal.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:19.611Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.raft: entering candidate state: node="Node at 127.0.0.1:16774 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:19.614Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:19.614Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.server.raft: vote granted: from=e2ba07ed-9322-57e1-ec3a-24b5beecc6d6 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:19.614Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:19.614Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.raft: entering leader state: leader="Node at 127.0.0.1:16774 [Leader]"
>         writer.go:29: 2020-02-23T02:46:19.614Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:19.614Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server: New leader elected: payload=Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6
>         writer.go:29: 2020-02-23T02:46:19.621Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:19.629Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:19.629Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:19.629Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.server: Skipping self join check for node since the cluster is too small: node=Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6
>         writer.go:29: 2020-02-23T02:46:19.629Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server: member joined, marking health alive: member=Node-e2ba07ed-9322-57e1-ec3a-24b5beecc6d6
>         writer.go:29: 2020-02-23T02:46:19.828Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:19.828Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:19.828Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:19.828Z [WARN]  TestAgent_AddService_restoresSnapshot/normal.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:19.828Z [DEBUG] TestAgent_AddService_restoresSnapshot/normal.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:19.828Z [ERROR] TestAgent_AddService_restoresSnapshot/normal.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:19.935Z [WARN]  TestAgent_AddService_restoresSnapshot/normal.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:19.974Z [INFO]  TestAgent_AddService_restoresSnapshot/normal.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:19.974Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: consul server down
>         writer.go:29: 2020-02-23T02:46:19.974Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: shutdown complete
>         writer.go:29: 2020-02-23T02:46:19.974Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Stopping server: protocol=DNS address=127.0.0.1:16769 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Stopping server: protocol=DNS address=127.0.0.1:16769 network=udp
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Stopping server: protocol=HTTP address=127.0.0.1:16770 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/normal: Endpoints down
>     --- PASS: TestAgent_AddService_restoresSnapshot/service_manager (0.47s)
>         writer.go:29: 2020-02-23T02:46:19.513Z [WARN]  TestAgent_AddService_restoresSnapshot/service_manager: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:19.513Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:19.514Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:19.560Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:065dbff8-46c7-662c-6bd7-b5bbe3cb660d Address:127.0.0.1:16780}]"
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.serf.wan: serf: EventMemberJoin: Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.serf.lan: serf: EventMemberJoin: Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Started DNS server: address=127.0.0.1:16775 network=udp
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.raft: entering follower state: follower="Node at 127.0.0.1:16780 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server: Adding LAN server: server="Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d (Addr: tcp/127.0.0.1:16780) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server: Handled event for server in area: event=member-join server=Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:19.561Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Started DNS server: address=127.0.0.1:16775 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.562Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Started HTTP server: address=127.0.0.1:16776 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.562Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: started state syncer
>         writer.go:29: 2020-02-23T02:46:19.627Z [WARN]  TestAgent_AddService_restoresSnapshot/service_manager.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:19.627Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.raft: entering candidate state: node="Node at 127.0.0.1:16780 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:19.632Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:19.632Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.server.raft: vote granted: from=065dbff8-46c7-662c-6bd7-b5bbe3cb660d term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:19.632Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:19.632Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.raft: entering leader state: leader="Node at 127.0.0.1:16780 [Leader]"
>         writer.go:29: 2020-02-23T02:46:19.632Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:19.632Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server: New leader elected: payload=Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d
>         writer.go:29: 2020-02-23T02:46:19.639Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:19.766Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:19.766Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:19.766Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.server: Skipping self join check for node since the cluster is too small: node=Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d
>         writer.go:29: 2020-02-23T02:46:19.766Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server: member joined, marking health alive: member=Node-065dbff8-46c7-662c-6bd7-b5bbe3cb660d
>         writer.go:29: 2020-02-23T02:46:19.870Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:19.870Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:19.870Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:19.870Z [WARN]  TestAgent_AddService_restoresSnapshot/service_manager.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:19.870Z [ERROR] TestAgent_AddService_restoresSnapshot/service_manager.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:19.870Z [DEBUG] TestAgent_AddService_restoresSnapshot/service_manager.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:19.973Z [WARN]  TestAgent_AddService_restoresSnapshot/service_manager.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: consul server down
>         writer.go:29: 2020-02-23T02:46:19.975Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: shutdown complete
>         writer.go:29: 2020-02-23T02:46:19.976Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16775 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.976Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Stopping server: protocol=DNS address=127.0.0.1:16775 network=udp
>         writer.go:29: 2020-02-23T02:46:19.976Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Stopping server: protocol=HTTP address=127.0.0.1:16776 network=tcp
>         writer.go:29: 2020-02-23T02:46:19.976Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:19.976Z [INFO]  TestAgent_AddService_restoresSnapshot/service_manager: Endpoints down
> === RUN   TestAgent_AddCheck_restoresSnapshot
> === PAUSE TestAgent_AddCheck_restoresSnapshot
> === RUN   TestAgent_NodeMaintenanceMode
> === PAUSE TestAgent_NodeMaintenanceMode
> === RUN   TestAgent_checkStateSnapshot
> === PAUSE TestAgent_checkStateSnapshot
> === RUN   TestAgent_loadChecks_checkFails
> === PAUSE TestAgent_loadChecks_checkFails
> === RUN   TestAgent_persistCheckState
> === PAUSE TestAgent_persistCheckState
> === RUN   TestAgent_loadCheckState
> === PAUSE TestAgent_loadCheckState
> === RUN   TestAgent_purgeCheckState
> === PAUSE TestAgent_purgeCheckState
> === RUN   TestAgent_GetCoordinate
> === PAUSE TestAgent_GetCoordinate
> === RUN   TestAgent_reloadWatches
> === PAUSE TestAgent_reloadWatches
> === RUN   TestAgent_reloadWatchesHTTPS
> === PAUSE TestAgent_reloadWatchesHTTPS
> === RUN   TestAgent_loadTokens
> === PAUSE TestAgent_loadTokens
> === RUN   TestAgent_ReloadConfigOutgoingRPCConfig
> === PAUSE TestAgent_ReloadConfigOutgoingRPCConfig
> === RUN   TestAgent_ReloadConfigIncomingRPCConfig
> === PAUSE TestAgent_ReloadConfigIncomingRPCConfig
> === RUN   TestAgent_ReloadConfigTLSConfigFailure
> === PAUSE TestAgent_ReloadConfigTLSConfigFailure
> === RUN   TestAgent_consulConfig_AutoEncryptAllowTLS
> === PAUSE TestAgent_consulConfig_AutoEncryptAllowTLS
> === RUN   TestAgent_consulConfig_RaftTrailingLogs
> === PAUSE TestAgent_consulConfig_RaftTrailingLogs
> === RUN   TestAgent_grpcInjectAddr
> === RUN   TestAgent_grpcInjectAddr/localhost_web_svc
> === RUN   TestAgent_grpcInjectAddr/localhost_no_svc
> === RUN   TestAgent_grpcInjectAddr/ipv4_web_svc
> === RUN   TestAgent_grpcInjectAddr/ipv4_no_svc
> === RUN   TestAgent_grpcInjectAddr/ipv6_no_svc
> === RUN   TestAgent_grpcInjectAddr/ipv6_web_svc
> === RUN   TestAgent_grpcInjectAddr/zone_ipv6_web_svc
> === RUN   TestAgent_grpcInjectAddr/ipv6_literal_web_svc
> === RUN   TestAgent_grpcInjectAddr/ipv6_injected_into_ipv6_url
> === RUN   TestAgent_grpcInjectAddr/ipv6_injected_into_ipv6_url_with_svc
> === RUN   TestAgent_grpcInjectAddr/ipv6_injected_into_ipv6_url_with_special
> --- PASS: TestAgent_grpcInjectAddr (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/localhost_web_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/localhost_no_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv4_web_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv4_no_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv6_no_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv6_web_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/zone_ipv6_web_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv6_literal_web_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv6_injected_into_ipv6_url (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv6_injected_into_ipv6_url_with_svc (0.00s)
>     --- PASS: TestAgent_grpcInjectAddr/ipv6_injected_into_ipv6_url_with_special (0.00s)
> === RUN   TestAgent_httpInjectAddr
> === RUN   TestAgent_httpInjectAddr/localhost_health
> === RUN   TestAgent_httpInjectAddr/https_localhost_health
> === RUN   TestAgent_httpInjectAddr/https_ipv4_health
> === RUN   TestAgent_httpInjectAddr/https_ipv4_without_path
> === RUN   TestAgent_httpInjectAddr/https_ipv6_health
> === RUN   TestAgent_httpInjectAddr/https_ipv6_with_zone
> === RUN   TestAgent_httpInjectAddr/https_ipv6_literal
> === RUN   TestAgent_httpInjectAddr/https_ipv6_without_path
> === RUN   TestAgent_httpInjectAddr/ipv6_injected_into_ipv6_url
> === RUN   TestAgent_httpInjectAddr/ipv6_with_brackets_injected_into_ipv6_url
> === RUN   TestAgent_httpInjectAddr/short_domain_health
> === RUN   TestAgent_httpInjectAddr/nested_url_in_query
> --- PASS: TestAgent_httpInjectAddr (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/localhost_health (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_localhost_health (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_ipv4_health (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_ipv4_without_path (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_ipv6_health (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_ipv6_with_zone (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_ipv6_literal (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/https_ipv6_without_path (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/ipv6_injected_into_ipv6_url (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/ipv6_with_brackets_injected_into_ipv6_url (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/short_domain_health (0.00s)
>     --- PASS: TestAgent_httpInjectAddr/nested_url_in_query (0.00s)
> === RUN   TestDefaultIfEmpty
> --- PASS: TestDefaultIfEmpty (0.00s)
> === RUN   TestConfigSourceFromName
> === RUN   TestConfigSourceFromName/local
> === RUN   TestConfigSourceFromName/remote
> === RUN   TestConfigSourceFromName/#00
> === RUN   TestConfigSourceFromName/LOCAL
> === RUN   TestConfigSourceFromName/REMOTE
> === RUN   TestConfigSourceFromName/garbage
> === RUN   TestConfigSourceFromName/_
> --- PASS: TestConfigSourceFromName (0.00s)
>     --- PASS: TestConfigSourceFromName/local (0.00s)
>     --- PASS: TestConfigSourceFromName/remote (0.00s)
>     --- PASS: TestConfigSourceFromName/#00 (0.00s)
>     --- PASS: TestConfigSourceFromName/LOCAL (0.00s)
>     --- PASS: TestConfigSourceFromName/REMOTE (0.00s)
>     --- PASS: TestConfigSourceFromName/garbage (0.00s)
>     --- PASS: TestConfigSourceFromName/_ (0.00s)
> === RUN   TestAgent_RerouteExistingHTTPChecks
> === PAUSE TestAgent_RerouteExistingHTTPChecks
> === RUN   TestAgent_RerouteNewHTTPChecks
> === PAUSE TestAgent_RerouteNewHTTPChecks
> === RUN   TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521
> === PAUSE TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521
> === RUN   TestBlacklist
> === PAUSE TestBlacklist
> === RUN   TestCatalogRegister_Service_InvalidAddress
> === PAUSE TestCatalogRegister_Service_InvalidAddress
> === RUN   TestCatalogDeregister
> === PAUSE TestCatalogDeregister
> === RUN   TestCatalogDatacenters
> === PAUSE TestCatalogDatacenters
> === RUN   TestCatalogNodes
> === PAUSE TestCatalogNodes
> === RUN   TestCatalogNodes_MetaFilter
> === PAUSE TestCatalogNodes_MetaFilter
> === RUN   TestCatalogNodes_Filter
> === PAUSE TestCatalogNodes_Filter
> === RUN   TestCatalogNodes_WanTranslation
> --- SKIP: TestCatalogNodes_WanTranslation (0.00s)
>     catalog_endpoint_test.go:194: DM-skipped
> === RUN   TestCatalogNodes_Blocking
> === PAUSE TestCatalogNodes_Blocking
> === RUN   TestCatalogNodes_DistanceSort
> === PAUSE TestCatalogNodes_DistanceSort
> === RUN   TestCatalogServices
> === PAUSE TestCatalogServices
> === RUN   TestCatalogServices_NodeMetaFilter
> === PAUSE TestCatalogServices_NodeMetaFilter
> === RUN   TestCatalogRegister_checkRegistration
> === PAUSE TestCatalogRegister_checkRegistration
> === RUN   TestCatalogServiceNodes
> === PAUSE TestCatalogServiceNodes
> === RUN   TestCatalogServiceNodes_NodeMetaFilter
> === PAUSE TestCatalogServiceNodes_NodeMetaFilter
> === RUN   TestCatalogServiceNodes_Filter
> === PAUSE TestCatalogServiceNodes_Filter
> === RUN   TestCatalogServiceNodes_WanTranslation
> --- SKIP: TestCatalogServiceNodes_WanTranslation (0.00s)
>     catalog_endpoint_test.go:813: DM-skipped
> === RUN   TestCatalogServiceNodes_DistanceSort
> === PAUSE TestCatalogServiceNodes_DistanceSort
> === RUN   TestCatalogServiceNodes_ConnectProxy
> === PAUSE TestCatalogServiceNodes_ConnectProxy
> === RUN   TestCatalogConnectServiceNodes_good
> === PAUSE TestCatalogConnectServiceNodes_good
> === RUN   TestCatalogConnectServiceNodes_Filter
> --- SKIP: TestCatalogConnectServiceNodes_Filter (0.00s)
>     catalog_endpoint_test.go:1043: DM-skipped
> === RUN   TestCatalogNodeServices
> === PAUSE TestCatalogNodeServices
> === RUN   TestCatalogNodeServiceList
> === PAUSE TestCatalogNodeServiceList
> === RUN   TestCatalogNodeServices_Filter
> === PAUSE TestCatalogNodeServices_Filter
> === RUN   TestCatalogNodeServices_ConnectProxy
> === PAUSE TestCatalogNodeServices_ConnectProxy
> === RUN   TestCatalogNodeServices_WanTranslation
> --- SKIP: TestCatalogNodeServices_WanTranslation (0.00s)
>     catalog_endpoint_test.go:1240: DM-skipped
> === RUN   TestConfig_Get
> === PAUSE TestConfig_Get
> === RUN   TestConfig_Delete
> === PAUSE TestConfig_Delete
> === RUN   TestConfig_Apply
> === PAUSE TestConfig_Apply
> === RUN   TestConfig_Apply_ProxyDefaultsMeshGateway
> === PAUSE TestConfig_Apply_ProxyDefaultsMeshGateway
> === RUN   TestConfig_Apply_CAS
> === PAUSE TestConfig_Apply_CAS
> === RUN   TestConfig_Apply_Decoding
> === PAUSE TestConfig_Apply_Decoding
> === RUN   TestConfig_Apply_ProxyDefaultsExpose
> === PAUSE TestConfig_Apply_ProxyDefaultsExpose
> === RUN   TestConnectCARoots_empty
> === PAUSE TestConnectCARoots_empty
> === RUN   TestConnectCARoots_list
> === PAUSE TestConnectCARoots_list
> === RUN   TestConnectCAConfig
> === PAUSE TestConnectCAConfig
> === RUN   TestCoordinate_Disabled_Response
> === PAUSE TestCoordinate_Disabled_Response
> === RUN   TestCoordinate_Datacenters
> --- SKIP: TestCoordinate_Datacenters (0.00s)
>     coordinate_endpoint_test.go:54: DM-skipped
> === RUN   TestCoordinate_Nodes
> --- SKIP: TestCoordinate_Nodes (0.00s)
>     coordinate_endpoint_test.go:81: DM-skipped
> === RUN   TestCoordinate_Node
> === PAUSE TestCoordinate_Node
> === RUN   TestCoordinate_Update
> === PAUSE TestCoordinate_Update
> === RUN   TestCoordinate_Update_ACLDeny
> === PAUSE TestCoordinate_Update_ACLDeny
> === RUN   TestDiscoveryChainRead
> === PAUSE TestDiscoveryChainRead
> === RUN   TestRecursorAddr
> === PAUSE TestRecursorAddr
> === RUN   TestEncodeKVasRFC1464
> --- PASS: TestEncodeKVasRFC1464 (0.00s)
> === RUN   TestDNS_Over_TCP
> === PAUSE TestDNS_Over_TCP
> === RUN   TestDNS_NodeLookup
> --- SKIP: TestDNS_NodeLookup (0.00s)
>     dns_test.go:177: DM-skipped
> === RUN   TestDNS_CaseInsensitiveNodeLookup
> === PAUSE TestDNS_CaseInsensitiveNodeLookup
> === RUN   TestDNS_NodeLookup_PeriodName
> === PAUSE TestDNS_NodeLookup_PeriodName
> === RUN   TestDNS_NodeLookup_AAAA
> === PAUSE TestDNS_NodeLookup_AAAA
> === RUN   TestDNSCycleRecursorCheck
> === PAUSE TestDNSCycleRecursorCheck
> === RUN   TestDNSCycleRecursorCheckAllFail
> --- SKIP: TestDNSCycleRecursorCheckAllFail (0.00s)
>     dns_test.go:422: DM-skipped
> === RUN   TestDNS_NodeLookup_CNAME
> === PAUSE TestDNS_NodeLookup_CNAME
> === RUN   TestDNS_NodeLookup_TXT
> --- PASS: TestDNS_NodeLookup_TXT (0.28s)
>     writer.go:29: 2020-02-23T02:46:19.988Z [WARN]  TestDNS_NodeLookup_TXT: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:19.988Z [DEBUG] TestDNS_NodeLookup_TXT.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:19.989Z [DEBUG] TestDNS_NodeLookup_TXT.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:19.998Z [INFO]  TestDNS_NodeLookup_TXT.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cb790d06-3d36-730d-7c94-dbb73e4605e5 Address:127.0.0.1:16786}]"
>     writer.go:29: 2020-02-23T02:46:19.999Z [INFO]  TestDNS_NodeLookup_TXT.server.raft: entering follower state: follower="Node at 127.0.0.1:16786 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:20.000Z [INFO]  TestDNS_NodeLookup_TXT.server.serf.wan: serf: EventMemberJoin: Node-cb790d06-3d36-730d-7c94-dbb73e4605e5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.000Z [INFO]  TestDNS_NodeLookup_TXT.server.serf.lan: serf: EventMemberJoin: Node-cb790d06-3d36-730d-7c94-dbb73e4605e5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.000Z [INFO]  TestDNS_NodeLookup_TXT: Started DNS server: address=127.0.0.1:16781 network=udp
>     writer.go:29: 2020-02-23T02:46:20.001Z [INFO]  TestDNS_NodeLookup_TXT.server: Adding LAN server: server="Node-cb790d06-3d36-730d-7c94-dbb73e4605e5 (Addr: tcp/127.0.0.1:16786) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:20.001Z [INFO]  TestDNS_NodeLookup_TXT.server: Handled event for server in area: event=member-join server=Node-cb790d06-3d36-730d-7c94-dbb73e4605e5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:20.001Z [INFO]  TestDNS_NodeLookup_TXT: Started DNS server: address=127.0.0.1:16781 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.001Z [INFO]  TestDNS_NodeLookup_TXT: Started HTTP server: address=127.0.0.1:16782 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.001Z [INFO]  TestDNS_NodeLookup_TXT: started state syncer
>     writer.go:29: 2020-02-23T02:46:20.035Z [WARN]  TestDNS_NodeLookup_TXT.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:20.035Z [INFO]  TestDNS_NodeLookup_TXT.server.raft: entering candidate state: node="Node at 127.0.0.1:16786 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:20.038Z [DEBUG] TestDNS_NodeLookup_TXT.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:20.038Z [DEBUG] TestDNS_NodeLookup_TXT.server.raft: vote granted: from=cb790d06-3d36-730d-7c94-dbb73e4605e5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:20.038Z [INFO]  TestDNS_NodeLookup_TXT.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:20.038Z [INFO]  TestDNS_NodeLookup_TXT.server.raft: entering leader state: leader="Node at 127.0.0.1:16786 [Leader]"
>     writer.go:29: 2020-02-23T02:46:20.038Z [INFO]  TestDNS_NodeLookup_TXT.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:20.038Z [INFO]  TestDNS_NodeLookup_TXT.server: New leader elected: payload=Node-cb790d06-3d36-730d-7c94-dbb73e4605e5
>     writer.go:29: 2020-02-23T02:46:20.046Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:20.053Z [INFO]  TestDNS_NodeLookup_TXT.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:20.053Z [INFO]  TestDNS_NodeLookup_TXT.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.053Z [DEBUG] TestDNS_NodeLookup_TXT.server: Skipping self join check for node since the cluster is too small: node=Node-cb790d06-3d36-730d-7c94-dbb73e4605e5
>     writer.go:29: 2020-02-23T02:46:20.053Z [INFO]  TestDNS_NodeLookup_TXT.server: member joined, marking health alive: member=Node-cb790d06-3d36-730d-7c94-dbb73e4605e5
>     writer.go:29: 2020-02-23T02:46:20.198Z [DEBUG] TestDNS_NodeLookup_TXT: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:20.202Z [INFO]  TestDNS_NodeLookup_TXT: Synced node info
>     writer.go:29: 2020-02-23T02:46:20.251Z [DEBUG] TestDNS_NodeLookup_TXT.dns: request served from client: name=google.node.consul. type=TXT class=IN latency=124.081µs client=127.0.0.1:41539 client_network=udp
>     writer.go:29: 2020-02-23T02:46:20.251Z [INFO]  TestDNS_NodeLookup_TXT: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:20.251Z [INFO]  TestDNS_NodeLookup_TXT.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:20.251Z [DEBUG] TestDNS_NodeLookup_TXT.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.251Z [WARN]  TestDNS_NodeLookup_TXT.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:20.251Z [DEBUG] TestDNS_NodeLookup_TXT.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.253Z [WARN]  TestDNS_NodeLookup_TXT.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: consul server down
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: shutdown complete
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: Stopping server: protocol=DNS address=127.0.0.1:16781 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: Stopping server: protocol=DNS address=127.0.0.1:16781 network=udp
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: Stopping server: protocol=HTTP address=127.0.0.1:16782 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:20.255Z [INFO]  TestDNS_NodeLookup_TXT: Endpoints down
> === RUN   TestDNS_NodeLookup_TXT_DontSuppress
> --- PASS: TestDNS_NodeLookup_TXT_DontSuppress (0.14s)
>     writer.go:29: 2020-02-23T02:46:20.262Z [WARN]  TestDNS_NodeLookup_TXT_DontSuppress: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:20.262Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:20.263Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:20.277Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d3eff098-0850-c97a-671f-5c8eb8d5c083 Address:127.0.0.1:16792}]"
>     writer.go:29: 2020-02-23T02:46:20.277Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.serf.wan: serf: EventMemberJoin: Node-d3eff098-0850-c97a-671f-5c8eb8d5c083.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.278Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.serf.lan: serf: EventMemberJoin: Node-d3eff098-0850-c97a-671f-5c8eb8d5c083 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.278Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Started DNS server: address=127.0.0.1:16787 network=udp
>     writer.go:29: 2020-02-23T02:46:20.278Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.raft: entering follower state: follower="Node at 127.0.0.1:16792 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:20.278Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server: Adding LAN server: server="Node-d3eff098-0850-c97a-671f-5c8eb8d5c083 (Addr: tcp/127.0.0.1:16792) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:20.278Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server: Handled event for server in area: event=member-join server=Node-d3eff098-0850-c97a-671f-5c8eb8d5c083.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:20.278Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Started DNS server: address=127.0.0.1:16787 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.279Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Started HTTP server: address=127.0.0.1:16788 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.279Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: started state syncer
>     writer.go:29: 2020-02-23T02:46:20.334Z [WARN]  TestDNS_NodeLookup_TXT_DontSuppress.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:20.334Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.raft: entering candidate state: node="Node at 127.0.0.1:16792 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:20.338Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:20.338Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.server.raft: vote granted: from=d3eff098-0850-c97a-671f-5c8eb8d5c083 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:20.338Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:20.338Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.raft: entering leader state: leader="Node at 127.0.0.1:16792 [Leader]"
>     writer.go:29: 2020-02-23T02:46:20.338Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:20.338Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server: New leader elected: payload=Node-d3eff098-0850-c97a-671f-5c8eb8d5c083
>     writer.go:29: 2020-02-23T02:46:20.355Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:20.367Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:20.367Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.367Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.server: Skipping self join check for node since the cluster is too small: node=Node-d3eff098-0850-c97a-671f-5c8eb8d5c083
>     writer.go:29: 2020-02-23T02:46:20.367Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server: member joined, marking health alive: member=Node-d3eff098-0850-c97a-671f-5c8eb8d5c083
>     writer.go:29: 2020-02-23T02:46:20.385Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.dns: request served from client: name=google.node.consul. type=TXT class=IN latency=99.535µs client=127.0.0.1:44201 client_network=udp
>     writer.go:29: 2020-02-23T02:46:20.385Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:20.385Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:20.385Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.385Z [WARN]  TestDNS_NodeLookup_TXT_DontSuppress.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:20.385Z [ERROR] TestDNS_NodeLookup_TXT_DontSuppress.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:20.385Z [DEBUG] TestDNS_NodeLookup_TXT_DontSuppress.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.390Z [WARN]  TestDNS_NodeLookup_TXT_DontSuppress.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: consul server down
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: shutdown complete
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Stopping server: protocol=DNS address=127.0.0.1:16787 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Stopping server: protocol=DNS address=127.0.0.1:16787 network=udp
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Stopping server: protocol=HTTP address=127.0.0.1:16788 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:20.392Z [INFO]  TestDNS_NodeLookup_TXT_DontSuppress: Endpoints down
> === RUN   TestDNS_NodeLookup_ANY
> --- PASS: TestDNS_NodeLookup_ANY (0.47s)
>     writer.go:29: 2020-02-23T02:46:20.399Z [WARN]  TestDNS_NodeLookup_ANY: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:20.399Z [DEBUG] TestDNS_NodeLookup_ANY.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:20.399Z [DEBUG] TestDNS_NodeLookup_ANY.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:20.443Z [INFO]  TestDNS_NodeLookup_ANY.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:89741ee3-ed44-b465-01b2-7ffba2798213 Address:127.0.0.1:16798}]"
>     writer.go:29: 2020-02-23T02:46:20.443Z [INFO]  TestDNS_NodeLookup_ANY.server.raft: entering follower state: follower="Node at 127.0.0.1:16798 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:20.444Z [INFO]  TestDNS_NodeLookup_ANY.server.serf.wan: serf: EventMemberJoin: Node-89741ee3-ed44-b465-01b2-7ffba2798213.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.445Z [INFO]  TestDNS_NodeLookup_ANY.server.serf.lan: serf: EventMemberJoin: Node-89741ee3-ed44-b465-01b2-7ffba2798213 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.445Z [INFO]  TestDNS_NodeLookup_ANY.server: Adding LAN server: server="Node-89741ee3-ed44-b465-01b2-7ffba2798213 (Addr: tcp/127.0.0.1:16798) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:20.445Z [INFO]  TestDNS_NodeLookup_ANY.server: Handled event for server in area: event=member-join server=Node-89741ee3-ed44-b465-01b2-7ffba2798213.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:20.445Z [INFO]  TestDNS_NodeLookup_ANY: Started DNS server: address=127.0.0.1:16793 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.445Z [INFO]  TestDNS_NodeLookup_ANY: Started DNS server: address=127.0.0.1:16793 network=udp
>     writer.go:29: 2020-02-23T02:46:20.446Z [INFO]  TestDNS_NodeLookup_ANY: Started HTTP server: address=127.0.0.1:16794 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.446Z [INFO]  TestDNS_NodeLookup_ANY: started state syncer
>     writer.go:29: 2020-02-23T02:46:20.500Z [WARN]  TestDNS_NodeLookup_ANY.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:20.500Z [INFO]  TestDNS_NodeLookup_ANY.server.raft: entering candidate state: node="Node at 127.0.0.1:16798 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:20.600Z [DEBUG] TestDNS_NodeLookup_ANY.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:20.600Z [DEBUG] TestDNS_NodeLookup_ANY.server.raft: vote granted: from=89741ee3-ed44-b465-01b2-7ffba2798213 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:20.600Z [INFO]  TestDNS_NodeLookup_ANY.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:20.600Z [INFO]  TestDNS_NodeLookup_ANY.server.raft: entering leader state: leader="Node at 127.0.0.1:16798 [Leader]"
>     writer.go:29: 2020-02-23T02:46:20.600Z [INFO]  TestDNS_NodeLookup_ANY.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:20.600Z [INFO]  TestDNS_NodeLookup_ANY.server: New leader elected: payload=Node-89741ee3-ed44-b465-01b2-7ffba2798213
>     writer.go:29: 2020-02-23T02:46:20.611Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:20.619Z [INFO]  TestDNS_NodeLookup_ANY.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:20.619Z [INFO]  TestDNS_NodeLookup_ANY.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.619Z [DEBUG] TestDNS_NodeLookup_ANY.server: Skipping self join check for node since the cluster is too small: node=Node-89741ee3-ed44-b465-01b2-7ffba2798213
>     writer.go:29: 2020-02-23T02:46:20.619Z [INFO]  TestDNS_NodeLookup_ANY.server: member joined, marking health alive: member=Node-89741ee3-ed44-b465-01b2-7ffba2798213
>     writer.go:29: 2020-02-23T02:46:20.663Z [DEBUG] TestDNS_NodeLookup_ANY: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:20.666Z [INFO]  TestDNS_NodeLookup_ANY: Synced node info
>     writer.go:29: 2020-02-23T02:46:20.666Z [DEBUG] TestDNS_NodeLookup_ANY: Node info in sync
>     writer.go:29: 2020-02-23T02:46:20.855Z [DEBUG] TestDNS_NodeLookup_ANY.dns: request served from client: name=bar.node.consul. type=ANY class=IN latency=110.201µs client=127.0.0.1:41276 client_network=udp
>     writer.go:29: 2020-02-23T02:46:20.855Z [INFO]  TestDNS_NodeLookup_ANY: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:20.855Z [INFO]  TestDNS_NodeLookup_ANY.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:20.855Z [DEBUG] TestDNS_NodeLookup_ANY.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.855Z [WARN]  TestDNS_NodeLookup_ANY.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:20.855Z [DEBUG] TestDNS_NodeLookup_ANY.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.857Z [WARN]  TestDNS_NodeLookup_ANY.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: consul server down
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: shutdown complete
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: Stopping server: protocol=DNS address=127.0.0.1:16793 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: Stopping server: protocol=DNS address=127.0.0.1:16793 network=udp
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: Stopping server: protocol=HTTP address=127.0.0.1:16794 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:20.859Z [INFO]  TestDNS_NodeLookup_ANY: Endpoints down
> === RUN   TestDNS_NodeLookup_ANY_DontSuppressTXT
> --- PASS: TestDNS_NodeLookup_ANY_DontSuppressTXT (0.22s)
>     writer.go:29: 2020-02-23T02:46:20.867Z [WARN]  TestDNS_NodeLookup_ANY_DontSuppressTXT: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:20.867Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:20.867Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:20.880Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:23240f04-49e0-9e3f-3945-64781f08cca5 Address:127.0.0.1:16804}]"
>     writer.go:29: 2020-02-23T02:46:20.881Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: entering follower state: follower="Node at 127.0.0.1:16804 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:20.881Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.serf.wan: serf: EventMemberJoin: Node-23240f04-49e0-9e3f-3945-64781f08cca5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.882Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.serf.lan: serf: EventMemberJoin: Node-23240f04-49e0-9e3f-3945-64781f08cca5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:20.882Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server: Handled event for server in area: event=member-join server=Node-23240f04-49e0-9e3f-3945-64781f08cca5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:20.882Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server: Adding LAN server: server="Node-23240f04-49e0-9e3f-3945-64781f08cca5 (Addr: tcp/127.0.0.1:16804) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:20.882Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Started DNS server: address=127.0.0.1:16799 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.882Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Started DNS server: address=127.0.0.1:16799 network=udp
>     writer.go:29: 2020-02-23T02:46:20.883Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Started HTTP server: address=127.0.0.1:16800 network=tcp
>     writer.go:29: 2020-02-23T02:46:20.883Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: started state syncer
>     writer.go:29: 2020-02-23T02:46:20.928Z [WARN]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:20.928Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: entering candidate state: node="Node at 127.0.0.1:16804 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:20.932Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:20.932Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: vote granted: from=23240f04-49e0-9e3f-3945-64781f08cca5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:20.932Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:20.932Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.raft: entering leader state: leader="Node at 127.0.0.1:16804 [Leader]"
>     writer.go:29: 2020-02-23T02:46:20.932Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:20.932Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server: New leader elected: payload=Node-23240f04-49e0-9e3f-3945-64781f08cca5
>     writer.go:29: 2020-02-23T02:46:20.940Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:20.948Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:20.948Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:20.948Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.server: Skipping self join check for node since the cluster is too small: node=Node-23240f04-49e0-9e3f-3945-64781f08cca5
>     writer.go:29: 2020-02-23T02:46:20.948Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server: member joined, marking health alive: member=Node-23240f04-49e0-9e3f-3945-64781f08cca5
>     writer.go:29: 2020-02-23T02:46:21.040Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.dns: request served from client: name=bar.node.consul. type=ANY class=IN latency=122.496µs client=127.0.0.1:43614 client_network=udp
>     writer.go:29: 2020-02-23T02:46:21.040Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:21.040Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:21.040Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.040Z [WARN]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.041Z [ERROR] TestDNS_NodeLookup_ANY_DontSuppressTXT.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:21.041Z [DEBUG] TestDNS_NodeLookup_ANY_DontSuppressTXT.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.062Z [WARN]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: consul server down
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: shutdown complete
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Stopping server: protocol=DNS address=127.0.0.1:16799 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Stopping server: protocol=DNS address=127.0.0.1:16799 network=udp
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Stopping server: protocol=HTTP address=127.0.0.1:16800 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:21.079Z [INFO]  TestDNS_NodeLookup_ANY_DontSuppressTXT: Endpoints down
> === RUN   TestDNS_NodeLookup_A_SuppressTXT
> --- PASS: TestDNS_NodeLookup_A_SuppressTXT (0.50s)
>     writer.go:29: 2020-02-23T02:46:21.087Z [WARN]  TestDNS_NodeLookup_A_SuppressTXT: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:21.087Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:21.087Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:21.147Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731 Address:127.0.0.1:16810}]"
>     writer.go:29: 2020-02-23T02:46:21.147Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.raft: entering follower state: follower="Node at 127.0.0.1:16810 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:21.147Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.serf.wan: serf: EventMemberJoin: Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.serf.lan: serf: EventMemberJoin: Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server: Adding LAN server: server="Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731 (Addr: tcp/127.0.0.1:16810) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Started DNS server: address=127.0.0.1:16805 network=udp
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server: Handled event for server in area: event=member-join server=Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Started DNS server: address=127.0.0.1:16805 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Started HTTP server: address=127.0.0.1:16806 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.148Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: started state syncer
>     writer.go:29: 2020-02-23T02:46:21.189Z [WARN]  TestDNS_NodeLookup_A_SuppressTXT.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:21.189Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.raft: entering candidate state: node="Node at 127.0.0.1:16810 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:21.193Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:21.193Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.server.raft: vote granted: from=cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:21.193Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:21.193Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.raft: entering leader state: leader="Node at 127.0.0.1:16810 [Leader]"
>     writer.go:29: 2020-02-23T02:46:21.193Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:21.193Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server: New leader elected: payload=Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731
>     writer.go:29: 2020-02-23T02:46:21.201Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:21.209Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:21.209Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.209Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.server: Skipping self join check for node since the cluster is too small: node=Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731
>     writer.go:29: 2020-02-23T02:46:21.209Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server: member joined, marking health alive: member=Node-cd0b74c4-78d7-3ac7-a1bd-5d4eb305f731
>     writer.go:29: 2020-02-23T02:46:21.418Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:21.421Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Synced node info
>     writer.go:29: 2020-02-23T02:46:21.421Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT: Node info in sync
>     writer.go:29: 2020-02-23T02:46:21.579Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.dns: request served from client: name=bar.node.consul. type=A class=IN latency=86.653µs client=127.0.0.1:60135 client_network=udp
>     writer.go:29: 2020-02-23T02:46:21.579Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:21.579Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:21.579Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.580Z [WARN]  TestDNS_NodeLookup_A_SuppressTXT.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.580Z [DEBUG] TestDNS_NodeLookup_A_SuppressTXT.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.581Z [WARN]  TestDNS_NodeLookup_A_SuppressTXT.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: consul server down
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: shutdown complete
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Stopping server: protocol=DNS address=127.0.0.1:16805 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Stopping server: protocol=DNS address=127.0.0.1:16805 network=udp
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Stopping server: protocol=HTTP address=127.0.0.1:16806 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:21.583Z [INFO]  TestDNS_NodeLookup_A_SuppressTXT: Endpoints down
> === RUN   TestDNS_EDNS0
> === PAUSE TestDNS_EDNS0
> === RUN   TestDNS_EDNS0_ECS
> --- SKIP: TestDNS_EDNS0_ECS (0.00s)
>     dns_test.go:761: DM-skipped
> === RUN   TestDNS_ReverseLookup
> === PAUSE TestDNS_ReverseLookup
> === RUN   TestDNS_ReverseLookup_CustomDomain
> === PAUSE TestDNS_ReverseLookup_CustomDomain
> === RUN   TestDNS_ReverseLookup_IPV6
> === PAUSE TestDNS_ReverseLookup_IPV6
> === RUN   TestDNS_ServiceReverseLookup
> --- SKIP: TestDNS_ServiceReverseLookup (0.00s)
>     dns_test.go:976: DM-skipped
> === RUN   TestDNS_ServiceReverseLookup_IPV6
> === PAUSE TestDNS_ServiceReverseLookup_IPV6
> === RUN   TestDNS_ServiceReverseLookup_CustomDomain
> === PAUSE TestDNS_ServiceReverseLookup_CustomDomain
> === RUN   TestDNS_SOA_Settings
> === PAUSE TestDNS_SOA_Settings
> === RUN   TestDNS_ServiceReverseLookupNodeAddress
> === PAUSE TestDNS_ServiceReverseLookupNodeAddress
> === RUN   TestDNS_ServiceLookupNoMultiCNAME
> --- SKIP: TestDNS_ServiceLookupNoMultiCNAME (0.00s)
>     dns_test.go:1204: DM-skipped
> === RUN   TestDNS_ServiceLookupPreferNoCNAME
> === PAUSE TestDNS_ServiceLookupPreferNoCNAME
> === RUN   TestDNS_ServiceLookupMultiAddrNoCNAME
> === PAUSE TestDNS_ServiceLookupMultiAddrNoCNAME
> === RUN   TestDNS_ServiceLookup
> === PAUSE TestDNS_ServiceLookup
> === RUN   TestDNS_ServiceLookupWithInternalServiceAddress
> === PAUSE TestDNS_ServiceLookupWithInternalServiceAddress
> === RUN   TestDNS_ConnectServiceLookup
> === PAUSE TestDNS_ConnectServiceLookup
> === RUN   TestDNS_ExternalServiceLookup
> === PAUSE TestDNS_ExternalServiceLookup
> === RUN   TestDNS_InifiniteRecursion
> === PAUSE TestDNS_InifiniteRecursion
> === RUN   TestDNS_ExternalServiceToConsulCNAMELookup
> === PAUSE TestDNS_ExternalServiceToConsulCNAMELookup
> === RUN   TestDNS_NSRecords
> --- SKIP: TestDNS_NSRecords (0.00s)
>     dns_test.go:1827: DM-skipped
> === RUN   TestDNS_NSRecords_IPV6
> === PAUSE TestDNS_NSRecords_IPV6
> === RUN   TestDNS_ExternalServiceToConsulCNAMENestedLookup
> === PAUSE TestDNS_ExternalServiceToConsulCNAMENestedLookup
> === RUN   TestDNS_ServiceLookup_ServiceAddress_A
> === PAUSE TestDNS_ServiceLookup_ServiceAddress_A
> === RUN   TestDNS_ServiceLookup_ServiceAddress_SRV
> === PAUSE TestDNS_ServiceLookup_ServiceAddress_SRV
> === RUN   TestDNS_ServiceLookup_ServiceAddressIPV6
> === PAUSE TestDNS_ServiceLookup_ServiceAddressIPV6
> === RUN   TestDNS_ServiceLookup_WanTranslation
> === PAUSE TestDNS_ServiceLookup_WanTranslation
> === RUN   TestDNS_Lookup_TaggedIPAddresses
> === PAUSE TestDNS_Lookup_TaggedIPAddresses
> === RUN   TestDNS_CaseInsensitiveServiceLookup
> === PAUSE TestDNS_CaseInsensitiveServiceLookup
> === RUN   TestDNS_ServiceLookup_TagPeriod
> === PAUSE TestDNS_ServiceLookup_TagPeriod
> === RUN   TestDNS_PreparedQueryNearIPEDNS
> === PAUSE TestDNS_PreparedQueryNearIPEDNS
> === RUN   TestDNS_PreparedQueryNearIP
> === PAUSE TestDNS_PreparedQueryNearIP
> === RUN   TestDNS_ServiceLookup_PreparedQueryNamePeriod
> === PAUSE TestDNS_ServiceLookup_PreparedQueryNamePeriod
> === RUN   TestDNS_ServiceLookup_Dedup
> --- SKIP: TestDNS_ServiceLookup_Dedup (0.00s)
>     dns_test.go:3183: DM-skipped
> === RUN   TestDNS_ServiceLookup_Dedup_SRV
> === PAUSE TestDNS_ServiceLookup_Dedup_SRV
> === RUN   TestDNS_Recurse
> === PAUSE TestDNS_Recurse
> === RUN   TestDNS_Recurse_Truncation
> === PAUSE TestDNS_Recurse_Truncation
> === RUN   TestDNS_RecursorTimeout
> === PAUSE TestDNS_RecursorTimeout
> === RUN   TestDNS_ServiceLookup_FilterCritical
> === PAUSE TestDNS_ServiceLookup_FilterCritical
> === RUN   TestDNS_ServiceLookup_OnlyFailing
> === PAUSE TestDNS_ServiceLookup_OnlyFailing
> === RUN   TestDNS_ServiceLookup_OnlyPassing
> === PAUSE TestDNS_ServiceLookup_OnlyPassing
> === RUN   TestDNS_ServiceLookup_Randomize
> === PAUSE TestDNS_ServiceLookup_Randomize
> === RUN   TestBinarySearch
> === PAUSE TestBinarySearch
> === RUN   TestDNS_TCP_and_UDP_Truncate
> --- SKIP: TestDNS_TCP_and_UDP_Truncate (0.00s)
>     dns_test.go:4078: DM-skipped
> === RUN   TestDNS_ServiceLookup_Truncate
> === PAUSE TestDNS_ServiceLookup_Truncate
> === RUN   TestDNS_ServiceLookup_LargeResponses
> === PAUSE TestDNS_ServiceLookup_LargeResponses
> === RUN   TestDNS_ServiceLookup_ARecordLimits
> --- SKIP: TestDNS_ServiceLookup_ARecordLimits (0.00s)
>     dns_test.go:4529: DM-skipped
> === RUN   TestDNS_ServiceLookup_AnswerLimits
> === PAUSE TestDNS_ServiceLookup_AnswerLimits
> === RUN   TestDNS_ServiceLookup_CNAME
> --- SKIP: TestDNS_ServiceLookup_CNAME (0.00s)
>     dns_test.go:4674: DM-skipped
> === RUN   TestDNS_ServiceLookup_ServiceAddress_CNAME
> === PAUSE TestDNS_ServiceLookup_ServiceAddress_CNAME
> === RUN   TestDNS_NodeLookup_TTL
> === PAUSE TestDNS_NodeLookup_TTL
> === RUN   TestDNS_ServiceLookup_TTL
> === PAUSE TestDNS_ServiceLookup_TTL
> === RUN   TestDNS_PreparedQuery_TTL
> === PAUSE TestDNS_PreparedQuery_TTL
> === RUN   TestDNS_PreparedQuery_Failover
> --- SKIP: TestDNS_PreparedQuery_Failover (0.00s)
>     dns_test.go:5194: DM-skipped
> === RUN   TestDNS_ServiceLookup_SRV_RFC
> --- SKIP: TestDNS_ServiceLookup_SRV_RFC (0.00s)
>     dns_test.go:5307: DM-skipped
> === RUN   TestDNS_ServiceLookup_SRV_RFC_TCP_Default
> === PAUSE TestDNS_ServiceLookup_SRV_RFC_TCP_Default
> === RUN   TestDNS_ServiceLookup_FilterACL
> === PAUSE TestDNS_ServiceLookup_FilterACL
> === RUN   TestDNS_ServiceLookup_MetaTXT
> --- PASS: TestDNS_ServiceLookup_MetaTXT (0.19s)
>     writer.go:29: 2020-02-23T02:46:21.595Z [WARN]  TestDNS_ServiceLookup_MetaTXT: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:21.595Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:21.595Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:21.612Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8 Address:127.0.0.1:16816}]"
>     writer.go:29: 2020-02-23T02:46:21.612Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.raft: entering follower state: follower="Node at 127.0.0.1:16816 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:21.613Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.serf.wan: serf: EventMemberJoin: Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.613Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.serf.lan: serf: EventMemberJoin: Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.613Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server: Adding LAN server: server="Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8 (Addr: tcp/127.0.0.1:16816) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:21.614Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server: Handled event for server in area: event=member-join server=Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:21.614Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Started DNS server: address=127.0.0.1:16811 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.614Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Started DNS server: address=127.0.0.1:16811 network=udp
>     writer.go:29: 2020-02-23T02:46:21.614Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Started HTTP server: address=127.0.0.1:16812 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.614Z [INFO]  TestDNS_ServiceLookup_MetaTXT: started state syncer
>     writer.go:29: 2020-02-23T02:46:21.648Z [WARN]  TestDNS_ServiceLookup_MetaTXT.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:21.649Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.raft: entering candidate state: node="Node at 127.0.0.1:16816 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:21.652Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:21.652Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.server.raft: vote granted: from=1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:21.652Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:21.652Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.raft: entering leader state: leader="Node at 127.0.0.1:16816 [Leader]"
>     writer.go:29: 2020-02-23T02:46:21.652Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:21.652Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server: New leader elected: payload=Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8
>     writer.go:29: 2020-02-23T02:46:21.666Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:21.673Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:21.673Z [INFO]  TestDNS_ServiceLookup_MetaTXT.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.673Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.server: Skipping self join check for node since the cluster is too small: node=Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8
>     writer.go:29: 2020-02-23T02:46:21.673Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server: member joined, marking health alive: member=Node-1cc71aea-9e04-dee0-80dd-4a3d0c0a72d8
>     writer.go:29: 2020-02-23T02:46:21.770Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=132.01µs client=127.0.0.1:40768 client_network=udp
>     writer.go:29: 2020-02-23T02:46:21.770Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:21.770Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:21.770Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.770Z [WARN]  TestDNS_ServiceLookup_MetaTXT.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.770Z [ERROR] TestDNS_ServiceLookup_MetaTXT.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:21.770Z [DEBUG] TestDNS_ServiceLookup_MetaTXT.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.773Z [WARN]  TestDNS_ServiceLookup_MetaTXT.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.774Z [INFO]  TestDNS_ServiceLookup_MetaTXT.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:21.774Z [INFO]  TestDNS_ServiceLookup_MetaTXT: consul server down
>     writer.go:29: 2020-02-23T02:46:21.774Z [INFO]  TestDNS_ServiceLookup_MetaTXT: shutdown complete
>     writer.go:29: 2020-02-23T02:46:21.774Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Stopping server: protocol=DNS address=127.0.0.1:16811 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.774Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Stopping server: protocol=DNS address=127.0.0.1:16811 network=udp
>     writer.go:29: 2020-02-23T02:46:21.774Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Stopping server: protocol=HTTP address=127.0.0.1:16812 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.775Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:21.775Z [INFO]  TestDNS_ServiceLookup_MetaTXT: Endpoints down
> === RUN   TestDNS_ServiceLookup_SuppressTXT
> --- PASS: TestDNS_ServiceLookup_SuppressTXT (0.17s)
>     writer.go:29: 2020-02-23T02:46:21.798Z [WARN]  TestDNS_ServiceLookup_SuppressTXT: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:21.798Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:21.799Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:21.810Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d4d0931a-eea3-2864-711f-30e848691bff Address:127.0.0.1:16822}]"
>     writer.go:29: 2020-02-23T02:46:21.810Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.raft: entering follower state: follower="Node at 127.0.0.1:16822 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:21.810Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.serf.wan: serf: EventMemberJoin: Node-d4d0931a-eea3-2864-711f-30e848691bff.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.serf.lan: serf: EventMemberJoin: Node-d4d0931a-eea3-2864-711f-30e848691bff 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Started DNS server: address=127.0.0.1:16817 network=udp
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server: Handled event for server in area: event=member-join server=Node-d4d0931a-eea3-2864-711f-30e848691bff.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server: Adding LAN server: server="Node-d4d0931a-eea3-2864-711f-30e848691bff (Addr: tcp/127.0.0.1:16822) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Started DNS server: address=127.0.0.1:16817 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Started HTTP server: address=127.0.0.1:16818 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.811Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: started state syncer
>     writer.go:29: 2020-02-23T02:46:21.876Z [WARN]  TestDNS_ServiceLookup_SuppressTXT.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:21.876Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.raft: entering candidate state: node="Node at 127.0.0.1:16822 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:21.879Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:21.879Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.server.raft: vote granted: from=d4d0931a-eea3-2864-711f-30e848691bff term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:21.880Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:21.880Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.raft: entering leader state: leader="Node at 127.0.0.1:16822 [Leader]"
>     writer.go:29: 2020-02-23T02:46:21.880Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:21.880Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server: New leader elected: payload=Node-d4d0931a-eea3-2864-711f-30e848691bff
>     writer.go:29: 2020-02-23T02:46:21.887Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:21.896Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:21.896Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.896Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.server: Skipping self join check for node since the cluster is too small: node=Node-d4d0931a-eea3-2864-711f-30e848691bff
>     writer.go:29: 2020-02-23T02:46:21.896Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server: member joined, marking health alive: member=Node-d4d0931a-eea3-2864-711f-30e848691bff
>     writer.go:29: 2020-02-23T02:46:21.941Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=131.556µs client=127.0.0.1:51810 client_network=udp
>     writer.go:29: 2020-02-23T02:46:21.941Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:21.941Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:21.941Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.941Z [WARN]  TestDNS_ServiceLookup_SuppressTXT.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.941Z [ERROR] TestDNS_ServiceLookup_SuppressTXT.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:21.941Z [DEBUG] TestDNS_ServiceLookup_SuppressTXT.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:21.943Z [WARN]  TestDNS_ServiceLookup_SuppressTXT.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: consul server down
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: shutdown complete
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Stopping server: protocol=DNS address=127.0.0.1:16817 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Stopping server: protocol=DNS address=127.0.0.1:16817 network=udp
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Stopping server: protocol=HTTP address=127.0.0.1:16818 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:21.945Z [INFO]  TestDNS_ServiceLookup_SuppressTXT: Endpoints down
> === RUN   TestDNS_AddressLookup
> === PAUSE TestDNS_AddressLookup
> === RUN   TestDNS_AddressLookupIPV6
> --- SKIP: TestDNS_AddressLookupIPV6 (0.00s)
>     dns_test.go:5636: DM-skipped
> === RUN   TestDNS_NonExistingLookup
> === PAUSE TestDNS_NonExistingLookup
> === RUN   TestDNS_NonExistingLookupEmptyAorAAAA
> === PAUSE TestDNS_NonExistingLookupEmptyAorAAAA
> === RUN   TestDNS_AltDomains_Service
> === PAUSE TestDNS_AltDomains_Service
> === RUN   TestDNS_AltDomains_SOA
> === PAUSE TestDNS_AltDomains_SOA
> === RUN   TestDNS_AltDomains_Overlap
> === PAUSE TestDNS_AltDomains_Overlap
> === RUN   TestDNS_PreparedQuery_AllowStale
> === PAUSE TestDNS_PreparedQuery_AllowStale
> === RUN   TestDNS_InvalidQueries
> === PAUSE TestDNS_InvalidQueries
> === RUN   TestDNS_PreparedQuery_AgentSource
> === PAUSE TestDNS_PreparedQuery_AgentSource
> === RUN   TestDNS_trimUDPResponse_NoTrim
> === PAUSE TestDNS_trimUDPResponse_NoTrim
> === RUN   TestDNS_trimUDPResponse_TrimLimit
> === PAUSE TestDNS_trimUDPResponse_TrimLimit
> === RUN   TestDNS_trimUDPResponse_TrimSize
> === PAUSE TestDNS_trimUDPResponse_TrimSize
> === RUN   TestDNS_trimUDPResponse_TrimSizeEDNS
> === PAUSE TestDNS_trimUDPResponse_TrimSizeEDNS
> === RUN   TestDNS_syncExtra
> === PAUSE TestDNS_syncExtra
> === RUN   TestDNS_Compression_trimUDPResponse
> === PAUSE TestDNS_Compression_trimUDPResponse
> === RUN   TestDNS_Compression_Query
> === PAUSE TestDNS_Compression_Query
> === RUN   TestDNS_Compression_ReverseLookup
> === PAUSE TestDNS_Compression_ReverseLookup
> === RUN   TestDNS_Compression_Recurse
> === PAUSE TestDNS_Compression_Recurse
> === RUN   TestDNSInvalidRegex
> === RUN   TestDNSInvalidRegex/Valid_Hostname
> === RUN   TestDNSInvalidRegex/Valid_Hostname#01
> === RUN   TestDNSInvalidRegex/Invalid_Hostname_with_special_chars
> === RUN   TestDNSInvalidRegex/Invalid_Hostname_with_special_chars_in_the_end
> === RUN   TestDNSInvalidRegex/Whitespace
> === RUN   TestDNSInvalidRegex/Only_special_chars
> --- PASS: TestDNSInvalidRegex (0.00s)
>     --- PASS: TestDNSInvalidRegex/Valid_Hostname (0.00s)
>     --- PASS: TestDNSInvalidRegex/Valid_Hostname#01 (0.00s)
>     --- PASS: TestDNSInvalidRegex/Invalid_Hostname_with_special_chars (0.00s)
>     --- PASS: TestDNSInvalidRegex/Invalid_Hostname_with_special_chars_in_the_end (0.00s)
>     --- PASS: TestDNSInvalidRegex/Whitespace (0.00s)
>     --- PASS: TestDNSInvalidRegex/Only_special_chars (0.00s)
> === RUN   TestDNS_ConfigReload
> === PAUSE TestDNS_ConfigReload
> === RUN   TestDNS_ReloadConfig_DuringQuery
> --- SKIP: TestDNS_ReloadConfig_DuringQuery (0.00s)
>     dns_test.go:6911: DM-skipped
> === RUN   TestEventFire
> === PAUSE TestEventFire
> === RUN   TestEventFire_token
> === PAUSE TestEventFire_token
> === RUN   TestEventList
> === PAUSE TestEventList
> === RUN   TestEventList_Filter
> === PAUSE TestEventList_Filter
> === RUN   TestEventList_ACLFilter
> === PAUSE TestEventList_ACLFilter
> === RUN   TestEventList_Blocking
> === PAUSE TestEventList_Blocking
> === RUN   TestEventList_EventBufOrder
> === PAUSE TestEventList_EventBufOrder
> === RUN   TestUUIDToUint64
> === PAUSE TestUUIDToUint64
> === RUN   TestHealthChecksInState
> --- SKIP: TestHealthChecksInState (0.00s)
>     health_endpoint_test.go:23: DM-skipped
> === RUN   TestHealthChecksInState_NodeMetaFilter
> === PAUSE TestHealthChecksInState_NodeMetaFilter
> === RUN   TestHealthChecksInState_Filter
> === PAUSE TestHealthChecksInState_Filter
> === RUN   TestHealthChecksInState_DistanceSort
> === PAUSE TestHealthChecksInState_DistanceSort
> === RUN   TestHealthNodeChecks
> === PAUSE TestHealthNodeChecks
> === RUN   TestHealthNodeChecks_Filtering
> === PAUSE TestHealthNodeChecks_Filtering
> === RUN   TestHealthServiceChecks
> === PAUSE TestHealthServiceChecks
> === RUN   TestHealthServiceChecks_NodeMetaFilter
> === PAUSE TestHealthServiceChecks_NodeMetaFilter
> === RUN   TestHealthServiceChecks_Filtering
> === PAUSE TestHealthServiceChecks_Filtering
> === RUN   TestHealthServiceChecks_DistanceSort
> === PAUSE TestHealthServiceChecks_DistanceSort
> === RUN   TestHealthServiceNodes
> === PAUSE TestHealthServiceNodes
> === RUN   TestHealthServiceNodes_NodeMetaFilter
> === PAUSE TestHealthServiceNodes_NodeMetaFilter
> === RUN   TestHealthServiceNodes_Filter
> --- SKIP: TestHealthServiceNodes_Filter (0.00s)
>     health_endpoint_test.go:741: DM-skipped
> === RUN   TestHealthServiceNodes_DistanceSort
> === PAUSE TestHealthServiceNodes_DistanceSort
> === RUN   TestHealthServiceNodes_PassingFilter
> --- SKIP: TestHealthServiceNodes_PassingFilter (0.00s)
>     health_endpoint_test.go:883: DM-skipped
> === RUN   TestHealthServiceNodes_CheckType
> === PAUSE TestHealthServiceNodes_CheckType
> === RUN   TestHealthServiceNodes_WanTranslation
> === PAUSE TestHealthServiceNodes_WanTranslation
> === RUN   TestHealthConnectServiceNodes
> === PAUSE TestHealthConnectServiceNodes
> === RUN   TestHealthConnectServiceNodes_Filter
> === PAUSE TestHealthConnectServiceNodes_Filter
> === RUN   TestHealthConnectServiceNodes_PassingFilter
> === PAUSE TestHealthConnectServiceNodes_PassingFilter
> === RUN   TestFilterNonPassing
> === PAUSE TestFilterNonPassing
> === RUN   TestDecodeACLPolicyWrite
> === RUN   TestDecodeACLPolicyWrite/hashes_base64_encoded
> === RUN   TestDecodeACLPolicyWrite/hashes_not-base64_encoded
> === RUN   TestDecodeACLPolicyWrite/hashes_empty_string
> === RUN   TestDecodeACLPolicyWrite/hashes_null
> === RUN   TestDecodeACLPolicyWrite/hashes_numeric_value
> --- PASS: TestDecodeACLPolicyWrite (0.00s)
>     --- PASS: TestDecodeACLPolicyWrite/hashes_base64_encoded (0.00s)
>     --- PASS: TestDecodeACLPolicyWrite/hashes_not-base64_encoded (0.00s)
>     --- PASS: TestDecodeACLPolicyWrite/hashes_empty_string (0.00s)
>     --- PASS: TestDecodeACLPolicyWrite/hashes_null (0.00s)
>     --- PASS: TestDecodeACLPolicyWrite/hashes_numeric_value (0.00s)
> === RUN   TestDecodeACLToken
> === RUN   TestDecodeACLToken/timestamps_correctly_RFC3339_formatted
> === RUN   TestDecodeACLToken/timestamps_incorrectly_formatted_(RFC822)
> === RUN   TestDecodeACLToken/timestamps_incorrectly_formatted_(RFC850)
> === RUN   TestDecodeACLToken/timestamps_empty_string
> === RUN   TestDecodeACLToken/timestamps_null
> === RUN   TestDecodeACLToken/durations_correctly_formatted
> === RUN   TestDecodeACLToken/durations_small,_correctly_formatted
> === RUN   TestDecodeACLToken/durations_incorrectly_formatted
> === RUN   TestDecodeACLToken/durations_empty_string
> === RUN   TestDecodeACLToken/durations_string_without_quotes
> === RUN   TestDecodeACLToken/durations_numeric
> === RUN   TestDecodeACLToken/durations_negative
> === RUN   TestDecodeACLToken/durations_numeric_and_negative
> === RUN   TestDecodeACLToken/hashes_base64_encoded
> === RUN   TestDecodeACLToken/hashes_not-base64_encoded
> === RUN   TestDecodeACLToken/hashes_empty_string
> === RUN   TestDecodeACLToken/hashes_null
> === RUN   TestDecodeACLToken/hashes_numeric_value
> --- PASS: TestDecodeACLToken (0.00s)
>     --- PASS: TestDecodeACLToken/timestamps_correctly_RFC3339_formatted (0.00s)
>     --- PASS: TestDecodeACLToken/timestamps_incorrectly_formatted_(RFC822) (0.00s)
>     --- PASS: TestDecodeACLToken/timestamps_incorrectly_formatted_(RFC850) (0.00s)
>     --- PASS: TestDecodeACLToken/timestamps_empty_string (0.00s)
>     --- PASS: TestDecodeACLToken/timestamps_null (0.00s)
>     --- PASS: TestDecodeACLToken/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeACLToken/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeACLToken/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeACLToken/durations_empty_string (0.00s)
>     --- PASS: TestDecodeACLToken/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeACLToken/durations_numeric (0.00s)
>     --- PASS: TestDecodeACLToken/durations_negative (0.00s)
>     --- PASS: TestDecodeACLToken/durations_numeric_and_negative (0.00s)
>     --- PASS: TestDecodeACLToken/hashes_base64_encoded (0.00s)
>     --- PASS: TestDecodeACLToken/hashes_not-base64_encoded (0.00s)
>     --- PASS: TestDecodeACLToken/hashes_empty_string (0.00s)
>     --- PASS: TestDecodeACLToken/hashes_null (0.00s)
>     --- PASS: TestDecodeACLToken/hashes_numeric_value (0.00s)
> === RUN   TestDecodeACLRoleWrite
> === RUN   TestDecodeACLRoleWrite/hashes_base64_encoded
> === RUN   TestDecodeACLRoleWrite/hashes_not-base64_encoded
> === RUN   TestDecodeACLRoleWrite/hashes_empty_string
> === RUN   TestDecodeACLRoleWrite/hashes_null
> === RUN   TestDecodeACLRoleWrite/hashes_numeric_value
> --- PASS: TestDecodeACLRoleWrite (0.00s)
>     --- PASS: TestDecodeACLRoleWrite/hashes_base64_encoded (0.00s)
>     --- PASS: TestDecodeACLRoleWrite/hashes_not-base64_encoded (0.00s)
>     --- PASS: TestDecodeACLRoleWrite/hashes_empty_string (0.00s)
>     --- PASS: TestDecodeACLRoleWrite/hashes_null (0.00s)
>     --- PASS: TestDecodeACLRoleWrite/hashes_numeric_value (0.00s)
> === RUN   TestDecodeAgentRegisterCheck
> === RUN   TestDecodeAgentRegisterCheck/durations_correctly_formatted
> === RUN   TestDecodeAgentRegisterCheck/durations_small,_correctly_formatted
> === RUN   TestDecodeAgentRegisterCheck/durations_incorrectly_formatted
> === RUN   TestDecodeAgentRegisterCheck/durations_empty_string
> === RUN   TestDecodeAgentRegisterCheck/durations_string_without_quotes
> === RUN   TestDecodeAgentRegisterCheck/durations_numeric
> === RUN   TestDecodeAgentRegisterCheck/durations_negative
> === RUN   TestDecodeAgentRegisterCheck/durations_numeric_and_negative
> === RUN   TestDecodeAgentRegisterCheck/filled_in_map
> === RUN   TestDecodeAgentRegisterCheck/empty_map
> === RUN   TestDecodeAgentRegisterCheck/empty_map#01
> === RUN   TestDecodeAgentRegisterCheck/malformatted_map
> === RUN   TestDecodeAgentRegisterCheck/not_a_map_(slice)
> === RUN   TestDecodeAgentRegisterCheck/not_a_map_(int)
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_all_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_first_and_second_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_first_and_third_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_second_and_third_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_first_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_second_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_third_set
> === RUN   TestDecodeAgentRegisterCheck/scriptArgs:_none_set
> === RUN   TestDecodeAgentRegisterCheck/deregister:_both_set
> === RUN   TestDecodeAgentRegisterCheck/deregister:_first_set
> === RUN   TestDecodeAgentRegisterCheck/deregister:_second_set
> === RUN   TestDecodeAgentRegisterCheck/deregister:_neither_set
> === RUN   TestDecodeAgentRegisterCheck/dockerContainerID:_both_set
> === RUN   TestDecodeAgentRegisterCheck/dockerContainerID:_first_set
> === RUN   TestDecodeAgentRegisterCheck/dockerContainerID:_second_set
> === RUN   TestDecodeAgentRegisterCheck/dockerContainerID:_neither_set
> === RUN   TestDecodeAgentRegisterCheck/tlsSkipVerify:_both_set
> === RUN   TestDecodeAgentRegisterCheck/tlsSkipVerify:_first_set
> === RUN   TestDecodeAgentRegisterCheck/tlsSkipVerify:_second_set
> === RUN   TestDecodeAgentRegisterCheck/tlsSkipVerify:_neither_set
> === RUN   TestDecodeAgentRegisterCheck/serviceID:_both_set
> === RUN   TestDecodeAgentRegisterCheck/serviceID:_first_set
> === RUN   TestDecodeAgentRegisterCheck/serviceID:_second_set
> === RUN   TestDecodeAgentRegisterCheck/serviceID:_neither_set
> --- PASS: TestDecodeAgentRegisterCheck (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_empty_string (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_numeric (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_negative (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/durations_numeric_and_negative (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/filled_in_map (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/empty_map (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/empty_map#01 (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/malformatted_map (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/not_a_map_(slice) (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/not_a_map_(int) (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_all_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_first_and_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_first_and_third_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_second_and_third_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_third_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/scriptArgs:_none_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/deregister:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/deregister:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/deregister:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/deregister:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/dockerContainerID:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/dockerContainerID:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/dockerContainerID:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/dockerContainerID:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/tlsSkipVerify:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/tlsSkipVerify:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/tlsSkipVerify:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/tlsSkipVerify:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/serviceID:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/serviceID:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/serviceID:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterCheck/serviceID:_neither_set (0.00s)
> === RUN   TestDecodeAgentRegisterService
> === RUN   TestDecodeAgentRegisterService/translateEnableTagTCs:_both_set
> === RUN   TestDecodeAgentRegisterService/translateEnableTagTCs:_first_set
> === RUN   TestDecodeAgentRegisterService/translateEnableTagTCs:_second_set
> === RUN   TestDecodeAgentRegisterService/translateEnableTagTCs:_neither_set
> === RUN   TestDecodeAgentRegisterService/DestinationName:_both_set
> === RUN   TestDecodeAgentRegisterService/DestinationName:_first_set
> === RUN   TestDecodeAgentRegisterService/DestinationName:_second_set
> === RUN   TestDecodeAgentRegisterService/DestinationName:_neither_set
> === RUN   TestDecodeAgentRegisterService/DestinationType:_both_set
> === RUN   TestDecodeAgentRegisterService/DestinationType:_first_set
> === RUN   TestDecodeAgentRegisterService/DestinationType:_second_set
> === RUN   TestDecodeAgentRegisterService/DestinationType:_neither_set
> === RUN   TestDecodeAgentRegisterService/DestinationNamespace:_both_set
> === RUN   TestDecodeAgentRegisterService/DestinationNamespace:_first_set
> === RUN   TestDecodeAgentRegisterService/DestinationNamespace:_second_set
> === RUN   TestDecodeAgentRegisterService/DestinationNamespace:_neither_set
> === RUN   TestDecodeAgentRegisterService/LocalBindPort:_both_set
> === RUN   TestDecodeAgentRegisterService/LocalBindPort:_first_set
> === RUN   TestDecodeAgentRegisterService/LocalBindPort:_second_set
> === RUN   TestDecodeAgentRegisterService/LocalBindPort:_neither_set
> === RUN   TestDecodeAgentRegisterService/LocalBindAddress:_both_set
> === RUN   TestDecodeAgentRegisterService/LocalBindAddress:_first_set
> === RUN   TestDecodeAgentRegisterService/LocalBindAddress:_second_set
> === RUN   TestDecodeAgentRegisterService/LocalBindAddress:_neither_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceName:_both_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceName:_first_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceName:_second_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceName:_neither_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceID:_both_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceID:_first_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceID:_second_set
> === RUN   TestDecodeAgentRegisterService/DestinationServiceID:_neither_set
> === RUN   TestDecodeAgentRegisterService/LocalServicePort:_both_set
> === RUN   TestDecodeAgentRegisterService/LocalServicePort:_first_set
> === RUN   TestDecodeAgentRegisterService/LocalServicePort:_second_set
> === RUN   TestDecodeAgentRegisterService/LocalServicePort:_neither_set
> === RUN   TestDecodeAgentRegisterService/LocalServiceAddress:_both_set
> === RUN   TestDecodeAgentRegisterService/LocalServiceAddress:_first_set
> === RUN   TestDecodeAgentRegisterService/LocalServiceAddress:_second_set
> === RUN   TestDecodeAgentRegisterService/LocalServiceAddress:_neither_set
> === RUN   TestDecodeAgentRegisterService/SidecarService:_both_set
> === RUN   TestDecodeAgentRegisterService/SidecarService:_first_set
> === RUN   TestDecodeAgentRegisterService/SidecarService:_second_set
> === RUN   TestDecodeAgentRegisterService/SidecarService:_neither_set
> === RUN   TestDecodeAgentRegisterService/LocalPathPort:_both_set
> === RUN   TestDecodeAgentRegisterService/LocalPathPort:_first_set
> === RUN   TestDecodeAgentRegisterService/LocalPathPort:_second_set
> === RUN   TestDecodeAgentRegisterService/LocalPathPort:_neither_set
> === RUN   TestDecodeAgentRegisterService/ListenerPort:_both_set
> === RUN   TestDecodeAgentRegisterService/ListenerPort:_first_set
> === RUN   TestDecodeAgentRegisterService/ListenerPort:_second_set
> === RUN   TestDecodeAgentRegisterService/ListenerPort:_neither_set
> === RUN   TestDecodeAgentRegisterService/TaggedAddresses:_both_set
> === RUN   TestDecodeAgentRegisterService/TaggedAddresses:_first_set
> === RUN   TestDecodeAgentRegisterService/TaggedAddresses:_second_set
> === RUN   TestDecodeAgentRegisterService/TaggedAddresses:_neither_set
> === RUN   TestDecodeAgentRegisterService/durations_correctly_formatted
> === RUN   TestDecodeAgentRegisterService/durations_small,_correctly_formatted
> === RUN   TestDecodeAgentRegisterService/durations_incorrectly_formatted
> === RUN   TestDecodeAgentRegisterService/durations_empty_string
> === RUN   TestDecodeAgentRegisterService/durations_string_without_quotes
> === RUN   TestDecodeAgentRegisterService/durations_numeric
> === RUN   TestDecodeAgentRegisterService/durations_negative
> === RUN   TestDecodeAgentRegisterService/durations_numeric_and_negative
> === RUN   TestDecodeAgentRegisterService/filled_in_map
> === RUN   TestDecodeAgentRegisterService/empty_map
> === RUN   TestDecodeAgentRegisterService/empty_map#01
> === RUN   TestDecodeAgentRegisterService/malformatted_map
> === RUN   TestDecodeAgentRegisterService/not_a_map_(slice)
> === RUN   TestDecodeAgentRegisterService/not_a_map_(int)
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_all_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_first_and_second_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_first_and_third_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_second_and_third_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_first_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_second_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_third_set
> === RUN   TestDecodeAgentRegisterService/scriptArgs:_none_set
> === RUN   TestDecodeAgentRegisterService/deregister:_both_set
> === RUN   TestDecodeAgentRegisterService/deregister:_first_set
> === RUN   TestDecodeAgentRegisterService/deregister:_second_set
> === RUN   TestDecodeAgentRegisterService/deregister:_neither_set
> === RUN   TestDecodeAgentRegisterService/dockerContainerID:_both_set
> === RUN   TestDecodeAgentRegisterService/dockerContainerID:_first_set
> === RUN   TestDecodeAgentRegisterService/dockerContainerID:_second_set
> === RUN   TestDecodeAgentRegisterService/dockerContainerID:_neither_set
> === RUN   TestDecodeAgentRegisterService/tlsSkipVerify:_both_set
> === RUN   TestDecodeAgentRegisterService/tlsSkipVerify:_first_set
> === RUN   TestDecodeAgentRegisterService/tlsSkipVerify:_second_set
> === RUN   TestDecodeAgentRegisterService/tlsSkipVerify:_neither_set
> === RUN   TestDecodeAgentRegisterService/serviceID:_both_set
> === RUN   TestDecodeAgentRegisterService/serviceID:_first_set
> === RUN   TestDecodeAgentRegisterService/serviceID:_second_set
> === RUN   TestDecodeAgentRegisterService/serviceID:_neither_set
> --- PASS: TestDecodeAgentRegisterService (0.01s)
>     --- PASS: TestDecodeAgentRegisterService/translateEnableTagTCs:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/translateEnableTagTCs:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/translateEnableTagTCs:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/translateEnableTagTCs:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationName:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationName:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationName:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationName:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationType:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationType:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationType:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationType:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationNamespace:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationNamespace:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationNamespace:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationNamespace:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindPort:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindPort:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindPort:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindPort:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindAddress:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindAddress:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindAddress:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalBindAddress:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceName:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceName:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceName:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceName:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceID:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceID:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceID:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/DestinationServiceID:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServicePort:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServicePort:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServicePort:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServicePort:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServiceAddress:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServiceAddress:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServiceAddress:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalServiceAddress:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/SidecarService:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/SidecarService:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/SidecarService:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/SidecarService:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalPathPort:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalPathPort:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalPathPort:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/LocalPathPort:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/ListenerPort:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/ListenerPort:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/ListenerPort:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/ListenerPort:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/TaggedAddresses:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/TaggedAddresses:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/TaggedAddresses:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/TaggedAddresses:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_empty_string (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_numeric (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_negative (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/durations_numeric_and_negative (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/filled_in_map (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/empty_map (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/empty_map#01 (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/malformatted_map (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/not_a_map_(slice) (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/not_a_map_(int) (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_all_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_first_and_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_first_and_third_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_second_and_third_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_third_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/scriptArgs:_none_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/deregister:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/deregister:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/deregister:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/deregister:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/dockerContainerID:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/dockerContainerID:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/dockerContainerID:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/dockerContainerID:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/tlsSkipVerify:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/tlsSkipVerify:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/tlsSkipVerify:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/tlsSkipVerify:_neither_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/serviceID:_both_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/serviceID:_first_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/serviceID:_second_set (0.00s)
>     --- PASS: TestDecodeAgentRegisterService/serviceID:_neither_set (0.00s)
> === RUN   TestDecodeCatalogRegister
> === RUN   TestDecodeCatalogRegister/durations_correctly_formatted
> === RUN   TestDecodeCatalogRegister/durations_small,_correctly_formatted
> === RUN   TestDecodeCatalogRegister/durations_incorrectly_formatted
> === RUN   TestDecodeCatalogRegister/durations_empty_string
> === RUN   TestDecodeCatalogRegister/durations_string_without_quotes
> === RUN   TestDecodeCatalogRegister/durations_numeric
> === RUN   TestDecodeCatalogRegister/durations_negative
> === RUN   TestDecodeCatalogRegister/durations_numeric_and_negative
> --- PASS: TestDecodeCatalogRegister (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_empty_string (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_numeric (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_negative (0.00s)
>     --- PASS: TestDecodeCatalogRegister/durations_numeric_and_negative (0.00s)
> === RUN   TestDecodeDiscoveryChainRead
> === RUN   TestDecodeDiscoveryChainRead/durations_correctly_formatted
> === RUN   TestDecodeDiscoveryChainRead/durations_small,_correctly_formatted
> === RUN   TestDecodeDiscoveryChainRead/durations_incorrectly_formatted
> === RUN   TestDecodeDiscoveryChainRead/durations_empty_string
> === RUN   TestDecodeDiscoveryChainRead/durations_string_without_quotes
> === RUN   TestDecodeDiscoveryChainRead/durations_numeric
> === RUN   TestDecodeDiscoveryChainRead/durations_negative
> === RUN   TestDecodeDiscoveryChainRead/durations_numeric_and_negative
> === RUN   TestDecodeDiscoveryChainRead/positive_string_integer_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/negative_string_integer_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/positive_integer_for_string_field_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/negative_integer_for_string_field_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/bool_for_string_field_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/float_for_string_field_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/map_for_string_field_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/slice_for_string_field_(weakly_typed)
> === RUN   TestDecodeDiscoveryChainRead/OverrideMeshGateway:_both_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideMeshGateway:_first_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideMeshGateway:_second_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideMeshGateway:_neither_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideProtocol:_both_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideProtocol:_first_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideProtocol:_second_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideProtocol:_neither_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_both_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_first_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_second_set
> === RUN   TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_neither_set
> --- PASS: TestDecodeDiscoveryChainRead (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_empty_string (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_numeric (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_negative (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/durations_numeric_and_negative (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/positive_string_integer_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/negative_string_integer_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/positive_integer_for_string_field_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/negative_integer_for_string_field_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/bool_for_string_field_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/float_for_string_field_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/map_for_string_field_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/slice_for_string_field_(weakly_typed) (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideMeshGateway:_both_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideMeshGateway:_first_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideMeshGateway:_second_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideMeshGateway:_neither_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideProtocol:_both_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideProtocol:_first_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideProtocol:_second_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideProtocol:_neither_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_both_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_first_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_second_set (0.00s)
>     --- PASS: TestDecodeDiscoveryChainRead/OverrideConnectTimeout:_neither_set (0.00s)
> === RUN   TestDecodeIntentionCreate
> === RUN   TestDecodeIntentionCreate/hashes_base64_encoded
> === RUN   TestDecodeIntentionCreate/hashes_not-base64_encoded
> === RUN   TestDecodeIntentionCreate/hashes_empty_string
> === RUN   TestDecodeIntentionCreate/hashes_null
> === RUN   TestDecodeIntentionCreate/hashes_numeric_value
> === RUN   TestDecodeIntentionCreate/timestamps_correctly_RFC3339_formatted
> === RUN   TestDecodeIntentionCreate/timestamps_incorrectly_formatted_(RFC822)
> === RUN   TestDecodeIntentionCreate/timestamps_incorrectly_formatted_(RFC850)
> === RUN   TestDecodeIntentionCreate/timestamps_empty_string
> === RUN   TestDecodeIntentionCreate/timestamps_null
> --- PASS: TestDecodeIntentionCreate (0.00s)
>     --- PASS: TestDecodeIntentionCreate/hashes_base64_encoded (0.00s)
>     --- PASS: TestDecodeIntentionCreate/hashes_not-base64_encoded (0.00s)
>     --- PASS: TestDecodeIntentionCreate/hashes_empty_string (0.00s)
>     --- PASS: TestDecodeIntentionCreate/hashes_null (0.00s)
>     --- PASS: TestDecodeIntentionCreate/hashes_numeric_value (0.00s)
>     --- PASS: TestDecodeIntentionCreate/timestamps_correctly_RFC3339_formatted (0.00s)
>     --- PASS: TestDecodeIntentionCreate/timestamps_incorrectly_formatted_(RFC822) (0.00s)
>     --- PASS: TestDecodeIntentionCreate/timestamps_incorrectly_formatted_(RFC850) (0.00s)
>     --- PASS: TestDecodeIntentionCreate/timestamps_empty_string (0.00s)
>     --- PASS: TestDecodeIntentionCreate/timestamps_null (0.00s)
> === RUN   TestDecodeOperatorAutopilotConfiguration
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_correctly_formatted
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_small,_correctly_formatted
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_incorrectly_formatted
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_empty_string
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_string_without_quotes
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_numeric
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_negative
> === RUN   TestDecodeOperatorAutopilotConfiguration/durations_numeric_and_negative
> --- PASS: TestDecodeOperatorAutopilotConfiguration (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_empty_string (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_numeric (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_negative (0.00s)
>     --- PASS: TestDecodeOperatorAutopilotConfiguration/durations_numeric_and_negative (0.00s)
> === RUN   TestDecodeSessionCreate
> === RUN   TestDecodeSessionCreate/durations_correctly_formatted
> === RUN   TestDecodeSessionCreate/durations_small,_correctly_formatted
> === RUN   TestDecodeSessionCreate/durations_incorrectly_formatted
> === RUN   TestDecodeSessionCreate/durations_empty_string
> === RUN   TestDecodeSessionCreate/durations_string_without_quotes
> === RUN   TestDecodeSessionCreate/durations_numeric
> === RUN   TestDecodeSessionCreate/duration_small,_numeric_(<_lockDelayMinThreshold)
> === RUN   TestDecodeSessionCreate/duration_string,_no_unit
> === RUN   TestDecodeSessionCreate/duration_small,_string,_already_duration
> === RUN   TestDecodeSessionCreate/duration_small,_numeric,_negative
> === RUN   TestDecodeSessionCreate/many_check_ids
> === RUN   TestDecodeSessionCreate/one_check_ids
> === RUN   TestDecodeSessionCreate/empty_check_id_slice
> === RUN   TestDecodeSessionCreate/null_check_ids
> === RUN   TestDecodeSessionCreate/empty_value_check_ids
> === RUN   TestDecodeSessionCreate/malformatted_check_ids_(string)
> --- PASS: TestDecodeSessionCreate (0.00s)
>     --- PASS: TestDecodeSessionCreate/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeSessionCreate/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeSessionCreate/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeSessionCreate/durations_empty_string (0.00s)
>     --- PASS: TestDecodeSessionCreate/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeSessionCreate/durations_numeric (0.00s)
>     --- PASS: TestDecodeSessionCreate/duration_small,_numeric_(<_lockDelayMinThreshold) (0.00s)
>     --- PASS: TestDecodeSessionCreate/duration_string,_no_unit (0.00s)
>     --- PASS: TestDecodeSessionCreate/duration_small,_string,_already_duration (0.00s)
>     --- PASS: TestDecodeSessionCreate/duration_small,_numeric,_negative (0.00s)
>     --- PASS: TestDecodeSessionCreate/many_check_ids (0.00s)
>     --- PASS: TestDecodeSessionCreate/one_check_ids (0.00s)
>     --- PASS: TestDecodeSessionCreate/empty_check_id_slice (0.00s)
>     --- PASS: TestDecodeSessionCreate/null_check_ids (0.00s)
>     --- PASS: TestDecodeSessionCreate/empty_value_check_ids (0.00s)
>     --- PASS: TestDecodeSessionCreate/malformatted_check_ids_(string) (0.00s)
> === RUN   TestDecodeTxnConvertOps
> === RUN   TestDecodeTxnConvertOps/durations_correctly_formatted
> === RUN   TestDecodeTxnConvertOps/durations_small,_correctly_formatted
> === RUN   TestDecodeTxnConvertOps/durations_incorrectly_formatted
> === RUN   TestDecodeTxnConvertOps/durations_empty_string
> === RUN   TestDecodeTxnConvertOps/durations_string_without_quotes
> === RUN   TestDecodeTxnConvertOps/durations_numeric
> === RUN   TestDecodeTxnConvertOps/durations_negative
> === RUN   TestDecodeTxnConvertOps/durations_numeric_and_negative
> --- PASS: TestDecodeTxnConvertOps (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_correctly_formatted (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_small,_correctly_formatted (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_incorrectly_formatted (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_empty_string (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_string_without_quotes (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_numeric (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_negative (0.00s)
>     --- PASS: TestDecodeTxnConvertOps/durations_numeric_and_negative (0.00s)
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/ui/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/ui/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/ui/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/ui/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/ui/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/ui/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/host
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/host
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/host
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/host
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/host
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/host
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/token/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/token/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/token/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/token/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/token/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/token/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/replication
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/replication
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/replication
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/replication
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/replication
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/replication
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/info/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/connect/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/update/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/update/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/update/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/update/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/update/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/update/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/role
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/role
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/role
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/role
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/role
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/role
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/state/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/state/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/state/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/state/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/state/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/state/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/event/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/event/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/event/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/event/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/event/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/event/list
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/config
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/config
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/config
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/config
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/config
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/config
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/kv/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/kv/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/kv/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/kv/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/kv/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/kv/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/clone/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/clone/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/clone/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/clone/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/clone/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/clone/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/destroy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/role/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/role/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/role/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/role/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/role/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/role/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/renew/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/renew/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/renew/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/renew/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/renew/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/renew/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/checks/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/checks/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/checks/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/checks/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/checks/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/checks/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/create
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/join/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/join/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/join/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/join/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/join/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/join/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/self
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/token
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/token
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/token
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/token
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/token
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/token
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/policies
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/policies
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/policies
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/policies
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/policies
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/policies
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/role/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/role/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/role/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/role/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/role/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/role/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/service/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/auth-method
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/auth-method
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/auth-method
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/auth-method
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/auth-method
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/auth-method
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/discovery-chain/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/discovery-chain/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/discovery-chain/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/discovery-chain/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/discovery-chain/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/discovery-chain/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/login
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/login
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/login
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/login
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/login
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/login
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/policy
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/policy
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/policy
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/policy
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/policy
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/policy
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/roles
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/roles
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/roles
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/roles
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/roles
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/roles
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/event/fire/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/event/fire/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/event/fire/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/event/fire/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/event/fire/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/event/fire/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/txn
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/txn
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/txn
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/txn
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/txn
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/txn
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/deregister
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/deregister
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/deregister
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/deregister
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/deregister
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/deregister
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/services
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/members
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/members
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/members
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/members
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/members
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/members
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/tokens
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/tokens
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/tokens
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/tokens
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/tokens
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/tokens
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/keyring
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/keyring
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/keyring
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/keyring
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/keyring
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/keyring
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/checks
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/checks
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/checks
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/checks
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/checks
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/checks
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/update
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/metrics
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/metrics
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/metrics
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/metrics
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/metrics
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/metrics
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/nodes
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/status/leader
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/status/leader
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/status/leader
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/status/leader
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/status/leader
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/status/leader
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/register
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/token/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/node/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/snapshot
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/snapshot
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/snapshot
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/snapshot
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/snapshot
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/snapshot
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/logout
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/logout
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/logout
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/logout
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/logout
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/logout
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/config/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/config/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/config/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/config/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/config/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/config/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/policy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/policy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/policy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/policy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/policy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/policy/
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/maintenance
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/maintenance
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/maintenance
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/maintenance
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/maintenance
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/maintenance
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/leave
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/leave
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/leave
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/leave
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/leave
> === RUN   TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/leave
> --- PASS: TestHTTPAPI_MethodNotAllowed_OSS (0.76s)
>     writer.go:29: 2020-02-23T02:46:21.968Z [WARN]  TestHTTPAPI_MethodNotAllowed_OSS: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:21.968Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:21.968Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:21.978Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5451bad8-faf6-bf66-4f98-db625d6f0794 Address:127.0.0.1:16828}]"
>     writer.go:29: 2020-02-23T02:46:21.978Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.raft: entering follower state: follower="Node at 127.0.0.1:16828 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:21.978Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.serf.wan: serf: EventMemberJoin: Node-5451bad8-faf6-bf66-4f98-db625d6f0794.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.serf.lan: serf: EventMemberJoin: Node-5451bad8-faf6-bf66-4f98-db625d6f0794 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: Adding LAN server: server="Node-5451bad8-faf6-bf66-4f98-db625d6f0794 (Addr: tcp/127.0.0.1:16828) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: Handled event for server in area: event=member-join server=Node-5451bad8-faf6-bf66-4f98-db625d6f0794.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Started DNS server: address=127.0.0.1:16823 network=udp
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Started DNS server: address=127.0.0.1:16823 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Started HTTP server: address=127.0.0.1:16824 network=tcp
>     writer.go:29: 2020-02-23T02:46:21.979Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: started state syncer
>     writer.go:29: 2020-02-23T02:46:22.037Z [WARN]  TestHTTPAPI_MethodNotAllowed_OSS.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:22.037Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.raft: entering candidate state: node="Node at 127.0.0.1:16828 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:22.041Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:22.041Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.server.raft: vote granted: from=5451bad8-faf6-bf66-4f98-db625d6f0794 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:22.041Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:22.041Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.raft: entering leader state: leader="Node at 127.0.0.1:16828 [Leader]"
>     writer.go:29: 2020-02-23T02:46:22.041Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:22.041Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: New leader elected: payload=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.043Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:22.044Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:22.044Z [WARN]  TestHTTPAPI_MethodNotAllowed_OSS.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:22.048Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:22.053Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:22.053Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.053Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.053Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.serf.lan: serf: EventMemberUpdate: Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.053Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.serf.wan: serf: EventMemberUpdate: Node-5451bad8-faf6-bf66-4f98-db625d6f0794.dc1
>     writer.go:29: 2020-02-23T02:46:22.053Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: Handled event for server in area: event=member-update server=Node-5451bad8-faf6-bf66-4f98-db625d6f0794.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:22.062Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:22.069Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:22.069Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.069Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.server: Skipping self join check for node since the cluster is too small: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.069Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: member joined, marking health alive: member=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.071Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Synced node info
>     writer.go:29: 2020-02-23T02:46:22.074Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.server: Skipping self join check for node since the cluster is too small: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.407Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.acl: dropping node from result due to ACLs: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.408Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/query from=127.0.0.1:51954 latency=322.063µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.409Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/query from=127.0.0.1:51956 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.409Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/query from=127.0.0.1:51956 latency=113.471µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.409Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/query from=127.0.0.1:51958 latency=78.868µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.410Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/query from=127.0.0.1:51960 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.410Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/query from=127.0.0.1:51960 latency=71.594µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.410Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/query from=127.0.0.1:51962 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.410Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/query from=127.0.0.1:51962 latency=30.077µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.410Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/query from=127.0.0.1:51962 latency=1.267µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.410Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/query/ from=127.0.0.1:51962 error="failed prepared query lookup: index error: UUID must be 36 characters"
>     writer.go:29: 2020-02-23T02:46:22.411Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/query/ from=127.0.0.1:51962 latency=120.366µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.411Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/query/ from=127.0.0.1:51964 error="Prepared Query lookup failed: failed prepared query lookup: index error: UUID must be 36 characters"
>     writer.go:29: 2020-02-23T02:46:22.411Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/query/ from=127.0.0.1:51964 latency=100.36µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.412Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/query/ from=127.0.0.1:51966 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.412Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/query/ from=127.0.0.1:51966 latency=105.293µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.412Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/query/ from=127.0.0.1:51968 error="Prepared Query lookup failed: failed prepared query lookup: index error: UUID must be 36 characters"
>     writer.go:29: 2020-02-23T02:46:22.413Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/query/ from=127.0.0.1:51968 latency=574.249µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.413Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/query/ from=127.0.0.1:51970 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.413Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/query/ from=127.0.0.1:51970 latency=30.265µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.413Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/query/ from=127.0.0.1:51970 latency=98.394µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.414Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/query/xxx/execute from=127.0.0.1:51972 latency=74.873µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.414Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/query/xxx/execute from=127.0.0.1:51974 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.414Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/query/xxx/execute from=127.0.0.1:51974 latency=66.677µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.414Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/query/xxx/execute from=127.0.0.1:51976 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.414Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/query/xxx/execute from=127.0.0.1:51976 latency=63.523µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.415Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/query/xxx/execute from=127.0.0.1:51978 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.415Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/query/xxx/execute from=127.0.0.1:51978 latency=65.89µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.419Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/query/xxx/execute from=127.0.0.1:51980 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.419Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/query/xxx/execute from=127.0.0.1:51980 latency=32.012µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.420Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/query/xxx/execute from=127.0.0.1:51980 latency=86.502µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.420Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/query/xxx/explain from=127.0.0.1:51982 latency=100.659µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.420Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/query/xxx/explain from=127.0.0.1:51984 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.420Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/query/xxx/explain from=127.0.0.1:51984 latency=94.261µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.421Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/query/xxx/explain from=127.0.0.1:51986 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.421Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/query/xxx/explain from=127.0.0.1:51986 latency=75.668µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.421Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/query/xxx/explain from=127.0.0.1:51988 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.421Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/query/xxx/explain from=127.0.0.1:51988 latency=71.829µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.422Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/query/xxx/explain from=127.0.0.1:51990 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.422Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/query/xxx/explain from=127.0.0.1:51990 latency=27.537µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.422Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/query/xxx/explain from=127.0.0.1:51990 latency=46.629µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.422Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.acl: dropping node from result due to ACLs: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.422Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/internal/ui/services from=127.0.0.1:51992 latency=232.254µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.423Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/internal/ui/services from=127.0.0.1:51994 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.423Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/internal/ui/services from=127.0.0.1:51994 latency=66.493µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.423Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/internal/ui/services from=127.0.0.1:51996 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.423Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/internal/ui/services from=127.0.0.1:51996 latency=104.585µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.424Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/internal/ui/services from=127.0.0.1:51998 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.424Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/internal/ui/services from=127.0.0.1:51998 latency=100.235µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.424Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/internal/ui/services from=127.0.0.1:52000 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.424Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/internal/ui/services from=127.0.0.1:52000 latency=29.733µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.424Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/internal/ui/services from=127.0.0.1:52000 latency=1.333µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.424Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/session/list from=127.0.0.1:52000 latency=199.676µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.425Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/session/list from=127.0.0.1:52002 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.425Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/session/list from=127.0.0.1:52002 latency=94.493µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.425Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/session/list from=127.0.0.1:52004 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.425Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/session/list from=127.0.0.1:52004 latency=86.326µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.426Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/session/list from=127.0.0.1:52006 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.426Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/session/list from=127.0.0.1:52006 latency=82.918µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.426Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/session/list from=127.0.0.1:52008 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.426Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/session/list from=127.0.0.1:52008 latency=28.782µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.426Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/session/list from=127.0.0.1:52008 latency=1.343µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.426Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/token/ from=127.0.0.1:52008 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.426Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/token/ from=127.0.0.1:52008 latency=74.3µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.427Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/token/ from=127.0.0.1:52010 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.427Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/token/ from=127.0.0.1:52010 latency=349.703µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.428Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/token/ from=127.0.0.1:52012 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.428Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/token/ from=127.0.0.1:52012 latency=81.592µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.428Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/token/ from=127.0.0.1:52014 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.428Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/token/ from=127.0.0.1:52014 latency=75.43µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.429Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/token/ from=127.0.0.1:52016 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.429Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/token/ from=127.0.0.1:52016 latency=29.744µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.429Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/token/ from=127.0.0.1:52016 latency=1.217µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.429Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/host from=127.0.0.1:52016 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.429Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/host from=127.0.0.1:52016 latency=79.688µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.429Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/host from=127.0.0.1:52018 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.429Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/host from=127.0.0.1:52018 latency=67.547µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.430Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/host from=127.0.0.1:52020 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.430Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/host from=127.0.0.1:52020 latency=66.111µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.430Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/host from=127.0.0.1:52022 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.430Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/host from=127.0.0.1:52022 latency=65.519µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.431Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/host from=127.0.0.1:52024 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.431Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/host from=127.0.0.1:52024 latency=44.85µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.431Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/host from=127.0.0.1:52024 latency=1.263µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.431Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/service/register from=127.0.0.1:52024 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.431Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/service/register from=127.0.0.1:52024 latency=76.997µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.432Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/service/register from=127.0.0.1:52026 latency=84.711µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.432Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/service/register from=127.0.0.1:52028 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.432Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/service/register from=127.0.0.1:52028 latency=81.834µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.432Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/service/register from=127.0.0.1:52030 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.432Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/service/register from=127.0.0.1:52030 latency=71.481µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.433Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/service/register from=127.0.0.1:52032 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.433Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/service/register from=127.0.0.1:52032 latency=32.744µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.433Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/service/register from=127.0.0.1:52032 latency=1.489µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.433Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/health/connect/ from=127.0.0.1:52032 latency=84.432µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.434Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/health/connect/ from=127.0.0.1:52034 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.434Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/health/connect/ from=127.0.0.1:52034 latency=76.486µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.434Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/health/connect/ from=127.0.0.1:52036 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.434Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/health/connect/ from=127.0.0.1:52036 latency=74.749µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.435Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/health/connect/ from=127.0.0.1:52038 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.435Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/health/connect/ from=127.0.0.1:52038 latency=90.758µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.435Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/health/connect/ from=127.0.0.1:52040 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.435Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/health/connect/ from=127.0.0.1:52040 latency=30.896µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.435Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/health/connect/ from=127.0.0.1:52040 latency=1.437µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.435Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/destroy/ from=127.0.0.1:52040 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.436Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/destroy/ from=127.0.0.1:52040 latency=79.279µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.436Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/destroy/ from=127.0.0.1:52042 latency=40.221µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.436Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/destroy/ from=127.0.0.1:52044 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.436Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/destroy/ from=127.0.0.1:52044 latency=67.409µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.437Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/destroy/ from=127.0.0.1:52046 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.437Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/destroy/ from=127.0.0.1:52046 latency=68.178µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.437Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/destroy/ from=127.0.0.1:52048 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.437Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/destroy/ from=127.0.0.1:52048 latency=30.466µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.437Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/destroy/ from=127.0.0.1:52048 latency=1.69µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.438Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/rules/translate/ from=127.0.0.1:52048 error="Bad request: Missing token ID"
>     writer.go:29: 2020-02-23T02:46:22.438Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/rules/translate/ from=127.0.0.1:52048 latency=81.323µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.438Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/rules/translate/ from=127.0.0.1:52050 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.438Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/rules/translate/ from=127.0.0.1:52050 latency=80.908µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.439Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/rules/translate/ from=127.0.0.1:52052 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.439Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/rules/translate/ from=127.0.0.1:52052 latency=101.62µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.439Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/rules/translate/ from=127.0.0.1:52054 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.439Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/rules/translate/ from=127.0.0.1:52054 latency=81.233µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.440Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/rules/translate/ from=127.0.0.1:52056 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.440Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/rules/translate/ from=127.0.0.1:52056 latency=32.11µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.440Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/rules/translate/ from=127.0.0.1:52056 latency=1.333µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.440Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/token/self from=127.0.0.1:52056 error="ACL not found"
>     writer.go:29: 2020-02-23T02:46:22.440Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/token/self from=127.0.0.1:52056 latency=112.727µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.440Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/token/self from=127.0.0.1:52058 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.441Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/token/self from=127.0.0.1:52058 latency=84.977µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.441Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/token/self from=127.0.0.1:52060 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.441Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/token/self from=127.0.0.1:52060 latency=134.996µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.441Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/token/self from=127.0.0.1:52062 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.441Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/token/self from=127.0.0.1:52062 latency=116.239µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.442Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/token/self from=127.0.0.1:52064 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.442Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/token/self from=127.0.0.1:52064 latency=28.792µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.442Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/token/self from=127.0.0.1:52064 latency=1.978µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.442Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/session/info/ from=127.0.0.1:52064 latency=42.92µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.443Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/session/info/ from=127.0.0.1:52066 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.443Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/session/info/ from=127.0.0.1:52066 latency=63.438µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.444Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/session/info/ from=127.0.0.1:52068 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.444Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/session/info/ from=127.0.0.1:52068 latency=67.056µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.444Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/session/info/ from=127.0.0.1:52070 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.444Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/session/info/ from=127.0.0.1:52070 latency=65.83µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.449Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/session/info/ from=127.0.0.1:52072 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.449Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/session/info/ from=127.0.0.1:52072 latency=30.209µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.450Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/session/info/ from=127.0.0.1:52072 latency=1.297µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.450Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/replication from=127.0.0.1:52072 latency=128.124µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.450Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/replication from=127.0.0.1:52074 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.450Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/replication from=127.0.0.1:52074 latency=75.121µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.451Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/replication from=127.0.0.1:52076 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.451Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/replication from=127.0.0.1:52076 latency=78.514µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.451Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/replication from=127.0.0.1:52078 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.451Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/replication from=127.0.0.1:52078 latency=69.787µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.452Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/replication from=127.0.0.1:52080 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.452Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/replication from=127.0.0.1:52080 latency=30.244µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.452Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/replication from=127.0.0.1:52080 latency=1.353µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.452Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/node-services/ from=127.0.0.1:52080 latency=44.48µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.452Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/node-services/ from=127.0.0.1:52082 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.453Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/node-services/ from=127.0.0.1:52082 latency=62.796µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.453Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/node-services/ from=127.0.0.1:52084 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.453Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/node-services/ from=127.0.0.1:52084 latency=67.517µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.453Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/node-services/ from=127.0.0.1:52086 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.453Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/node-services/ from=127.0.0.1:52086 latency=65.045µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.454Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/node-services/ from=127.0.0.1:52088 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.454Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/node-services/ from=127.0.0.1:52088 latency=30.945µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.454Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/node-services/ from=127.0.0.1:52088 latency=1.903µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.454Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/create from=127.0.0.1:52088 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.454Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/create from=127.0.0.1:52088 latency=65.284µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.459Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/create from=127.0.0.1:52090 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.459Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/create from=127.0.0.1:52090 latency=103.51µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.459Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/create from=127.0.0.1:52092 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.459Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/create from=127.0.0.1:52092 latency=69.172µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.460Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/create from=127.0.0.1:52094 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.460Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/create from=127.0.0.1:52094 latency=68.099µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.460Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/create from=127.0.0.1:52096 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.460Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/create from=127.0.0.1:52096 latency=31.45µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.460Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/create from=127.0.0.1:52096 latency=1.436µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.461Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/info/ from=127.0.0.1:52096 latency=41.975µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.461Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/info/ from=127.0.0.1:52098 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.461Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/info/ from=127.0.0.1:52098 latency=67.215µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.461Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/info/ from=127.0.0.1:52100 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.462Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/info/ from=127.0.0.1:52100 latency=69.576µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.462Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/info/ from=127.0.0.1:52102 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.462Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/info/ from=127.0.0.1:52102 latency=66.289µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.462Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/info/ from=127.0.0.1:52104 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.462Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/info/ from=127.0.0.1:52104 latency=30.337µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.463Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/info/ from=127.0.0.1:52104 latency=1.501µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.463Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/binding-rule/ from=127.0.0.1:52104 error="Bad request: Missing binding rule ID"
>     writer.go:29: 2020-02-23T02:46:22.463Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/binding-rule/ from=127.0.0.1:52104 latency=86.979µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.463Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/binding-rule/ from=127.0.0.1:52106 error="Bad request: BindingRule decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.463Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/binding-rule/ from=127.0.0.1:52106 latency=73.916µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.464Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/binding-rule/ from=127.0.0.1:52108 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.464Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/binding-rule/ from=127.0.0.1:52108 latency=66.853µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.464Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/binding-rule/ from=127.0.0.1:52110 error="Bad request: Missing binding rule ID"
>     writer.go:29: 2020-02-23T02:46:22.464Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/binding-rule/ from=127.0.0.1:52110 latency=65.11µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.464Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/binding-rule/ from=127.0.0.1:52112 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.464Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/binding-rule/ from=127.0.0.1:52112 latency=31.135µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.465Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/binding-rule/ from=127.0.0.1:52112 latency=1.678µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.465Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/check/pass/ from=127.0.0.1:52112 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.465Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/check/pass/ from=127.0.0.1:52112 latency=66.858µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.465Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/check/pass/ from=127.0.0.1:52114 error="Unknown check """
>     writer.go:29: 2020-02-23T02:46:22.465Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/check/pass/ from=127.0.0.1:52114 latency=82.072µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.466Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/check/pass/ from=127.0.0.1:52116 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.466Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/check/pass/ from=127.0.0.1:52116 latency=68.093µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.466Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/check/pass/ from=127.0.0.1:52118 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.466Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/check/pass/ from=127.0.0.1:52118 latency=67.499µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.467Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/check/pass/ from=127.0.0.1:52120 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.467Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/check/pass/ from=127.0.0.1:52120 latency=30.261µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.467Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/check/pass/ from=127.0.0.1:52120 latency=1.462µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.467Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/connect/ from=127.0.0.1:52120 latency=53.324µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.467Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/connect/ from=127.0.0.1:52122 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.467Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/connect/ from=127.0.0.1:52122 latency=74.338µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.468Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/connect/ from=127.0.0.1:52124 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.468Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/connect/ from=127.0.0.1:52124 latency=65.227µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.468Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/connect/ from=127.0.0.1:52126 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.468Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/connect/ from=127.0.0.1:52126 latency=66.566µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.469Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/connect/ from=127.0.0.1:52128 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.469Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/connect/ from=127.0.0.1:52128 latency=30.629µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.469Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/connect/ from=127.0.0.1:52128 latency=1.346µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.469Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/check/fail/ from=127.0.0.1:52128 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.469Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/check/fail/ from=127.0.0.1:52128 latency=81.842µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.469Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/check/fail/ from=127.0.0.1:52130 error="Unknown check """
>     writer.go:29: 2020-02-23T02:46:22.469Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/check/fail/ from=127.0.0.1:52130 latency=79.592µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.470Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/check/fail/ from=127.0.0.1:52132 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.470Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/check/fail/ from=127.0.0.1:52132 latency=67.012µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.470Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/check/fail/ from=127.0.0.1:52134 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.470Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/check/fail/ from=127.0.0.1:52134 latency=66.97µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.471Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/check/fail/ from=127.0.0.1:52136 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.471Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/check/fail/ from=127.0.0.1:52136 latency=32.028µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.471Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/check/fail/ from=127.0.0.1:52136 latency=1.553µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.471Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/check/update/ from=127.0.0.1:52136 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.471Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/check/update/ from=127.0.0.1:52136 latency=71.471µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.472Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/check/update/ from=127.0.0.1:52138 latency=48.495µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.472Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/check/update/ from=127.0.0.1:52140 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.472Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/check/update/ from=127.0.0.1:52140 latency=66.741µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.472Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/check/update/ from=127.0.0.1:52142 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.472Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/check/update/ from=127.0.0.1:52142 latency=83.564µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.473Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/check/update/ from=127.0.0.1:52144 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.474Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/check/update/ from=127.0.0.1:52144 latency=29.847µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.474Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/check/update/ from=127.0.0.1:52144 latency=1.414µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.474Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/connect/ca/roots from=127.0.0.1:52144 latency=286.371µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.475Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/connect/ca/roots from=127.0.0.1:52146 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.475Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/connect/ca/roots from=127.0.0.1:52146 latency=66.157µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.475Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/connect/ca/roots from=127.0.0.1:52148 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.475Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/connect/ca/roots from=127.0.0.1:52148 latency=62.657µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.475Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/connect/ca/roots from=127.0.0.1:52150 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.475Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/connect/ca/roots from=127.0.0.1:52150 latency=67.437µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.476Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/connect/ca/roots from=127.0.0.1:52152 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.476Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/connect/ca/roots from=127.0.0.1:52152 latency=34.71µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.476Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/connect/ca/roots from=127.0.0.1:52152 latency=1.543µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.476Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52152 error="URI must be either service or agent"
>     writer.go:29: 2020-02-23T02:46:22.476Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52152 latency=227.547µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.477Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52154 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.477Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52154 latency=68.779µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.477Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52156 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.477Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52156 latency=65.128µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.477Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52158 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.477Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52158 latency=66.182µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.478Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52160 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.478Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52160 latency=29.264µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.478Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/connect/ca/leaf/ from=127.0.0.1:52160 latency=1.312µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.478Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/datacenters from=127.0.0.1:52160 latency=115.603µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.479Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/datacenters from=127.0.0.1:52162 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.479Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/datacenters from=127.0.0.1:52162 latency=69.371µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.479Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/datacenters from=127.0.0.1:52164 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.479Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/datacenters from=127.0.0.1:52164 latency=76.56µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.480Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/datacenters from=127.0.0.1:52166 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.480Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/datacenters from=127.0.0.1:52166 latency=65.952µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.480Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/datacenters from=127.0.0.1:52168 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.480Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/datacenters from=127.0.0.1:52168 latency=28.96µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.480Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/datacenters from=127.0.0.1:52168 latency=1.277µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.480Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/list from=127.0.0.1:52168 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.480Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/list from=127.0.0.1:52168 latency=97.753µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.481Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/list from=127.0.0.1:52170 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.481Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/list from=127.0.0.1:52170 latency=66.9µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.487Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/list from=127.0.0.1:52172 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.487Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/list from=127.0.0.1:52172 latency=74.687µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/list (0.01s)
>     writer.go:29: 2020-02-23T02:46:22.491Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/list from=127.0.0.1:52174 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.491Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/list from=127.0.0.1:52174 latency=106.552µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.491Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/list from=127.0.0.1:52176 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.491Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/list from=127.0.0.1:52176 latency=29.458µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.492Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/list from=127.0.0.1:52176 latency=1.942µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.492Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/role from=127.0.0.1:52176 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.492Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/role from=127.0.0.1:52176 latency=123.606µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.492Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/role from=127.0.0.1:52178 error="Bad request: Role decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.492Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/role from=127.0.0.1:52178 latency=132.727µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.493Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/role from=127.0.0.1:52180 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.493Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/role from=127.0.0.1:52180 latency=70.121µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.493Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/role from=127.0.0.1:52182 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.493Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/role from=127.0.0.1:52182 latency=67.389µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.493Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/role from=127.0.0.1:52184 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.494Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/role from=127.0.0.1:52184 latency=28.801µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.494Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/role from=127.0.0.1:52184 latency=1.548µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.494Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/auth-method/ from=127.0.0.1:52184 error="Bad request: Missing auth method name"
>     writer.go:29: 2020-02-23T02:46:22.494Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/auth-method/ from=127.0.0.1:52184 latency=62.951µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.494Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/auth-method/ from=127.0.0.1:52186 error="Bad request: AuthMethod decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.494Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/auth-method/ from=127.0.0.1:52186 latency=71.193µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.495Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/auth-method/ from=127.0.0.1:52188 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.495Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/auth-method/ from=127.0.0.1:52188 latency=74.386µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.495Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/auth-method/ from=127.0.0.1:52190 error="Bad request: Missing auth method name"
>     writer.go:29: 2020-02-23T02:46:22.495Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/auth-method/ from=127.0.0.1:52190 latency=66.351µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.495Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/auth-method/ from=127.0.0.1:52192 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.496Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/auth-method/ from=127.0.0.1:52192 latency=29.422µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.496Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/auth-method/ from=127.0.0.1:52192 latency=1.43µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.496Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/health/state/ from=127.0.0.1:52192 latency=43.556µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.496Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/health/state/ from=127.0.0.1:52194 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.496Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/health/state/ from=127.0.0.1:52194 latency=65.244µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.497Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/health/state/ from=127.0.0.1:52196 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.497Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/health/state/ from=127.0.0.1:52196 latency=65.334µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.497Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/health/state/ from=127.0.0.1:52198 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.497Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/health/state/ from=127.0.0.1:52198 latency=62.134µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.497Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/health/state/ from=127.0.0.1:52200 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.497Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/health/state/ from=127.0.0.1:52200 latency=31.711µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.498Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/health/state/ from=127.0.0.1:52200 latency=1.293µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.498Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/service/maintenance/ from=127.0.0.1:52200 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.498Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/service/maintenance/ from=127.0.0.1:52200 latency=58.139µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.498Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/service/maintenance/ from=127.0.0.1:52202 latency=40.518µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.498Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/service/maintenance/ from=127.0.0.1:52204 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.498Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/service/maintenance/ from=127.0.0.1:52204 latency=65.504µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.499Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/service/maintenance/ from=127.0.0.1:52206 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.503Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/service/maintenance/ from=127.0.0.1:52206 latency=4.127954ms
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.503Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/service/maintenance/ from=127.0.0.1:52208 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.503Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/service/maintenance/ from=127.0.0.1:52208 latency=32.316µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.503Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/service/maintenance/ from=127.0.0.1:52208 latency=1.192µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.504Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/operator/raft/peer from=127.0.0.1:52208 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.504Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/operator/raft/peer from=127.0.0.1:52208 latency=84.84µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.504Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/operator/raft/peer from=127.0.0.1:52210 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.504Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/operator/raft/peer from=127.0.0.1:52210 latency=86.12µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.504Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/operator/raft/peer from=127.0.0.1:52212 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.504Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/operator/raft/peer from=127.0.0.1:52212 latency=70.497µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.505Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/operator/raft/peer from=127.0.0.1:52214 error="Bad request: Must specify either ?id with the server's ID or ?address with IP:port of peer to remove"
>     writer.go:29: 2020-02-23T02:46:22.505Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/operator/raft/peer from=127.0.0.1:52214 latency=70.692µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.505Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/operator/raft/peer from=127.0.0.1:52216 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.505Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/operator/raft/peer from=127.0.0.1:52216 latency=27.943µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.505Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/operator/raft/peer from=127.0.0.1:52216 latency=1.197µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.505Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/operator/autopilot/configuration from=127.0.0.1:52216 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.506Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/operator/autopilot/configuration from=127.0.0.1:52216 latency=107.833µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.506Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/operator/autopilot/configuration from=127.0.0.1:52218 error="Bad request: Error parsing autopilot config: EOF"
>     writer.go:29: 2020-02-23T02:46:22.506Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/operator/autopilot/configuration from=127.0.0.1:52218 latency=66.77µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.506Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/operator/autopilot/configuration from=127.0.0.1:52220 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.506Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/operator/autopilot/configuration from=127.0.0.1:52220 latency=64.821µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.507Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/operator/autopilot/configuration from=127.0.0.1:52222 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.507Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/operator/autopilot/configuration from=127.0.0.1:52222 latency=192.248µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.507Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/operator/autopilot/configuration from=127.0.0.1:52224 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.507Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/operator/autopilot/configuration from=127.0.0.1:52224 latency=30.782µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.507Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/operator/autopilot/configuration from=127.0.0.1:52224 latency=1.306µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.508Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/service/ from=127.0.0.1:52224 latency=87.457µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.508Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/service/ from=127.0.0.1:52226 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.508Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/service/ from=127.0.0.1:52226 latency=90.173µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.508Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/service/ from=127.0.0.1:52228 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.508Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/service/ from=127.0.0.1:52228 latency=66.547µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.509Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/service/ from=127.0.0.1:52230 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.509Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/service/ from=127.0.0.1:52230 latency=71.122µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.509Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/service/ from=127.0.0.1:52232 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.509Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/service/ from=127.0.0.1:52232 latency=28.936µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.509Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/service/ from=127.0.0.1:52232 latency=1.43µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.510Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/event/list from=127.0.0.1:52232 latency=138.876µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.510Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/event/list from=127.0.0.1:52234 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.510Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/event/list from=127.0.0.1:52234 latency=75.72µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.510Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/event/list from=127.0.0.1:52236 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.510Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/event/list from=127.0.0.1:52236 latency=67.18µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.511Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/event/list from=127.0.0.1:52238 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.511Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/event/list from=127.0.0.1:52238 latency=107.032µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.511Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/event/list from=127.0.0.1:52240 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.511Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/event/list from=127.0.0.1:52240 latency=51.011µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.512Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/event/list from=127.0.0.1:52240 latency=1.356µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.512Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/service/ from=127.0.0.1:52240 latency=113.617µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.513Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/service/ from=127.0.0.1:52242 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.513Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/service/ from=127.0.0.1:52242 latency=75.775µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.513Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/service/ from=127.0.0.1:52244 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.513Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/service/ from=127.0.0.1:52244 latency=66.981µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.513Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/service/ from=127.0.0.1:52246 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.513Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/service/ from=127.0.0.1:52246 latency=68.719µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.514Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/service/ from=127.0.0.1:52248 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.514Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/service/ from=127.0.0.1:52248 latency=29.482µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.514Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/service/ from=127.0.0.1:52248 latency=1.267µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.514Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/config from=127.0.0.1:52248 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.514Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/config from=127.0.0.1:52248 latency=60.097µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.514Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/config from=127.0.0.1:52250 error="Bad request: Request decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.515Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/config from=127.0.0.1:52250 latency=68.092µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.515Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/config from=127.0.0.1:52252 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.515Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/config from=127.0.0.1:52252 latency=64.233µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.515Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/config from=127.0.0.1:52254 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.515Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/config from=127.0.0.1:52254 latency=65.98µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.516Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/config from=127.0.0.1:52256 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.516Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/config from=127.0.0.1:52256 latency=29.196µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.516Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/config from=127.0.0.1:52256 latency=1.387µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.516Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/kv/ from=127.0.0.1:52256 latency=41.703µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.516Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/kv/ from=127.0.0.1:52258 latency=42.027µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.517Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/kv/ from=127.0.0.1:52260 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.517Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/kv/ from=127.0.0.1:52260 latency=65.109µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.517Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/kv/ from=127.0.0.1:52262 latency=41.903µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.517Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/kv/ from=127.0.0.1:52264 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.518Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/kv/ from=127.0.0.1:52264 latency=51.698µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.518Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/kv/ from=127.0.0.1:52264 latency=1.425µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.518Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/operator/raft/configuration from=127.0.0.1:52264 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.518Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/operator/raft/configuration from=127.0.0.1:52264 latency=139.214µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.518Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/operator/raft/configuration from=127.0.0.1:52266 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.518Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/operator/raft/configuration from=127.0.0.1:52266 latency=64µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.519Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/operator/raft/configuration from=127.0.0.1:52268 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.519Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/operator/raft/configuration from=127.0.0.1:52268 latency=64.302µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.519Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/operator/raft/configuration from=127.0.0.1:52270 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.520Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/operator/raft/configuration from=127.0.0.1:52270 latency=83.556µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.520Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/operator/raft/configuration from=127.0.0.1:52272 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.520Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/operator/raft/configuration from=127.0.0.1:52272 latency=31.495µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.520Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/operator/raft/configuration from=127.0.0.1:52272 latency=1.412µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.520Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/clone/ from=127.0.0.1:52272 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.521Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/clone/ from=127.0.0.1:52272 latency=81.028µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.521Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/clone/ from=127.0.0.1:52274 latency=56.781µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.521Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/clone/ from=127.0.0.1:52276 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.522Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/clone/ from=127.0.0.1:52276 latency=78.979µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.522Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/clone/ from=127.0.0.1:52278 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.522Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/clone/ from=127.0.0.1:52278 latency=78.476µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.523Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/clone/ from=127.0.0.1:52280 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.523Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/clone/ from=127.0.0.1:52280 latency=50.889µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.523Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/clone/ from=127.0.0.1:52280 latency=1.527µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.523Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/session/destroy/ from=127.0.0.1:52280 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.523Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/session/destroy/ from=127.0.0.1:52280 latency=83.156µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.524Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/session/destroy/ from=127.0.0.1:52282 latency=41.04µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.524Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/session/destroy/ from=127.0.0.1:52284 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.524Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/session/destroy/ from=127.0.0.1:52284 latency=65.903µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.524Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/session/destroy/ from=127.0.0.1:52286 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.525Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/session/destroy/ from=127.0.0.1:52286 latency=67.981µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.525Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/session/destroy/ from=127.0.0.1:52288 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.525Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/session/destroy/ from=127.0.0.1:52288 latency=29.383µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.525Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/session/destroy/ from=127.0.0.1:52288 latency=1.268µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.525Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/role/name/ from=127.0.0.1:52288 error="Bad request: Missing role Name"
>     writer.go:29: 2020-02-23T02:46:22.525Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/role/name/ from=127.0.0.1:52288 latency=100.021µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.526Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/role/name/ from=127.0.0.1:52290 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.526Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/role/name/ from=127.0.0.1:52290 latency=98.164µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.526Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/role/name/ from=127.0.0.1:52292 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.526Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/role/name/ from=127.0.0.1:52292 latency=66.524µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.527Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/role/name/ from=127.0.0.1:52294 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.527Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/role/name/ from=127.0.0.1:52294 latency=65.484µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.527Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/role/name/ from=127.0.0.1:52296 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.527Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/role/name/ from=127.0.0.1:52296 latency=29.603µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.527Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/role/name/ from=127.0.0.1:52296 latency=1.266µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.528Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/binding-rules from=127.0.0.1:52296 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.528Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/binding-rules from=127.0.0.1:52296 latency=102.517µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.528Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/binding-rules from=127.0.0.1:52298 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.528Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/binding-rules from=127.0.0.1:52298 latency=68.42µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.528Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/binding-rules from=127.0.0.1:52300 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.529Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/binding-rules from=127.0.0.1:52300 latency=67.522µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.529Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/binding-rules from=127.0.0.1:52302 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.529Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/binding-rules from=127.0.0.1:52302 latency=95.775µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.529Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/binding-rules from=127.0.0.1:52304 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.529Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/binding-rules from=127.0.0.1:52304 latency=29.045µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.530Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/binding-rules from=127.0.0.1:52304 latency=1.291µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.530Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/session/renew/ from=127.0.0.1:52304 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.530Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/session/renew/ from=127.0.0.1:52304 latency=62.126µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.530Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/session/renew/ from=127.0.0.1:52306 latency=42.872µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.531Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/session/renew/ from=127.0.0.1:52308 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.531Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/session/renew/ from=127.0.0.1:52308 latency=65.937µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.531Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/session/renew/ from=127.0.0.1:52310 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.531Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/session/renew/ from=127.0.0.1:52310 latency=64.151µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.532Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/session/renew/ from=127.0.0.1:52312 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.532Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/session/renew/ from=127.0.0.1:52312 latency=31.502µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.532Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/session/renew/ from=127.0.0.1:52312 latency=1.345µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.532Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/health/checks/ from=127.0.0.1:52312 latency=70.549µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.532Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/health/checks/ from=127.0.0.1:52314 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.532Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/health/checks/ from=127.0.0.1:52314 latency=111.949µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.533Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/health/checks/ from=127.0.0.1:52316 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.533Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/health/checks/ from=127.0.0.1:52316 latency=68.804µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.533Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/health/checks/ from=127.0.0.1:52318 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.533Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/health/checks/ from=127.0.0.1:52318 latency=107.798µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.534Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/health/checks/ from=127.0.0.1:52320 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.534Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/health/checks/ from=127.0.0.1:52320 latency=27.545µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.534Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/health/checks/ from=127.0.0.1:52320 latency=1.249µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.534Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/connect/intentions/match from=127.0.0.1:52320 error="required query parameter 'by' not set"
>     writer.go:29: 2020-02-23T02:46:22.534Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/connect/intentions/match from=127.0.0.1:52320 latency=65.791µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.534Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/connect/intentions/match from=127.0.0.1:52322 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.535Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/connect/intentions/match from=127.0.0.1:52322 latency=62.842µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.535Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/connect/intentions/match from=127.0.0.1:52324 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.535Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/connect/intentions/match from=127.0.0.1:52324 latency=63.007µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.535Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/connect/intentions/match from=127.0.0.1:52326 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.535Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/connect/intentions/match from=127.0.0.1:52326 latency=62.832µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.536Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/connect/intentions/match from=127.0.0.1:52328 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.536Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/connect/intentions/match from=127.0.0.1:52328 latency=28.523µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.536Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/connect/intentions/match from=127.0.0.1:52328 latency=1.384µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.536Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/session/create from=127.0.0.1:52328 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.536Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/session/create from=127.0.0.1:52328 latency=62.245µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.536Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/session/create from=127.0.0.1:52330 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.536Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/session/create from=127.0.0.1:52330 latency=100.452µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.537Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/session/create from=127.0.0.1:52332 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.537Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/session/create from=127.0.0.1:52332 latency=65.989µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.537Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/session/create from=127.0.0.1:52334 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.537Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/session/create from=127.0.0.1:52334 latency=63.367µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.538Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/session/create from=127.0.0.1:52336 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.538Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/session/create from=127.0.0.1:52336 latency=27.364µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.538Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/session/create from=127.0.0.1:52336 latency=1.118µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.538Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/health/node/ from=127.0.0.1:52336 latency=41.911µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.538Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/health/node/ from=127.0.0.1:52338 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.538Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/health/node/ from=127.0.0.1:52338 latency=63.671µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.539Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/health/node/ from=127.0.0.1:52340 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.539Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/health/node/ from=127.0.0.1:52340 latency=64.504µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.539Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/health/node/ from=127.0.0.1:52342 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.539Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/health/node/ from=127.0.0.1:52342 latency=67.514µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.540Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/health/node/ from=127.0.0.1:52344 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.540Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/health/node/ from=127.0.0.1:52344 latency=30.227µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.540Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/health/node/ from=127.0.0.1:52344 latency=1.38µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.540Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/join/ from=127.0.0.1:52344 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.540Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/join/ from=127.0.0.1:52344 latency=63.931µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.540Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/join/ from=127.0.0.1:52346 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.540Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/join/ from=127.0.0.1:52346 latency=82.85µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.541Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/join/ from=127.0.0.1:52348 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.541Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/join/ from=127.0.0.1:52348 latency=66.325µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.541Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/join/ from=127.0.0.1:52350 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.541Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/join/ from=127.0.0.1:52350 latency=63.953µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.542Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/join/ from=127.0.0.1:52352 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.542Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/join/ from=127.0.0.1:52352 latency=30.131µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.542Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/join/ from=127.0.0.1:52352 latency=1.383µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.542Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.acl: dropping service from result due to ACLs: service=consul
>     writer.go:29: 2020-02-23T02:46:22.542Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/services from=127.0.0.1:52352 latency=163.538µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.543Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/services from=127.0.0.1:52354 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.543Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/services from=127.0.0.1:52354 latency=66.355µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.547Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/services from=127.0.0.1:52356 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.547Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/services from=127.0.0.1:52356 latency=66.459µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.548Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/services from=127.0.0.1:52358 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.548Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/services from=127.0.0.1:52358 latency=95.524µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.548Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/services from=127.0.0.1:52360 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.548Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/services from=127.0.0.1:52360 latency=30.125µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.548Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/services from=127.0.0.1:52360 latency=1.327µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.548Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/auth-methods from=127.0.0.1:52360 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.549Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/auth-methods from=127.0.0.1:52360 latency=121.946µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.549Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/auth-methods from=127.0.0.1:52362 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.549Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/auth-methods from=127.0.0.1:52362 latency=115.371µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.549Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/auth-methods from=127.0.0.1:52364 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.550Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/auth-methods from=127.0.0.1:52364 latency=98.194µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.550Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/auth-methods from=127.0.0.1:52366 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.550Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/auth-methods from=127.0.0.1:52366 latency=94.885µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.550Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/auth-methods from=127.0.0.1:52368 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.550Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/auth-methods from=127.0.0.1:52368 latency=31.49µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.551Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/auth-methods from=127.0.0.1:52368 latency=1.323µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.551Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/self from=127.0.0.1:52368 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.551Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:52368 latency=99.869µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.551Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/self from=127.0.0.1:52370 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.551Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/self from=127.0.0.1:52370 latency=79.652µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.552Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/self from=127.0.0.1:52372 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.552Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/self from=127.0.0.1:52372 latency=65.603µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.552Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/self from=127.0.0.1:52374 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.552Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/self from=127.0.0.1:52374 latency=119.583µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.553Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/self from=127.0.0.1:52376 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.553Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/self from=127.0.0.1:52376 latency=35.634µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.553Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/self from=127.0.0.1:52376 latency=1.245µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.553Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/coordinate/nodes from=127.0.0.1:52376 latency=163.031µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.554Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/coordinate/nodes from=127.0.0.1:52378 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.554Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/coordinate/nodes from=127.0.0.1:52378 latency=79.122µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.554Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/coordinate/nodes from=127.0.0.1:52380 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.554Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/coordinate/nodes from=127.0.0.1:52380 latency=93.11µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.554Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/coordinate/nodes from=127.0.0.1:52382 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.555Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/coordinate/nodes from=127.0.0.1:52382 latency=86.651µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.555Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/coordinate/nodes from=127.0.0.1:52384 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.555Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/coordinate/nodes from=127.0.0.1:52384 latency=29.771µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.555Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/coordinate/nodes from=127.0.0.1:52384 latency=1.186µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.555Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/token from=127.0.0.1:52384 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.555Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/token from=127.0.0.1:52384 latency=80.9µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.556Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/token from=127.0.0.1:52386 error="Bad request: Token decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.556Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/token from=127.0.0.1:52386 latency=69.58µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.556Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/token from=127.0.0.1:52388 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.556Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/token from=127.0.0.1:52388 latency=65.532µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.556Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/token from=127.0.0.1:52390 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.556Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/token from=127.0.0.1:52390 latency=62.946µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.557Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/token from=127.0.0.1:52392 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.557Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/token from=127.0.0.1:52392 latency=26.664µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.557Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/token from=127.0.0.1:52392 latency=1.158µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.557Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/check/deregister/ from=127.0.0.1:52392 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.557Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/check/deregister/ from=127.0.0.1:52392 latency=59.477µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.557Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/check/deregister/ from=127.0.0.1:52394 error="Unknown check """
>     writer.go:29: 2020-02-23T02:46:22.557Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/check/deregister/ from=127.0.0.1:52394 latency=79.281µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.558Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/check/deregister/ from=127.0.0.1:52396 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.558Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/check/deregister/ from=127.0.0.1:52396 latency=66.085µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.558Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/check/deregister/ from=127.0.0.1:52398 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.558Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/check/deregister/ from=127.0.0.1:52398 latency=104.656µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.559Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/check/deregister/ from=127.0.0.1:52400 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.559Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/check/deregister/ from=127.0.0.1:52400 latency=33.282µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.559Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/check/deregister/ from=127.0.0.1:52400 latency=1.495µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.559Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/connect/intentions from=127.0.0.1:52400 latency=234.279µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.560Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/connect/intentions from=127.0.0.1:52402 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.560Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/connect/intentions from=127.0.0.1:52402 latency=67.756µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.560Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/connect/intentions from=127.0.0.1:52404 error="Failed to decode request body: EOF"
>     writer.go:29: 2020-02-23T02:46:22.560Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/connect/intentions from=127.0.0.1:52404 latency=69.863µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.560Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/connect/intentions from=127.0.0.1:52406 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.561Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/connect/intentions from=127.0.0.1:52406 latency=63.653µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.561Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/connect/intentions from=127.0.0.1:52408 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.561Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/connect/intentions from=127.0.0.1:52408 latency=28.039µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.561Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/connect/intentions from=127.0.0.1:52408 latency=1.233µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.561Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/update from=127.0.0.1:52408 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.561Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/update from=127.0.0.1:52408 latency=59.655µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.562Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/update from=127.0.0.1:52410 latency=40.161µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.562Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/update from=127.0.0.1:52412 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.562Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/update from=127.0.0.1:52412 latency=64.581µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.562Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/update from=127.0.0.1:52414 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.563Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/update from=127.0.0.1:52414 latency=64.399µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.563Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/update from=127.0.0.1:52416 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.563Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/update from=127.0.0.1:52416 latency=33.137µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.563Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/update from=127.0.0.1:52416 latency=1.477µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.563Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/policies from=127.0.0.1:52416 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.563Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/policies from=127.0.0.1:52416 latency=96.69µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.564Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/policies from=127.0.0.1:52418 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.564Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/policies from=127.0.0.1:52418 latency=65.034µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.564Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/policies from=127.0.0.1:52420 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.564Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/policies from=127.0.0.1:52420 latency=65.05µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.565Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/policies from=127.0.0.1:52422 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.565Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/policies from=127.0.0.1:52422 latency=66.082µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.565Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/policies from=127.0.0.1:52424 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.565Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/policies from=127.0.0.1:52424 latency=29.596µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.566Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/policies from=127.0.0.1:52424 latency=1.633µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.566Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/role/ from=127.0.0.1:52424 error="Bad request: Missing role ID"
>     writer.go:29: 2020-02-23T02:46:22.566Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/role/ from=127.0.0.1:52424 latency=91.698µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.566Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/role/ from=127.0.0.1:52426 error="Bad request: Role decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.566Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/role/ from=127.0.0.1:52426 latency=96µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.567Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/role/ from=127.0.0.1:52428 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.567Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/role/ from=127.0.0.1:52428 latency=78.779µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.567Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/role/ from=127.0.0.1:52430 error="Bad request: Missing role ID"
>     writer.go:29: 2020-02-23T02:46:22.567Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/role/ from=127.0.0.1:52430 latency=67.423µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.568Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/role/ from=127.0.0.1:52432 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.568Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/role/ from=127.0.0.1:52432 latency=30.777µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.568Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/role/ from=127.0.0.1:52432 latency=1.698µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.568Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/health/service/ from=127.0.0.1:52432 latency=67.966µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.568Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/health/service/ from=127.0.0.1:52434 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.568Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/health/service/ from=127.0.0.1:52434 latency=88.876µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.569Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/health/service/ from=127.0.0.1:52436 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.569Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/health/service/ from=127.0.0.1:52436 latency=75.358µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.569Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/health/service/ from=127.0.0.1:52438 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.569Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/health/service/ from=127.0.0.1:52438 latency=68.44µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.570Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/health/service/ from=127.0.0.1:52440 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.570Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/health/service/ from=127.0.0.1:52440 latency=32.549µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.570Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/health/service/ from=127.0.0.1:52440 latency=1.457µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.578Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/auth-method from=127.0.0.1:52440 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.578Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/auth-method from=127.0.0.1:52440 latency=142.447µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.578Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/auth-method from=127.0.0.1:52442 error="Bad request: AuthMethod decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.578Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/auth-method from=127.0.0.1:52442 latency=124.984µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.579Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/auth-method from=127.0.0.1:52444 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.579Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/auth-method from=127.0.0.1:52444 latency=79.84µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.579Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/auth-method from=127.0.0.1:52446 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.580Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/auth-method from=127.0.0.1:52446 latency=75.53µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.580Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/auth-method from=127.0.0.1:52448 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.580Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/auth-method from=127.0.0.1:52448 latency=37.137µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.580Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/auth-method from=127.0.0.1:52448 latency=2.098µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.583Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/coordinate/node/ from=127.0.0.1:52448 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.584Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/coordinate/node/ from=127.0.0.1:52448 latency=241.099µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.584Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/coordinate/node/ from=127.0.0.1:52450 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.585Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/coordinate/node/ from=127.0.0.1:52450 latency=85.7µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.586Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/coordinate/node/ from=127.0.0.1:52452 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.586Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/coordinate/node/ from=127.0.0.1:52452 latency=79.421µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.587Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/coordinate/node/ from=127.0.0.1:52454 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.587Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/coordinate/node/ from=127.0.0.1:52454 latency=161.198µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.588Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/coordinate/node/ from=127.0.0.1:52456 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.588Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/coordinate/node/ from=127.0.0.1:52456 latency=46.757µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.588Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/coordinate/node/ from=127.0.0.1:52456 latency=1.845µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.589Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/operator/autopilot/health from=127.0.0.1:52456 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.589Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/operator/autopilot/health from=127.0.0.1:52456 latency=222.036µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.590Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/operator/autopilot/health from=127.0.0.1:52458 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.590Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/operator/autopilot/health from=127.0.0.1:52458 latency=274.47µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.591Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/operator/autopilot/health from=127.0.0.1:52460 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.591Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/operator/autopilot/health from=127.0.0.1:52460 latency=214.985µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.603Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/operator/autopilot/health from=127.0.0.1:52462 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.603Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/operator/autopilot/health from=127.0.0.1:52462 latency=94.443µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.609Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/operator/autopilot/health from=127.0.0.1:52464 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.609Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/operator/autopilot/health from=127.0.0.1:52464 latency=37.068µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.610Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/operator/autopilot/health from=127.0.0.1:52464 latency=1.855µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.611Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/discovery-chain/ from=127.0.0.1:52464 error="Bad request: Missing chain name"
>     writer.go:29: 2020-02-23T02:46:22.611Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/discovery-chain/ from=127.0.0.1:52464 latency=91.513µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.611Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/discovery-chain/ from=127.0.0.1:52466 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.611Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/discovery-chain/ from=127.0.0.1:52466 latency=71.037µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.612Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/discovery-chain/ from=127.0.0.1:52468 error="Bad request: Missing chain name"
>     writer.go:29: 2020-02-23T02:46:22.612Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/discovery-chain/ from=127.0.0.1:52468 latency=68.683µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.613Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/discovery-chain/ from=127.0.0.1:52470 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.613Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/discovery-chain/ from=127.0.0.1:52470 latency=84.729µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.614Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/discovery-chain/ from=127.0.0.1:52472 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.614Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/discovery-chain/ from=127.0.0.1:52472 latency=28.659µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.614Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/discovery-chain/ from=127.0.0.1:52472 latency=1.246µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.614Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/login from=127.0.0.1:52472 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.614Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/login from=127.0.0.1:52472 latency=96.012µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.615Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/login from=127.0.0.1:52474 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.615Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/login from=127.0.0.1:52474 latency=61.857µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.615Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/login from=127.0.0.1:52476 error="Bad request: Failed to decode request body:: EOF"
>     writer.go:29: 2020-02-23T02:46:22.615Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/login from=127.0.0.1:52476 latency=110.776µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.615Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/login from=127.0.0.1:52478 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.615Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/login from=127.0.0.1:52478 latency=92.367µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.616Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/login from=127.0.0.1:52480 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.616Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/login from=127.0.0.1:52480 latency=25.709µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.616Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/login from=127.0.0.1:52480 latency=1.072µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.616Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/force-leave/ from=127.0.0.1:52480 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.616Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/force-leave/ from=127.0.0.1:52480 latency=73.159µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.616Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/force-leave/ from=127.0.0.1:52482 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.616Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/force-leave/ from=127.0.0.1:52482 latency=94.88µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.617Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/force-leave/ from=127.0.0.1:52484 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.617Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/force-leave/ from=127.0.0.1:52484 latency=68.465µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.617Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/force-leave/ from=127.0.0.1:52486 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.617Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/force-leave/ from=127.0.0.1:52486 latency=63.765µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.618Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/force-leave/ from=127.0.0.1:52488 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.618Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/force-leave/ from=127.0.0.1:52488 latency=28.312µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.618Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/force-leave/ from=127.0.0.1:52488 latency=1.199µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.618Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/connect/authorize from=127.0.0.1:52488 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.618Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/connect/authorize from=127.0.0.1:52488 latency=60.009µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.618Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/connect/authorize from=127.0.0.1:52490 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.618Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/connect/authorize from=127.0.0.1:52490 latency=64.096µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.619Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/connect/authorize from=127.0.0.1:52492 error="Bad request: Request decode failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.619Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/connect/authorize from=127.0.0.1:52492 latency=69.246µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.619Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/connect/authorize from=127.0.0.1:52494 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.619Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/connect/authorize from=127.0.0.1:52494 latency=66.298µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.620Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/connect/authorize from=127.0.0.1:52496 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.620Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/connect/authorize from=127.0.0.1:52496 latency=31.033µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.620Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/connect/authorize from=127.0.0.1:52496 latency=1.411µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.620Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/internal/acl/authorize from=127.0.0.1:52496 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.620Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/internal/acl/authorize from=127.0.0.1:52496 latency=71.285µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.620Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/internal/acl/authorize from=127.0.0.1:52498 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.620Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/internal/acl/authorize from=127.0.0.1:52498 latency=63.395µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.621Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/internal/acl/authorize from=127.0.0.1:52500 error="Bad request: Failed to decode request body: EOF"
>     writer.go:29: 2020-02-23T02:46:22.621Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/internal/acl/authorize from=127.0.0.1:52500 latency=70.496µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.621Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/internal/acl/authorize from=127.0.0.1:52502 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.621Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/internal/acl/authorize from=127.0.0.1:52502 latency=66.415µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.622Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/internal/acl/authorize from=127.0.0.1:52504 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.622Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/internal/acl/authorize from=127.0.0.1:52504 latency=31.95µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.622Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/internal/acl/authorize from=127.0.0.1:52504 latency=1.355µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.622Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/session/node/ from=127.0.0.1:52504 latency=41.411µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.622Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/session/node/ from=127.0.0.1:52506 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.622Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/session/node/ from=127.0.0.1:52506 latency=66.521µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.623Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/session/node/ from=127.0.0.1:52508 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.623Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/session/node/ from=127.0.0.1:52508 latency=64.641µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.623Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/session/node/ from=127.0.0.1:52510 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.623Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/session/node/ from=127.0.0.1:52510 latency=64.479µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.624Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/session/node/ from=127.0.0.1:52512 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.624Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/session/node/ from=127.0.0.1:52512 latency=30.156µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.624Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/session/node/ from=127.0.0.1:52512 latency=1.401µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.624Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/connect/ca/configuration from=127.0.0.1:52512 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.624Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/connect/ca/configuration from=127.0.0.1:52512 latency=110.593µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.625Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/connect/ca/configuration from=127.0.0.1:52514 error="Bad request: Request decode failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.625Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/connect/ca/configuration from=127.0.0.1:52514 latency=69.904µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.625Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/connect/ca/configuration from=127.0.0.1:52516 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.625Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/connect/ca/configuration from=127.0.0.1:52516 latency=103.222µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.626Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/connect/ca/configuration from=127.0.0.1:52518 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.626Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/connect/ca/configuration from=127.0.0.1:52518 latency=90.73µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.626Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/connect/ca/configuration from=127.0.0.1:52520 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.626Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/connect/ca/configuration from=127.0.0.1:52520 latency=33.456µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.626Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/connect/ca/configuration from=127.0.0.1:52520 latency=1.627µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.627Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/connect/intentions/ from=127.0.0.1:52520 error="Bad request: failed intention lookup: index error: UUID must be 36 characters"
>     writer.go:29: 2020-02-23T02:46:22.627Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/connect/intentions/ from=127.0.0.1:52520 latency=129.636µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.627Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/connect/intentions/ from=127.0.0.1:52522 error="Bad request: Request decode failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.627Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/connect/intentions/ from=127.0.0.1:52522 latency=104.265µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.628Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/connect/intentions/ from=127.0.0.1:52524 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.628Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/connect/intentions/ from=127.0.0.1:52524 latency=89.875µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.628Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/connect/intentions/ from=127.0.0.1:52526 error="Intention lookup failed: failed intention lookup: index error: UUID must be 36 characters"
>     writer.go:29: 2020-02-23T02:46:22.628Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/connect/intentions/ from=127.0.0.1:52526 latency=121.57µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.629Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/connect/intentions/ from=127.0.0.1:52528 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.629Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/connect/intentions/ from=127.0.0.1:52528 latency=28.568µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.629Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/connect/intentions/ from=127.0.0.1:52528 latency=1.399µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.629Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/bootstrap from=127.0.0.1:52528 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.629Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/bootstrap from=127.0.0.1:52528 latency=63.624µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.631Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/bootstrap from=127.0.0.1:52530 latency=1.557035ms
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.631Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/bootstrap from=127.0.0.1:52532 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.631Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/bootstrap from=127.0.0.1:52532 latency=66.025µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.632Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/bootstrap from=127.0.0.1:52534 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.632Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/bootstrap from=127.0.0.1:52534 latency=62.942µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.632Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/bootstrap from=127.0.0.1:52536 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.632Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/bootstrap from=127.0.0.1:52536 latency=27.99µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.632Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/bootstrap from=127.0.0.1:52536 latency=1.248µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.633Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/policy from=127.0.0.1:52536 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.633Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/policy from=127.0.0.1:52536 latency=108.809µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.633Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/policy from=127.0.0.1:52538 error="Bad request: Policy decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.633Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/policy from=127.0.0.1:52538 latency=85.61µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.633Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/policy from=127.0.0.1:52540 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.633Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/policy from=127.0.0.1:52540 latency=88.728µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.634Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/policy from=127.0.0.1:52542 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.634Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/policy from=127.0.0.1:52542 latency=72.592µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.634Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/policy from=127.0.0.1:52544 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.634Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/policy from=127.0.0.1:52544 latency=28.22µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.634Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/policy from=127.0.0.1:52544 latency=1.245µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.635Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/roles from=127.0.0.1:52544 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.635Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/roles from=127.0.0.1:52544 latency=100.96µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.635Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/roles from=127.0.0.1:52546 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.635Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/roles from=127.0.0.1:52546 latency=70.386µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.635Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/roles from=127.0.0.1:52548 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.636Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/roles from=127.0.0.1:52548 latency=67.45µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.636Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/roles from=127.0.0.1:52550 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.636Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/roles from=127.0.0.1:52550 latency=67.716µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.636Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/roles from=127.0.0.1:52552 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.636Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/roles from=127.0.0.1:52552 latency=28.803µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.636Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/roles from=127.0.0.1:52552 latency=1.341µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.637Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/event/fire/ from=127.0.0.1:52552 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.637Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/event/fire/ from=127.0.0.1:52552 latency=61.44µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.637Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/event/fire/ from=127.0.0.1:52554 latency=41.821µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.637Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/event/fire/ from=127.0.0.1:52556 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.637Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/event/fire/ from=127.0.0.1:52556 latency=66.728µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.638Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/event/fire/ from=127.0.0.1:52558 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.638Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/event/fire/ from=127.0.0.1:52558 latency=66.082µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.638Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/event/fire/ from=127.0.0.1:52560 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.638Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/event/fire/ from=127.0.0.1:52560 latency=28.76µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.638Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/event/fire/ from=127.0.0.1:52560 latency=1.356µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.638Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/txn from=127.0.0.1:52560 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.638Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/txn from=127.0.0.1:52560 latency=60.757µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.639Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/txn from=127.0.0.1:52562 latency=102.468µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.639Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/txn from=127.0.0.1:52564 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.639Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/txn from=127.0.0.1:52564 latency=67.654µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.640Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/txn from=127.0.0.1:52566 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.640Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/txn from=127.0.0.1:52566 latency=66.095µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.640Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/txn from=127.0.0.1:52568 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.640Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/txn from=127.0.0.1:52568 latency=28.903µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.640Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/txn from=127.0.0.1:52568 latency=1.281µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.640Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/check/warn/ from=127.0.0.1:52568 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.640Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/check/warn/ from=127.0.0.1:52568 latency=62.307µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.641Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/check/warn/ from=127.0.0.1:52570 error="Unknown check """
>     writer.go:29: 2020-02-23T02:46:22.641Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/check/warn/ from=127.0.0.1:52570 latency=81.746µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.641Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/check/warn/ from=127.0.0.1:52572 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.641Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/check/warn/ from=127.0.0.1:52572 latency=65.701µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.642Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/check/warn/ from=127.0.0.1:52574 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.642Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/check/warn/ from=127.0.0.1:52574 latency=64.938µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.642Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/check/warn/ from=127.0.0.1:52576 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.642Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/check/warn/ from=127.0.0.1:52576 latency=28.978µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.642Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/check/warn/ from=127.0.0.1:52576 latency=1.639µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.642Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/catalog/register from=127.0.0.1:52576 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.642Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/register from=127.0.0.1:52576 latency=60.154µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.643Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/register from=127.0.0.1:52578 latency=42.459µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.643Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/register from=127.0.0.1:52580 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.643Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/register from=127.0.0.1:52580 latency=64.391µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.644Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/register from=127.0.0.1:52582 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.644Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/register from=127.0.0.1:52582 latency=64.413µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.644Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/register from=127.0.0.1:52584 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.644Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/register from=127.0.0.1:52584 latency=28.297µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.644Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/register from=127.0.0.1:52584 latency=1.294µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.644Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/catalog/deregister from=127.0.0.1:52584 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.644Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/deregister from=127.0.0.1:52584 latency=61.706µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.645Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/deregister from=127.0.0.1:52586 latency=43.699µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.645Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/deregister from=127.0.0.1:52588 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.645Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/deregister from=127.0.0.1:52588 latency=64.203µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.646Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/deregister from=127.0.0.1:52590 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.646Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/deregister from=127.0.0.1:52590 latency=80.519µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.646Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/deregister from=127.0.0.1:52592 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.646Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/deregister from=127.0.0.1:52592 latency=28.568µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.646Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/deregister from=127.0.0.1:52592 latency=1.25µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.646Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/binding-rule from=127.0.0.1:52592 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.646Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/binding-rule from=127.0.0.1:52592 latency=68.725µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.647Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/binding-rule from=127.0.0.1:52594 error="Bad request: BindingRule decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.647Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/binding-rule from=127.0.0.1:52594 latency=68.17µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.647Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/binding-rule from=127.0.0.1:52596 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.647Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/binding-rule from=127.0.0.1:52596 latency=64.052µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.648Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/binding-rule from=127.0.0.1:52598 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.648Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/binding-rule from=127.0.0.1:52598 latency=63.898µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.648Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/binding-rule from=127.0.0.1:52600 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.648Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/binding-rule from=127.0.0.1:52600 latency=27.99µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.648Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/binding-rule from=127.0.0.1:52600 latency=1.343µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.649Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/services from=127.0.0.1:52600 latency=92.248µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.649Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/services from=127.0.0.1:52602 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.649Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/services from=127.0.0.1:52602 latency=102.832µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.649Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/services from=127.0.0.1:52604 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.649Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/services from=127.0.0.1:52604 latency=89.896µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.650Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/services from=127.0.0.1:52606 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.650Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/services from=127.0.0.1:52606 latency=92.064µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.650Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/services from=127.0.0.1:52608 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.650Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/services from=127.0.0.1:52608 latency=31.074µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.650Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/services from=127.0.0.1:52608 latency=1.238µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.651Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/service/deregister/ from=127.0.0.1:52608 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.651Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/service/deregister/ from=127.0.0.1:52608 latency=81.106µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.651Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/service/deregister/ from=127.0.0.1:52610 error="Unknown service {"" {}}"
>     writer.go:29: 2020-02-23T02:46:22.651Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/service/deregister/ from=127.0.0.1:52610 latency=87.631µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.652Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/service/deregister/ from=127.0.0.1:52612 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.652Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/service/deregister/ from=127.0.0.1:52612 latency=68.361µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.652Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/service/deregister/ from=127.0.0.1:52614 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.652Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/service/deregister/ from=127.0.0.1:52614 latency=69.677µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.653Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/service/deregister/ from=127.0.0.1:52616 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.653Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/service/deregister/ from=127.0.0.1:52616 latency=31.801µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.653Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/service/deregister/ from=127.0.0.1:52616 latency=1.39µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.653Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.acl: dropping node from result due to ACLs: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.653Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/internal/ui/nodes from=127.0.0.1:52616 latency=249.34µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.654Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/internal/ui/nodes from=127.0.0.1:52618 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.654Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/internal/ui/nodes from=127.0.0.1:52618 latency=91.388µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.654Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/internal/ui/nodes from=127.0.0.1:52620 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.654Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/internal/ui/nodes from=127.0.0.1:52620 latency=85.265µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.655Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/internal/ui/nodes from=127.0.0.1:52622 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.655Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/internal/ui/nodes from=127.0.0.1:52622 latency=67.86µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.655Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/internal/ui/nodes from=127.0.0.1:52624 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.655Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/internal/ui/nodes from=127.0.0.1:52624 latency=30.419µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.655Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/internal/ui/nodes from=127.0.0.1:52624 latency=1.401µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.655Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/internal/ui/node/ from=127.0.0.1:52624 latency=50.092µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.656Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/internal/ui/node/ from=127.0.0.1:52626 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.656Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/internal/ui/node/ from=127.0.0.1:52626 latency=69.139µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.656Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/internal/ui/node/ from=127.0.0.1:52628 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.656Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/internal/ui/node/ from=127.0.0.1:52628 latency=68.092µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.657Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/internal/ui/node/ from=127.0.0.1:52630 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.657Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/internal/ui/node/ from=127.0.0.1:52630 latency=73.575µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.657Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/internal/ui/node/ from=127.0.0.1:52632 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.657Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/internal/ui/node/ from=127.0.0.1:52632 latency=29.428µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.657Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/internal/ui/node/ from=127.0.0.1:52632 latency=1.865µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.658Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/health/service/name/ from=127.0.0.1:52632 error="Bad request: Missing service Name"
>     writer.go:29: 2020-02-23T02:46:22.658Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/health/service/name/ from=127.0.0.1:52632 latency=75.307µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.658Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/health/service/name/ from=127.0.0.1:52634 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.658Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/health/service/name/ from=127.0.0.1:52634 latency=66.993µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.658Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/health/service/name/ from=127.0.0.1:52636 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.658Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/health/service/name/ from=127.0.0.1:52636 latency=69.171µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.659Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/health/service/name/ from=127.0.0.1:52638 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.659Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/health/service/name/ from=127.0.0.1:52638 latency=200.105µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.659Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/health/service/name/ from=127.0.0.1:52640 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.659Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/health/service/name/ from=127.0.0.1:52640 latency=28.545µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.660Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/health/service/name/ from=127.0.0.1:52640 latency=1.225µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.660Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS: dropping node from result due to ACLs: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794 accessorID=
>     writer.go:29: 2020-02-23T02:46:22.660Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/members from=127.0.0.1:52640 latency=134.219µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.660Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/members from=127.0.0.1:52642 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.660Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/members from=127.0.0.1:52642 latency=66.258µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.661Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/members from=127.0.0.1:52644 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.661Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/members from=127.0.0.1:52644 latency=64.091µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.661Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/members from=127.0.0.1:52646 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.661Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/members from=127.0.0.1:52646 latency=106.54µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.662Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/members from=127.0.0.1:52648 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.662Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/members from=127.0.0.1:52648 latency=27.868µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.662Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/members from=127.0.0.1:52648 latency=1.345µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.662Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/tokens from=127.0.0.1:52648 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.662Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/tokens from=127.0.0.1:52648 latency=95.373µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.662Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/tokens from=127.0.0.1:52650 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.662Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/tokens from=127.0.0.1:52650 latency=64.798µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.663Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/tokens from=127.0.0.1:52652 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.663Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/tokens from=127.0.0.1:52652 latency=60.55µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.663Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/tokens from=127.0.0.1:52654 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.663Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/tokens from=127.0.0.1:52654 latency=54.204µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.664Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/tokens from=127.0.0.1:52656 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.664Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/tokens from=127.0.0.1:52656 latency=22.941µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.664Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/tokens from=127.0.0.1:52656 latency=928ns
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.664Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/operator/keyring from=127.0.0.1:52656 error="Reading keyring denied by ACLs"
>     writer.go:29: 2020-02-23T02:46:22.664Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/operator/keyring from=127.0.0.1:52656 latency=81.71µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.664Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/operator/keyring from=127.0.0.1:52658 error="Bad request: Request decode failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.664Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/operator/keyring from=127.0.0.1:52658 latency=53.408µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.664Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/operator/keyring from=127.0.0.1:52660 error="Bad request: Request decode failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.665Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/operator/keyring from=127.0.0.1:52660 latency=54.374µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.665Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/operator/keyring from=127.0.0.1:52662 error="Bad request: Request decode failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.665Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/operator/keyring from=127.0.0.1:52662 latency=63.876µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.665Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/operator/keyring from=127.0.0.1:52664 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.665Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/operator/keyring from=127.0.0.1:52664 latency=21.92µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.665Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/operator/keyring from=127.0.0.1:52664 latency=1.222µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.665Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/checks from=127.0.0.1:52664 latency=61.971µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.666Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/checks from=127.0.0.1:52666 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.666Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/checks from=127.0.0.1:52666 latency=55.816µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.666Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/checks from=127.0.0.1:52668 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.666Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/checks from=127.0.0.1:52668 latency=52.895µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.666Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/checks from=127.0.0.1:52670 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.666Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/checks from=127.0.0.1:52670 latency=52.749µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.667Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/checks from=127.0.0.1:52672 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.667Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/checks from=127.0.0.1:52672 latency=21.964µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.667Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/checks from=127.0.0.1:52672 latency=1.104µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.667Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/coordinate/update from=127.0.0.1:52672 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.667Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/coordinate/update from=127.0.0.1:52672 latency=49.627µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.667Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/coordinate/update from=127.0.0.1:52674 latency=35.512µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.668Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/coordinate/update from=127.0.0.1:52676 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.668Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/coordinate/update from=127.0.0.1:52676 latency=56.173µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.668Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/coordinate/update from=127.0.0.1:52678 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.668Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/coordinate/update from=127.0.0.1:52678 latency=54.495µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.668Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/coordinate/update from=127.0.0.1:52680 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.668Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/coordinate/update from=127.0.0.1:52680 latency=23.417µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.669Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/coordinate/update from=127.0.0.1:52680 latency=1.333µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.669Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/rules/translate from=127.0.0.1:52680 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.669Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/rules/translate from=127.0.0.1:52680 latency=48.809µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.669Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/rules/translate from=127.0.0.1:52682 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.669Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/rules/translate from=127.0.0.1:52682 latency=65.115µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.670Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/rules/translate from=127.0.0.1:52684 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.670Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/rules/translate from=127.0.0.1:52684 latency=77.078µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.670Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/rules/translate from=127.0.0.1:52686 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.670Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/rules/translate from=127.0.0.1:52686 latency=65.641µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.670Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/rules/translate from=127.0.0.1:52688 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.670Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/rules/translate from=127.0.0.1:52688 latency=29.464µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.671Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/rules/translate from=127.0.0.1:52688 latency=1.343µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.671Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/metrics from=127.0.0.1:52688 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.671Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/metrics from=127.0.0.1:52688 latency=73.905µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.671Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/metrics from=127.0.0.1:52690 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.671Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/metrics from=127.0.0.1:52690 latency=63.757µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.672Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/metrics from=127.0.0.1:52692 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.672Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/metrics from=127.0.0.1:52692 latency=80.485µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.672Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/metrics from=127.0.0.1:52694 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.672Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/metrics from=127.0.0.1:52694 latency=72.919µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.673Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/metrics from=127.0.0.1:52696 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.673Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/metrics from=127.0.0.1:52696 latency=30.093µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.673Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/metrics from=127.0.0.1:52696 latency=1.347µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.673Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.acl: dropping node from result due to ACLs: node=Node-5451bad8-faf6-bf66-4f98-db625d6f0794
>     writer.go:29: 2020-02-23T02:46:22.673Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/nodes from=127.0.0.1:52696 latency=194.837µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.674Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/nodes from=127.0.0.1:52698 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.674Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/nodes from=127.0.0.1:52698 latency=66.623µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.674Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/nodes from=127.0.0.1:52700 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.674Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/nodes from=127.0.0.1:52700 latency=66.065µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.675Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/nodes from=127.0.0.1:52702 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.675Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/nodes from=127.0.0.1:52702 latency=65.345µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.675Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/nodes from=127.0.0.1:52704 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.675Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/nodes from=127.0.0.1:52704 latency=30.736µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.675Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/nodes from=127.0.0.1:52704 latency=1.203µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.675Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:52704 latency=62.697µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.676Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/status/leader from=127.0.0.1:52706 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.676Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/status/leader from=127.0.0.1:52706 latency=70.583µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.676Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/status/leader from=127.0.0.1:52708 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.676Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/status/leader from=127.0.0.1:52708 latency=64.869µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.677Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/status/leader from=127.0.0.1:52710 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.677Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/status/leader from=127.0.0.1:52710 latency=65.178µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.677Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/status/leader from=127.0.0.1:52712 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.677Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/status/leader from=127.0.0.1:52712 latency=31.238µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.677Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/status/leader from=127.0.0.1:52712 latency=1.363µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.678Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/connect/ca/roots from=127.0.0.1:52712 latency=97.555µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.678Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/connect/ca/roots from=127.0.0.1:52714 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.678Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/connect/ca/roots from=127.0.0.1:52714 latency=68.827µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.679Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/connect/ca/roots from=127.0.0.1:52716 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.679Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/connect/ca/roots from=127.0.0.1:52716 latency=66.655µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.679Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/connect/ca/roots from=127.0.0.1:52718 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.679Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/connect/ca/roots from=127.0.0.1:52718 latency=67.185µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.680Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/connect/ca/roots from=127.0.0.1:52720 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.680Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/connect/ca/roots from=127.0.0.1:52720 latency=29.928µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.680Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/connect/ca/roots from=127.0.0.1:52720 latency=1.353µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.680Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/coordinate/datacenters from=127.0.0.1:52720 latency=129.463µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.680Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/coordinate/datacenters from=127.0.0.1:52722 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.681Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/coordinate/datacenters from=127.0.0.1:52722 latency=64.335µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.681Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/coordinate/datacenters from=127.0.0.1:52724 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.681Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/coordinate/datacenters from=127.0.0.1:52724 latency=66.442µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.681Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/coordinate/datacenters from=127.0.0.1:52726 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.681Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/coordinate/datacenters from=127.0.0.1:52726 latency=64.048µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.682Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/coordinate/datacenters from=127.0.0.1:52728 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.682Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/coordinate/datacenters from=127.0.0.1:52728 latency=29.712µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.682Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/coordinate/datacenters from=127.0.0.1:52728 latency=1.206µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.682Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/check/register from=127.0.0.1:52728 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.682Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/check/register from=127.0.0.1:52728 latency=60.62µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.683Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/check/register from=127.0.0.1:52730 latency=42.484µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.683Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/check/register from=127.0.0.1:52732 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.683Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/check/register from=127.0.0.1:52732 latency=66.071µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.689Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/check/register from=127.0.0.1:52734 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.690Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/check/register from=127.0.0.1:52734 latency=96.93µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.690Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/check/register from=127.0.0.1:52736 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.690Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/check/register from=127.0.0.1:52736 latency=29.53µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.691Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/check/register from=127.0.0.1:52736 latency=1.533µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.691Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/token/ from=127.0.0.1:52736 error="Bad request: Missing token ID"
>     writer.go:29: 2020-02-23T02:46:22.691Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/token/ from=127.0.0.1:52736 latency=64.986µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.691Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/token/ from=127.0.0.1:52738 error="Bad request: Token decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.691Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/token/ from=127.0.0.1:52738 latency=73.746µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.692Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/token/ from=127.0.0.1:52740 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.692Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/token/ from=127.0.0.1:52740 latency=65.853µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.692Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/token/ from=127.0.0.1:52742 error="Bad request: Missing token ID"
>     writer.go:29: 2020-02-23T02:46:22.692Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/token/ from=127.0.0.1:52742 latency=80.805µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.693Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/token/ from=127.0.0.1:52744 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.693Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/token/ from=127.0.0.1:52744 latency=31.063µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.693Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/token/ from=127.0.0.1:52744 latency=1.372µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.693Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/catalog/node/ from=127.0.0.1:52744 latency=72.044µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.693Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/catalog/node/ from=127.0.0.1:52746 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.693Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/catalog/node/ from=127.0.0.1:52746 latency=90.422µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.694Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/catalog/node/ from=127.0.0.1:52748 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.694Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/catalog/node/ from=127.0.0.1:52748 latency=65.775µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.694Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/catalog/node/ from=127.0.0.1:52750 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.694Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/catalog/node/ from=127.0.0.1:52750 latency=63.378µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.695Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/catalog/node/ from=127.0.0.1:52752 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.695Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/catalog/node/ from=127.0.0.1:52752 latency=27.506µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.695Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/catalog/node/ from=127.0.0.1:52752 latency=1.327µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.695Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/snapshot from=127.0.0.1:52752 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.695Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/snapshot from=127.0.0.1:52752 latency=86.583µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.695Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/snapshot from=127.0.0.1:52754 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.695Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/snapshot from=127.0.0.1:52754 latency=75.183µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.696Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/snapshot from=127.0.0.1:52756 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.696Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/snapshot from=127.0.0.1:52756 latency=65.341µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.696Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/snapshot from=127.0.0.1:52758 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.696Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/snapshot from=127.0.0.1:52758 latency=65.116µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.696Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/snapshot from=127.0.0.1:52760 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.696Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/snapshot from=127.0.0.1:52760 latency=28.441µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.697Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/snapshot from=127.0.0.1:52760 latency=1.471µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.697Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/logout from=127.0.0.1:52760 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.697Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/logout from=127.0.0.1:52760 latency=60.821µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.697Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/logout from=127.0.0.1:52762 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.697Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/logout from=127.0.0.1:52762 latency=59.615µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.698Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/logout from=127.0.0.1:52764 error="ACL not found"
>     writer.go:29: 2020-02-23T02:46:22.698Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/logout from=127.0.0.1:52764 latency=62.327µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.698Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/logout from=127.0.0.1:52766 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.698Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/logout from=127.0.0.1:52766 latency=64.127µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.698Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/logout from=127.0.0.1:52768 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.698Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/logout from=127.0.0.1:52768 latency=31.044µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.698Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/logout from=127.0.0.1:52768 latency=1.373µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.699Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/health/service/id/ from=127.0.0.1:52768 error="Bad request: Missing serviceID"
>     writer.go:29: 2020-02-23T02:46:22.699Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/health/service/id/ from=127.0.0.1:52768 latency=65.052µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.699Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/health/service/id/ from=127.0.0.1:52770 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.699Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/health/service/id/ from=127.0.0.1:52770 latency=67.083µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.700Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/health/service/id/ from=127.0.0.1:52772 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.700Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/health/service/id/ from=127.0.0.1:52772 latency=67.108µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.700Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/health/service/id/ from=127.0.0.1:52774 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.700Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/health/service/id/ from=127.0.0.1:52774 latency=73.7µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.700Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/health/service/id/ from=127.0.0.1:52776 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.700Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/health/service/id/ from=127.0.0.1:52776 latency=29.64µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.701Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/health/service/id/ from=127.0.0.1:52776 latency=1.503µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.701Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/config/ from=127.0.0.1:52776 latency=107.9µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.701Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/config/ from=127.0.0.1:52778 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.701Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/config/ from=127.0.0.1:52778 latency=65.387µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.702Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/config/ from=127.0.0.1:52780 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.702Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/config/ from=127.0.0.1:52780 latency=68.421µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.702Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/config/ from=127.0.0.1:52782 latency=64.237µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.703Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/config/ from=127.0.0.1:52784 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.703Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/config/ from=127.0.0.1:52784 latency=32.951µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.703Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/config/ from=127.0.0.1:52784 latency=1.241µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.703Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/connect/intentions/check from=127.0.0.1:52784 error="required query parameter 'source' not set"
>     writer.go:29: 2020-02-23T02:46:22.703Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/connect/intentions/check from=127.0.0.1:52784 latency=82.041µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.703Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/connect/intentions/check from=127.0.0.1:52786 error="method PUT not allowed"
>     writer.go:29: 2020-02-23T02:46:22.703Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/connect/intentions/check from=127.0.0.1:52786 latency=82.367µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.704Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/connect/intentions/check from=127.0.0.1:52788 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.704Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/connect/intentions/check from=127.0.0.1:52788 latency=66.58µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.704Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/connect/intentions/check from=127.0.0.1:52790 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.704Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/connect/intentions/check from=127.0.0.1:52790 latency=66.959µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.705Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/connect/intentions/check from=127.0.0.1:52792 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.705Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/connect/intentions/check from=127.0.0.1:52792 latency=31.4µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.705Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/connect/intentions/check from=127.0.0.1:52792 latency=1.331µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.705Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/acl/policy/ from=127.0.0.1:52792 error="Bad request: Missing policy ID"
>     writer.go:29: 2020-02-23T02:46:22.705Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/acl/policy/ from=127.0.0.1:52792 latency=63.716µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.706Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/acl/policy/ from=127.0.0.1:52794 error="Bad request: Policy decoding failed: EOF"
>     writer.go:29: 2020-02-23T02:46:22.706Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/acl/policy/ from=127.0.0.1:52794 latency=68.16µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.706Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/acl/policy/ from=127.0.0.1:52796 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.706Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/acl/policy/ from=127.0.0.1:52796 latency=66.062µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.706Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/acl/policy/ from=127.0.0.1:52798 error="Bad request: Missing policy ID"
>     writer.go:29: 2020-02-23T02:46:22.707Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/acl/policy/ from=127.0.0.1:52798 latency=83.302µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.707Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/acl/policy/ from=127.0.0.1:52800 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.707Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/acl/policy/ from=127.0.0.1:52800 latency=31.33µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.707Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/acl/policy/ from=127.0.0.1:52800 latency=1.555µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.707Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/maintenance from=127.0.0.1:52800 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.707Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/maintenance from=127.0.0.1:52800 latency=63.982µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.708Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/maintenance from=127.0.0.1:52802 latency=38.833µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.708Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/maintenance from=127.0.0.1:52804 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.708Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/maintenance from=127.0.0.1:52804 latency=65.818µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.709Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/maintenance from=127.0.0.1:52806 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.709Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/maintenance from=127.0.0.1:52806 latency=64.808µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.709Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/maintenance from=127.0.0.1:52808 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.709Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/maintenance from=127.0.0.1:52808 latency=50.654µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.709Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/maintenance from=127.0.0.1:52808 latency=1.489µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.710Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=GET url=/v1/agent/leave from=127.0.0.1:52808 error="method GET not allowed"
>     writer.go:29: 2020-02-23T02:46:22.710Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=GET url=/v1/agent/leave from=127.0.0.1:52808 latency=101.295µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/GET_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.710Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=PUT url=/v1/agent/leave from=127.0.0.1:52810 error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:22.710Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=PUT url=/v1/agent/leave from=127.0.0.1:52810 latency=135.766µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/PUT_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.711Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=POST url=/v1/agent/leave from=127.0.0.1:52812 error="method POST not allowed"
>     writer.go:29: 2020-02-23T02:46:22.711Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=POST url=/v1/agent/leave from=127.0.0.1:52812 latency=89.654µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/POST_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.711Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=DELETE url=/v1/agent/leave from=127.0.0.1:52814 error="method DELETE not allowed"
>     writer.go:29: 2020-02-23T02:46:22.712Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=DELETE url=/v1/agent/leave from=127.0.0.1:52814 latency=80.147µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/DELETE_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.712Z [ERROR] TestHTTPAPI_MethodNotAllowed_OSS.http: Request error: method=HEAD url=/v1/agent/leave from=127.0.0.1:52816 error="method HEAD not allowed"
>     writer.go:29: 2020-02-23T02:46:22.712Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=HEAD url=/v1/agent/leave from=127.0.0.1:52816 latency=32.194µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/HEAD_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.712Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.http: Request finished: method=OPTIONS url=/v1/agent/leave from=127.0.0.1:52816 latency=1.506µs
>     --- PASS: TestHTTPAPI_MethodNotAllowed_OSS/OPTIONS_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.712Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:22.712Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:22.712Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.713Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.713Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.713Z [WARN]  TestHTTPAPI_MethodNotAllowed_OSS.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:22.713Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.713Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.713Z [DEBUG] TestHTTPAPI_MethodNotAllowed_OSS.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.716Z [WARN]  TestHTTPAPI_MethodNotAllowed_OSS.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:22.717Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: consul server down
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: shutdown complete
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Stopping server: protocol=DNS address=127.0.0.1:16823 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Stopping server: protocol=DNS address=127.0.0.1:16823 network=udp
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Stopping server: protocol=HTTP address=127.0.0.1:16824 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:22.718Z [INFO]  TestHTTPAPI_MethodNotAllowed_OSS: Endpoints down
> === RUN   TestHTTPAPI_OptionMethod_OSS
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query/xxx/execute
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query/xxx/explain
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/ui/nodes
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/ui/node/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/health/service/name/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/members
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/tokens
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/keyring
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/checks
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/update
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/metrics
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/nodes
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/status/leader
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/ca/roots
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/datacenters
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/register
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/snapshot
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/token/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/node/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/logout
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/leave
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/health/service/id/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/config/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions/check
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/policy/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/maintenance
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/ui/services
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/list
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/token/self
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/token/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/host
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/register
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/connect/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/destroy/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/rules/translate/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/info/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/replication
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/node-services/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/create
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/info/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/connect/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/update/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/connect/ca/roots
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/connect/ca/leaf/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/datacenters
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/list
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/role
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/state/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/event/list
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/service/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/config
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/kv/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/raft/configuration
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/clone/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/destroy/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/role/name/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/binding-rules
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/checks/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/renew/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions/match
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/create
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/services
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/node/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/join/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/nodes
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/auth-methods
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/self
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/role/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/token
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/update
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/policies
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/service/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/autopilot/health
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/auth-method
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/node/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/discovery-chain/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/login
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/node/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/roles
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/policy
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/deregister
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/event/fire/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/txn
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/register
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/services
> --- PASS: TestHTTPAPI_OptionMethod_OSS (0.12s)
>     writer.go:29: 2020-02-23T02:46:22.734Z [WARN]  TestHTTPAPI_OptionMethod_OSS: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:22.734Z [WARN]  TestHTTPAPI_OptionMethod_OSS: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:22.735Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:22.735Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:22.747Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:053a3a72-e189-5898-ee8b-ee12460d1d4f Address:127.0.0.1:16834}]"
>     writer.go:29: 2020-02-23T02:46:22.748Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.serf.wan: serf: EventMemberJoin: Node-053a3a72-e189-5898-ee8b-ee12460d1d4f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:22.748Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.serf.lan: serf: EventMemberJoin: Node-053a3a72-e189-5898-ee8b-ee12460d1d4f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:22.748Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Started DNS server: address=127.0.0.1:16829 network=udp
>     writer.go:29: 2020-02-23T02:46:22.748Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.raft: entering follower state: follower="Node at 127.0.0.1:16834 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:22.749Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: Adding LAN server: server="Node-053a3a72-e189-5898-ee8b-ee12460d1d4f (Addr: tcp/127.0.0.1:16834) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:22.749Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: Handled event for server in area: event=member-join server=Node-053a3a72-e189-5898-ee8b-ee12460d1d4f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:22.749Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Started DNS server: address=127.0.0.1:16829 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.749Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Started HTTP server: address=127.0.0.1:16830 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.749Z [INFO]  TestHTTPAPI_OptionMethod_OSS: started state syncer
>     writer.go:29: 2020-02-23T02:46:22.800Z [WARN]  TestHTTPAPI_OptionMethod_OSS.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:22.800Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.raft: entering candidate state: node="Node at 127.0.0.1:16834 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:22.803Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:22.803Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.server.raft: vote granted: from=053a3a72-e189-5898-ee8b-ee12460d1d4f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:22.803Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:22.803Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.raft: entering leader state: leader="Node at 127.0.0.1:16834 [Leader]"
>     writer.go:29: 2020-02-23T02:46:22.804Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:22.804Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: New leader elected: payload=Node-053a3a72-e189-5898-ee8b-ee12460d1d4f
>     writer.go:29: 2020-02-23T02:46:22.806Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:22.807Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:22.809Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:22.809Z [INFO]  TestHTTPAPI_OptionMethod_OSS.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.809Z [INFO]  TestHTTPAPI_OptionMethod_OSS.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.810Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.serf.lan: serf: EventMemberUpdate: Node-053a3a72-e189-5898-ee8b-ee12460d1d4f
>     writer.go:29: 2020-02-23T02:46:22.810Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.serf.wan: serf: EventMemberUpdate: Node-053a3a72-e189-5898-ee8b-ee12460d1d4f.dc1
>     writer.go:29: 2020-02-23T02:46:22.810Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: Handled event for server in area: event=member-update server=Node-053a3a72-e189-5898-ee8b-ee12460d1d4f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:22.816Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:22.821Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:22.822Z [INFO]  TestHTTPAPI_OptionMethod_OSS.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.822Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.server: Skipping self join check for node since the cluster is too small: node=Node-053a3a72-e189-5898-ee8b-ee12460d1d4f
>     writer.go:29: 2020-02-23T02:46:22.822Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: member joined, marking health alive: member=Node-053a3a72-e189-5898-ee8b-ee12460d1d4f
>     writer.go:29: 2020-02-23T02:46:22.824Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.server: Skipping self join check for node since the cluster is too small: node=Node-053a3a72-e189-5898-ee8b-ee12460d1d4f
>     writer.go:29: 2020-02-23T02:46:22.833Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/query from= latency=1.858µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.833Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/query/ from= latency=93.761µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.833Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/query/xxx/execute from= latency=8.261µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query/xxx/execute (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.833Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/query/xxx/explain from= latency=7.787µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/query/xxx/explain (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/internal/ui/nodes from= latency=1.404µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/ui/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/internal/ui/node/ from= latency=1.101µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/ui/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/health/service/name/ from= latency=897ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/health/service/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/members from= latency=771ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/members (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/tokens from= latency=1.616µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/operator/keyring from= latency=1.037µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/checks from= latency=790ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/coordinate/update from= latency=727ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/rules/translate from= latency=837ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/metrics from= latency=724ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/metrics (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/nodes from= latency=688ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/status/leader from= latency=659ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/status/leader (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/connect/ca/roots from= latency=762ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/coordinate/datacenters from= latency=630ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/check/register from= latency=668ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/snapshot from= latency=1.269µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/token/ from= latency=957ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/node/ from= latency=795ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/logout from= latency=714ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/leave from= latency=771ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.834Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/health/service/id/ from= latency=749ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/health/service/id/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/config/ from= latency=1.124µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/connect/intentions/check from= latency=759ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions/check (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/policy/ from= latency=943ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/maintenance from= latency=782ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/internal/ui/services from= latency=737ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/ui/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/session/list from= latency=1.195µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/token/self from= latency=796ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/token/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/token/ from= latency=718ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/host from= latency=697ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/host (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/service/register from= latency=732ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/health/connect/ from= latency=1.218µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/destroy/ from= latency=1.08µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/rules/translate/ from= latency=762ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/rules/translate/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/session/info/ from= latency=731ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/replication from= latency=831ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/replication (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/binding-rule/ from= latency=1.117µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/node-services/ from= latency=730ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/node-services/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/create from= latency=615ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/info/ from= latency=836ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/info/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/check/pass/ from= latency=745ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.835Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/connect/ from= latency=773ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/connect/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/auth-method/ from= latency=846ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/check/fail/ from= latency=811ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/check/update/ from= latency=772ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/connect/ca/roots from= latency=930ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/connect/ca/roots (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/connect/ca/leaf/ from= latency=772ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/connect/ca/leaf/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/datacenters from= latency=696ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/datacenters (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/list from= latency=820ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/role from= latency=944ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/health/state/ from= latency=739ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/state/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/service/maintenance/ from= latency=696ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/event/list from= latency=636ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/event/list (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/operator/raft/peer from= latency=712ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/operator/autopilot/configuration from= latency=943ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/service/ from= latency=855ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/service/ from= latency=752ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/config from= latency=756ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/kv/ from= latency=962ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/operator/raft/configuration from= latency=1.043µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/raft/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/clone/ from= latency=691ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/session/destroy/ from= latency=1.075µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/role/name/ from= latency=630ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/role/name/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/binding-rules from= latency=771ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/binding-rules (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.836Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/health/checks/ from= latency=870ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/checks/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/session/renew/ from= latency=1.108µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/connect/intentions/match from= latency=618ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions/match (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/session/create from= latency=631ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/services from= latency=669ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/health/node/ from= latency=691ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/join/ from= latency=682ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/coordinate/nodes from= latency=789ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/nodes (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/auth-methods from= latency=751ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/auth-methods (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/self from= latency=1.199µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/self (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/role/ from= latency=1.057µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/token from= latency=819ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/check/deregister/ from= latency=775ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/connect/intentions from= latency=1.038µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/update from= latency=791ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/policies from= latency=744ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/policies (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/health/service/ from= latency=790ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/health/service/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/operator/autopilot/health from= latency=801ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/operator/autopilot/health (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/auth-method from= latency=800ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/coordinate/node/ from= latency=650ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/coordinate/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/force-leave/ from= latency=654ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.837Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/discovery-chain/ from= latency=853ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/login from= latency=1.259µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/connect/authorize from= latency=678ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/internal/acl/authorize from= latency=946ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/session/node/ from= latency=771ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/session/node/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/roles from= latency=840ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/roles (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/connect/ca/configuration from= latency=781ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/connect/intentions/ from= latency=906ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/bootstrap from= latency=661ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/policy from= latency=795ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/deregister from= latency=619ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/event/fire/ from= latency=920ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/txn from= latency=780ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/check/warn/ from= latency=1.083µs
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/catalog/register from= latency=683ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/service/deregister/ from= latency=824ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/acl/binding-rule from= latency=573ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.http: Request finished: method=OPTIONS url=http://127.0.0.1:16830/v1/agent/services from= latency=659ns
>     --- PASS: TestHTTPAPI_OptionMethod_OSS/OPTIONS_/v1/agent/services (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.838Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:22.838Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.838Z [WARN]  TestHTTPAPI_OptionMethod_OSS.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:22.838Z [ERROR] TestHTTPAPI_OptionMethod_OSS.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.838Z [DEBUG] TestHTTPAPI_OptionMethod_OSS.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.840Z [WARN]  TestHTTPAPI_OptionMethod_OSS.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: consul server down
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: shutdown complete
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Stopping server: protocol=DNS address=127.0.0.1:16829 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Stopping server: protocol=DNS address=127.0.0.1:16829 network=udp
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Stopping server: protocol=HTTP address=127.0.0.1:16830 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:22.842Z [INFO]  TestHTTPAPI_OptionMethod_OSS: Endpoints down
> === RUN   TestHTTPAPI_AllowedNets_OSS
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/session/renew/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/session/create
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/join/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/token
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/deregister/
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/connect/intentions
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/update
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/role/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/role/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/auth-method
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/discovery-chain/
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/acl/login
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/force-leave/
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/agent/connect/authorize
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/internal/acl/authorize
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/connect/ca/configuration
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/connect/intentions/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/connect/intentions/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/bootstrap
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/policy
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/event/fire/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/txn
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/warn/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/catalog/register
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/catalog/deregister
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/binding-rule
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/service/deregister/
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/operator/keyring
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/operator/keyring
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/operator/keyring
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/coordinate/update
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/acl/rules/translate
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/register
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/token/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/token/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/snapshot
> === RUN   TestHTTPAPI_AllowedNets_OSS/POST_/v1/acl/logout
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/config/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/policy/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/policy/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/maintenance
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/leave
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/token/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/service/register
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/destroy/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/create
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/binding-rule/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/pass/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/fail/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/update/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/role
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/auth-method/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/service/maintenance/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/operator/raft/peer
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/operator/autopilot/configuration
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/config
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/kv/
> === RUN   TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/kv/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/clone/
> === RUN   TestHTTPAPI_AllowedNets_OSS/PUT_/v1/session/destroy/
> --- PASS: TestHTTPAPI_AllowedNets_OSS (0.11s)
>     writer.go:29: 2020-02-23T02:46:22.849Z [WARN]  TestHTTPAPI_AllowedNets_OSS: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:22.849Z [WARN]  TestHTTPAPI_AllowedNets_OSS: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:22.850Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:22.850Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:22.859Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b0dd4cfd-4f13-7138-3ce7-c9035ace3eba Address:127.0.0.1:16840}]"
>     writer.go:29: 2020-02-23T02:46:22.859Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.raft: entering follower state: follower="Node at 127.0.0.1:16840 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:22.860Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.serf.wan: serf: EventMemberJoin: Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:22.860Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.serf.lan: serf: EventMemberJoin: Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:22.860Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: Handled event for server in area: event=member-join server=Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:22.860Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: Adding LAN server: server="Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba (Addr: tcp/127.0.0.1:16840) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:22.861Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Started DNS server: address=127.0.0.1:16835 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.861Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Started DNS server: address=127.0.0.1:16835 network=udp
>     writer.go:29: 2020-02-23T02:46:22.861Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Started HTTP server: address=127.0.0.1:16836 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.861Z [INFO]  TestHTTPAPI_AllowedNets_OSS: started state syncer
>     writer.go:29: 2020-02-23T02:46:22.900Z [WARN]  TestHTTPAPI_AllowedNets_OSS.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:22.900Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.raft: entering candidate state: node="Node at 127.0.0.1:16840 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:22.903Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:22.903Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.server.raft: vote granted: from=b0dd4cfd-4f13-7138-3ce7-c9035ace3eba term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:22.903Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:22.903Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.raft: entering leader state: leader="Node at 127.0.0.1:16840 [Leader]"
>     writer.go:29: 2020-02-23T02:46:22.903Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:22.903Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: New leader elected: payload=Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba
>     writer.go:29: 2020-02-23T02:46:22.905Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:22.906Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:22.909Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:22.909Z [INFO]  TestHTTPAPI_AllowedNets_OSS.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.909Z [INFO]  TestHTTPAPI_AllowedNets_OSS.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.909Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.serf.lan: serf: EventMemberUpdate: Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba
>     writer.go:29: 2020-02-23T02:46:22.909Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.serf.wan: serf: EventMemberUpdate: Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba.dc1
>     writer.go:29: 2020-02-23T02:46:22.910Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: Handled event for server in area: event=member-update server=Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:22.915Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:22.920Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:22.920Z [INFO]  TestHTTPAPI_AllowedNets_OSS.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.920Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.server: Skipping self join check for node since the cluster is too small: node=Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba
>     writer.go:29: 2020-02-23T02:46:22.921Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: member joined, marking health alive: member=Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba
>     writer.go:29: 2020-02-23T02:46:22.923Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.server: Skipping self join check for node since the cluster is too small: node=Node-b0dd4cfd-4f13-7138-3ce7-c9035ace3eba
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/session/renew/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/session/renew/ from=192.168.1.2:5555 latency=44.971µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/session/renew/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/session/create from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/session/create from=192.168.1.2:5555 latency=20.685µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/session/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/join/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/join/ from=192.168.1.2:5555 latency=18.251µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/join/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/token from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/token from=192.168.1.2:5555 latency=19.62µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/token (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/check/deregister/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/check/deregister/ from=192.168.1.2:5555 latency=19.613µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/connect/intentions from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/connect/intentions from=192.168.1.2:5555 latency=20.702µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/connect/intentions (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.940Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/update from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.940Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/update from=192.168.1.2:5555 latency=19.696µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/role/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/role/ from=192.168.1.2:5555 latency=20.199µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/acl/role/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/acl/role/ from=192.168.1.2:5555 latency=37.129µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/role/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/auth-method from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/auth-method from=192.168.1.2:5555 latency=17.427µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/auth-method (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/discovery-chain/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/discovery-chain/ from=192.168.1.2:5555 latency=39.686µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/discovery-chain/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/acl/login from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/acl/login from=192.168.1.2:5555 latency=20.721µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/acl/login (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/force-leave/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/force-leave/ from=192.168.1.2:5555 latency=17.39µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/force-leave/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/agent/connect/authorize from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/agent/connect/authorize from=192.168.1.2:5555 latency=37.877µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/agent/connect/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/internal/acl/authorize from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.941Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/internal/acl/authorize from=192.168.1.2:5555 latency=23.78µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/internal/acl/authorize (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.941Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/connect/ca/configuration from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/connect/ca/configuration from=192.168.1.2:5555 latency=18.914µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/connect/ca/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/connect/intentions/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/connect/intentions/ from=192.168.1.2:5555 latency=17.943µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/connect/intentions/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/connect/intentions/ from=192.168.1.2:5555 latency=33.949µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/connect/intentions/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/bootstrap from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/bootstrap from=192.168.1.2:5555 latency=20.08µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/bootstrap (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/policy from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/policy from=192.168.1.2:5555 latency=35.021µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/policy (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/event/fire/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/event/fire/ from=192.168.1.2:5555 latency=17.444µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/event/fire/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/txn from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/txn from=192.168.1.2:5555 latency=36.096µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/txn (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/check/warn/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/check/warn/ from=192.168.1.2:5555 latency=46.445µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/warn/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.942Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/catalog/register from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.942Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/catalog/register from=192.168.1.2:5555 latency=20.509µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/catalog/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/catalog/deregister from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/catalog/deregister from=192.168.1.2:5555 latency=21.146µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/catalog/deregister (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/binding-rule from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/binding-rule from=192.168.1.2:5555 latency=36.351µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/binding-rule (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/service/deregister/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/service/deregister/ from=192.168.1.2:5555 latency=18.355µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/service/deregister/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/operator/keyring from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/operator/keyring from=192.168.1.2:5555 latency=35.12µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/operator/keyring from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/operator/keyring from=192.168.1.2:5555 latency=20.64µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/operator/keyring from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/operator/keyring from=192.168.1.2:5555 latency=17.978µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/operator/keyring (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/coordinate/update from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/coordinate/update from=192.168.1.2:5555 latency=18.11µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/coordinate/update (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.943Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/acl/rules/translate from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.943Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/acl/rules/translate from=192.168.1.2:5555 latency=18.229µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/acl/rules/translate (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/check/register from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/check/register from=192.168.1.2:5555 latency=17.789µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/token/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/token/ from=192.168.1.2:5555 latency=18.49µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/acl/token/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/acl/token/ from=192.168.1.2:5555 latency=17.447µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/snapshot from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/snapshot from=192.168.1.2:5555 latency=17.962µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/snapshot (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=POST url=http://127.0.0.1:16836/v1/acl/logout from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=POST url=http://127.0.0.1:16836/v1/acl/logout from=192.168.1.2:5555 latency=35.807µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/POST_/v1/acl/logout (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/config/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/config/ from=192.168.1.2:5555 latency=17.993µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/config/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/policy/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/policy/ from=192.168.1.2:5555 latency=17.658µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/acl/policy/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/acl/policy/ from=192.168.1.2:5555 latency=18.29µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/policy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/maintenance from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/maintenance from=192.168.1.2:5555 latency=18.305µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/maintenance (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.944Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/leave from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.944Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/leave from=192.168.1.2:5555 latency=17.739µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/leave (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/token/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/token/ from=192.168.1.2:5555 latency=17.7µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/token/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/service/register from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/service/register from=192.168.1.2:5555 latency=18.681µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/service/register (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/destroy/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/destroy/ from=192.168.1.2:5555 latency=18.242µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/create from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/create from=192.168.1.2:5555 latency=26.816µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/create (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/binding-rule/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/binding-rule/ from=192.168.1.2:5555 latency=46.084µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/acl/binding-rule/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/acl/binding-rule/ from=192.168.1.2:5555 latency=22.447µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/binding-rule/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/check/pass/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/check/pass/ from=192.168.1.2:5555 latency=18.229µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/pass/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/check/fail/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/check/fail/ from=192.168.1.2:5555 latency=17.737µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/fail/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.945Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/check/update/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.945Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/check/update/ from=192.168.1.2:5555 latency=17.768µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/check/update/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/role from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/role from=192.168.1.2:5555 latency=17.364µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/role (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/auth-method/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/auth-method/ from=192.168.1.2:5555 latency=17.671µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/acl/auth-method/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/acl/auth-method/ from=192.168.1.2:5555 latency=21.413µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/acl/auth-method/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/agent/service/maintenance/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/agent/service/maintenance/ from=192.168.1.2:5555 latency=18.66µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/agent/service/maintenance/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/operator/raft/peer from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/operator/raft/peer from=192.168.1.2:5555 latency=18.07µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/operator/raft/peer (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/operator/autopilot/configuration from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/operator/autopilot/configuration from=192.168.1.2:5555 latency=18.729µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/operator/autopilot/configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/config from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/config from=192.168.1.2:5555 latency=17.75µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/config (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/kv/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/kv/ from=192.168.1.2:5555 latency=18.053µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.946Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=DELETE url=http://127.0.0.1:16836/v1/kv/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.946Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=DELETE url=http://127.0.0.1:16836/v1/kv/ from=192.168.1.2:5555 latency=19.05µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/DELETE_/v1/kv/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.947Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/acl/clone/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/acl/clone/ from=192.168.1.2:5555 latency=18.476µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/acl/clone/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.947Z [ERROR] TestHTTPAPI_AllowedNets_OSS.http: Request error: method=PUT url=http://127.0.0.1:16836/v1/session/destroy/ from=192.168.1.2:5555 error="Access is restricted"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.http: Request finished: method=PUT url=http://127.0.0.1:16836/v1/session/destroy/ from=192.168.1.2:5555 latency=17.054µs
>     --- PASS: TestHTTPAPI_AllowedNets_OSS/PUT_/v1/session/destroy/ (0.00s)
>     writer.go:29: 2020-02-23T02:46:22.947Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:22.947Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.947Z [ERROR] TestHTTPAPI_AllowedNets_OSS.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:22.947Z [DEBUG] TestHTTPAPI_AllowedNets_OSS.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:22.947Z [WARN]  TestHTTPAPI_AllowedNets_OSS.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:22.949Z [WARN]  TestHTTPAPI_AllowedNets_OSS.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: consul server down
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: shutdown complete
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Stopping server: protocol=DNS address=127.0.0.1:16835 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Stopping server: protocol=DNS address=127.0.0.1:16835 network=udp
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Stopping server: protocol=HTTP address=127.0.0.1:16836 network=tcp
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:22.950Z [INFO]  TestHTTPAPI_AllowedNets_OSS: Endpoints down
> === RUN   TestHTTPServer_UnixSocket
> === PAUSE TestHTTPServer_UnixSocket
> === RUN   TestHTTPServer_UnixSocket_FileExists
> === PAUSE TestHTTPServer_UnixSocket_FileExists
> === RUN   TestHTTPServer_H2
> --- SKIP: TestHTTPServer_H2 (0.00s)
>     http_test.go:132: DM-skipped
> === RUN   TestSetIndex
> === PAUSE TestSetIndex
> === RUN   TestSetKnownLeader
> === PAUSE TestSetKnownLeader
> === RUN   TestSetLastContact
> === PAUSE TestSetLastContact
> === RUN   TestSetMeta
> === PAUSE TestSetMeta
> === RUN   TestHTTPAPI_BlockEndpoints
> === PAUSE TestHTTPAPI_BlockEndpoints
> === RUN   TestHTTPAPI_Ban_Nonprintable_Characters
> --- SKIP: TestHTTPAPI_Ban_Nonprintable_Characters (0.00s)
>     http_test.go:324: DM-skipped
> === RUN   TestHTTPAPI_Allow_Nonprintable_Characters_With_Flag
> --- SKIP: TestHTTPAPI_Allow_Nonprintable_Characters_With_Flag (0.00s)
>     http_test.go:344: DM-skipped
> === RUN   TestHTTPAPI_TranslateAddrHeader
> === PAUSE TestHTTPAPI_TranslateAddrHeader
> === RUN   TestHTTPAPIResponseHeaders
> === PAUSE TestHTTPAPIResponseHeaders
> === RUN   TestContentTypeIsJSON
> === PAUSE TestContentTypeIsJSON
> === RUN   TestHTTP_wrap_obfuscateLog
> === PAUSE TestHTTP_wrap_obfuscateLog
> === RUN   TestPrettyPrint
> === PAUSE TestPrettyPrint
> === RUN   TestPrettyPrintBare
> === PAUSE TestPrettyPrintBare
> === RUN   TestParseSource
> === PAUSE TestParseSource
> === RUN   TestParseCacheControl
> === RUN   TestParseCacheControl/empty_header
> === RUN   TestParseCacheControl/simple_max-age
> === RUN   TestParseCacheControl/zero_max-age
> === RUN   TestParseCacheControl/must-revalidate
> === RUN   TestParseCacheControl/mixes_age,_must-revalidate
> === RUN   TestParseCacheControl/quoted_max-age
> === RUN   TestParseCacheControl/mixed_case_max-age
> === RUN   TestParseCacheControl/simple_stale-if-error
> === RUN   TestParseCacheControl/combined_with_space
> === RUN   TestParseCacheControl/combined_no_space
> === RUN   TestParseCacheControl/unsupported_directive
> === RUN   TestParseCacheControl/mixed_unsupported_directive
> === RUN   TestParseCacheControl/garbage_value
> === RUN   TestParseCacheControl/garbage_value_with_quotes
> --- PASS: TestParseCacheControl (0.00s)
>     --- PASS: TestParseCacheControl/empty_header (0.00s)
>     --- PASS: TestParseCacheControl/simple_max-age (0.00s)
>     --- PASS: TestParseCacheControl/zero_max-age (0.00s)
>     --- PASS: TestParseCacheControl/must-revalidate (0.00s)
>     --- PASS: TestParseCacheControl/mixes_age,_must-revalidate (0.00s)
>     --- PASS: TestParseCacheControl/quoted_max-age (0.00s)
>     --- PASS: TestParseCacheControl/mixed_case_max-age (0.00s)
>     --- PASS: TestParseCacheControl/simple_stale-if-error (0.00s)
>     --- PASS: TestParseCacheControl/combined_with_space (0.00s)
>     --- PASS: TestParseCacheControl/combined_no_space (0.00s)
>     --- PASS: TestParseCacheControl/unsupported_directive (0.00s)
>     --- PASS: TestParseCacheControl/mixed_unsupported_directive (0.00s)
>     --- PASS: TestParseCacheControl/garbage_value (0.00s)
>     --- PASS: TestParseCacheControl/garbage_value_with_quotes (0.00s)
> === RUN   TestParseWait
> === PAUSE TestParseWait
> === RUN   TestPProfHandlers_EnableDebug
> === PAUSE TestPProfHandlers_EnableDebug
> === RUN   TestPProfHandlers_DisableDebugNoACLs
> === PAUSE TestPProfHandlers_DisableDebugNoACLs
> === RUN   TestPProfHandlers_ACLs
> === PAUSE TestPProfHandlers_ACLs
> === RUN   TestParseWait_InvalidTime
> === PAUSE TestParseWait_InvalidTime
> === RUN   TestParseWait_InvalidIndex
> === PAUSE TestParseWait_InvalidIndex
> === RUN   TestParseConsistency
> === PAUSE TestParseConsistency
> === RUN   TestParseConsistencyAndMaxStale
> --- PASS: TestParseConsistencyAndMaxStale (0.35s)
>     writer.go:29: 2020-02-23T02:46:22.960Z [WARN]  TestParseConsistencyAndMaxStale: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:22.960Z [DEBUG] TestParseConsistencyAndMaxStale.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:22.960Z [DEBUG] TestParseConsistencyAndMaxStale.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:23.011Z [INFO]  TestParseConsistencyAndMaxStale.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:381c0871-ab2e-dab2-25ab-729e24e9af59 Address:127.0.0.1:16846}]"
>     writer.go:29: 2020-02-23T02:46:23.011Z [INFO]  TestParseConsistencyAndMaxStale.server.raft: entering follower state: follower="Node at 127.0.0.1:16846 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:23.011Z [INFO]  TestParseConsistencyAndMaxStale.server.serf.wan: serf: EventMemberJoin: Node-381c0871-ab2e-dab2-25ab-729e24e9af59.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:23.012Z [INFO]  TestParseConsistencyAndMaxStale.server.serf.lan: serf: EventMemberJoin: Node-381c0871-ab2e-dab2-25ab-729e24e9af59 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:23.012Z [INFO]  TestParseConsistencyAndMaxStale.server: Handled event for server in area: event=member-join server=Node-381c0871-ab2e-dab2-25ab-729e24e9af59.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:23.012Z [INFO]  TestParseConsistencyAndMaxStale.server: Adding LAN server: server="Node-381c0871-ab2e-dab2-25ab-729e24e9af59 (Addr: tcp/127.0.0.1:16846) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:23.012Z [INFO]  TestParseConsistencyAndMaxStale: Started DNS server: address=127.0.0.1:16841 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.013Z [INFO]  TestParseConsistencyAndMaxStale: Started DNS server: address=127.0.0.1:16841 network=udp
>     writer.go:29: 2020-02-23T02:46:23.013Z [INFO]  TestParseConsistencyAndMaxStale: Started HTTP server: address=127.0.0.1:16842 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.013Z [INFO]  TestParseConsistencyAndMaxStale: started state syncer
>     writer.go:29: 2020-02-23T02:46:23.070Z [WARN]  TestParseConsistencyAndMaxStale.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:23.070Z [INFO]  TestParseConsistencyAndMaxStale.server.raft: entering candidate state: node="Node at 127.0.0.1:16846 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:23.074Z [DEBUG] TestParseConsistencyAndMaxStale.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:23.074Z [DEBUG] TestParseConsistencyAndMaxStale.server.raft: vote granted: from=381c0871-ab2e-dab2-25ab-729e24e9af59 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:23.074Z [INFO]  TestParseConsistencyAndMaxStale.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:23.074Z [INFO]  TestParseConsistencyAndMaxStale.server.raft: entering leader state: leader="Node at 127.0.0.1:16846 [Leader]"
>     writer.go:29: 2020-02-23T02:46:23.074Z [INFO]  TestParseConsistencyAndMaxStale.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:23.074Z [INFO]  TestParseConsistencyAndMaxStale.server: New leader elected: payload=Node-381c0871-ab2e-dab2-25ab-729e24e9af59
>     writer.go:29: 2020-02-23T02:46:23.081Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:23.089Z [INFO]  TestParseConsistencyAndMaxStale.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:23.089Z [INFO]  TestParseConsistencyAndMaxStale.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.089Z [DEBUG] TestParseConsistencyAndMaxStale.server: Skipping self join check for node since the cluster is too small: node=Node-381c0871-ab2e-dab2-25ab-729e24e9af59
>     writer.go:29: 2020-02-23T02:46:23.089Z [INFO]  TestParseConsistencyAndMaxStale.server: member joined, marking health alive: member=Node-381c0871-ab2e-dab2-25ab-729e24e9af59
>     writer.go:29: 2020-02-23T02:46:23.107Z [DEBUG] TestParseConsistencyAndMaxStale: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:23.110Z [INFO]  TestParseConsistencyAndMaxStale: Synced node info
>     writer.go:29: 2020-02-23T02:46:23.110Z [DEBUG] TestParseConsistencyAndMaxStale: Node info in sync
>     writer.go:29: 2020-02-23T02:46:23.299Z [INFO]  TestParseConsistencyAndMaxStale: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:23.299Z [INFO]  TestParseConsistencyAndMaxStale.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:23.299Z [DEBUG] TestParseConsistencyAndMaxStale.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.299Z [WARN]  TestParseConsistencyAndMaxStale.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:23.299Z [DEBUG] TestParseConsistencyAndMaxStale.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.301Z [WARN]  TestParseConsistencyAndMaxStale.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: consul server down
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: shutdown complete
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: Stopping server: protocol=DNS address=127.0.0.1:16841 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: Stopping server: protocol=DNS address=127.0.0.1:16841 network=udp
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: Stopping server: protocol=HTTP address=127.0.0.1:16842 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:23.303Z [INFO]  TestParseConsistencyAndMaxStale: Endpoints down
> === RUN   TestParseConsistency_Invalid
> === PAUSE TestParseConsistency_Invalid
> === RUN   TestACLResolution
> === PAUSE TestACLResolution
> === RUN   TestEnableWebUI
> === PAUSE TestEnableWebUI
> === RUN   TestAllowedNets
> --- SKIP: TestAllowedNets (0.00s)
>     http_test.go:1136: DM-skipped
> === RUN   TestHTTPServer_HandshakeTimeout
> === PAUSE TestHTTPServer_HandshakeTimeout
> === RUN   TestRPC_HTTPSMaxConnsPerClient
> === PAUSE TestRPC_HTTPSMaxConnsPerClient
> === RUN   TestIntentionsList_empty
> === PAUSE TestIntentionsList_empty
> === RUN   TestIntentionsList_values
> === PAUSE TestIntentionsList_values
> === RUN   TestIntentionsMatch_basic
> === PAUSE TestIntentionsMatch_basic
> === RUN   TestIntentionsMatch_noBy
> === PAUSE TestIntentionsMatch_noBy
> === RUN   TestIntentionsMatch_byInvalid
> === PAUSE TestIntentionsMatch_byInvalid
> === RUN   TestIntentionsMatch_noName
> === PAUSE TestIntentionsMatch_noName
> === RUN   TestIntentionsCheck_basic
> === PAUSE TestIntentionsCheck_basic
> === RUN   TestIntentionsCheck_noSource
> === PAUSE TestIntentionsCheck_noSource
> === RUN   TestIntentionsCheck_noDestination
> === PAUSE TestIntentionsCheck_noDestination
> === RUN   TestIntentionsCreate_good
> === PAUSE TestIntentionsCreate_good
> === RUN   TestIntentionsCreate_noBody
> === PAUSE TestIntentionsCreate_noBody
> === RUN   TestIntentionsSpecificGet_good
> === PAUSE TestIntentionsSpecificGet_good
> === RUN   TestIntentionsSpecificGet_invalidId
> === PAUSE TestIntentionsSpecificGet_invalidId
> === RUN   TestIntentionsSpecificUpdate_good
> === PAUSE TestIntentionsSpecificUpdate_good
> === RUN   TestIntentionsSpecificDelete_good
> === PAUSE TestIntentionsSpecificDelete_good
> === RUN   TestParseIntentionMatchEntry
> === RUN   TestParseIntentionMatchEntry/foo
> === RUN   TestParseIntentionMatchEntry/foo/bar
> === RUN   TestParseIntentionMatchEntry/foo/bar/baz
> --- PASS: TestParseIntentionMatchEntry (0.00s)
>     --- PASS: TestParseIntentionMatchEntry/foo (0.00s)
>     --- PASS: TestParseIntentionMatchEntry/foo/bar (0.00s)
>     --- PASS: TestParseIntentionMatchEntry/foo/bar/baz (0.00s)
> === RUN   TestAgent_LoadKeyrings
> === PAUSE TestAgent_LoadKeyrings
> === RUN   TestAgent_InmemKeyrings
> === PAUSE TestAgent_InmemKeyrings
> === RUN   TestAgent_InitKeyring
> === PAUSE TestAgent_InitKeyring
> === RUN   TestAgentKeyring_ACL
> === PAUSE TestAgentKeyring_ACL
> === RUN   TestValidateLocalOnly
> --- PASS: TestValidateLocalOnly (0.00s)
> === RUN   TestKVSEndpoint_PUT_GET_DELETE
> === PAUSE TestKVSEndpoint_PUT_GET_DELETE
> === RUN   TestKVSEndpoint_Recurse
> === PAUSE TestKVSEndpoint_Recurse
> === RUN   TestKVSEndpoint_DELETE_CAS
> === PAUSE TestKVSEndpoint_DELETE_CAS
> === RUN   TestKVSEndpoint_CAS
> === PAUSE TestKVSEndpoint_CAS
> === RUN   TestKVSEndpoint_ListKeys
> --- SKIP: TestKVSEndpoint_ListKeys (0.00s)
>     kvs_endpoint_test.go:294: DM-skipped
> === RUN   TestKVSEndpoint_AcquireRelease
> === PAUSE TestKVSEndpoint_AcquireRelease
> === RUN   TestKVSEndpoint_GET_Raw
> --- SKIP: TestKVSEndpoint_GET_Raw (0.00s)
>     kvs_endpoint_test.go:403: DM-skipped
> === RUN   TestKVSEndpoint_PUT_ConflictingFlags
> === PAUSE TestKVSEndpoint_PUT_ConflictingFlags
> === RUN   TestKVSEndpoint_DELETE_ConflictingFlags
> === PAUSE TestKVSEndpoint_DELETE_ConflictingFlags
> === RUN   TestNotifyGroup
> --- PASS: TestNotifyGroup (0.00s)
> === RUN   TestNotifyGroup_Clear
> --- PASS: TestNotifyGroup_Clear (0.00s)
> === RUN   TestOperator_RaftConfiguration
> === PAUSE TestOperator_RaftConfiguration
> === RUN   TestOperator_RaftPeer
> === PAUSE TestOperator_RaftPeer
> === RUN   TestOperator_KeyringInstall
> === PAUSE TestOperator_KeyringInstall
> === RUN   TestOperator_KeyringList
> === PAUSE TestOperator_KeyringList
> === RUN   TestOperator_KeyringRemove
> === PAUSE TestOperator_KeyringRemove
> === RUN   TestOperator_KeyringUse
> === PAUSE TestOperator_KeyringUse
> === RUN   TestOperator_Keyring_InvalidRelayFactor
> === PAUSE TestOperator_Keyring_InvalidRelayFactor
> === RUN   TestOperator_Keyring_LocalOnly
> === PAUSE TestOperator_Keyring_LocalOnly
> === RUN   TestOperator_AutopilotGetConfiguration
> === PAUSE TestOperator_AutopilotGetConfiguration
> === RUN   TestOperator_AutopilotSetConfiguration
> --- SKIP: TestOperator_AutopilotSetConfiguration (0.00s)
>     operator_endpoint_test.go:350: DM-skipped
> === RUN   TestOperator_AutopilotCASConfiguration
> === PAUSE TestOperator_AutopilotCASConfiguration
> === RUN   TestOperator_ServerHealth
> === PAUSE TestOperator_ServerHealth
> === RUN   TestOperator_ServerHealth_Unhealthy
> === PAUSE TestOperator_ServerHealth_Unhealthy
> === RUN   TestPreparedQuery_Create
> === PAUSE TestPreparedQuery_Create
> === RUN   TestPreparedQuery_List
> === PAUSE TestPreparedQuery_List
> === RUN   TestPreparedQuery_Execute
> === PAUSE TestPreparedQuery_Execute
> === RUN   TestPreparedQuery_ExecuteCached
> === PAUSE TestPreparedQuery_ExecuteCached
> === RUN   TestPreparedQuery_Explain
> === PAUSE TestPreparedQuery_Explain
> === RUN   TestPreparedQuery_Get
> === PAUSE TestPreparedQuery_Get
> === RUN   TestPreparedQuery_Update
> === PAUSE TestPreparedQuery_Update
> === RUN   TestPreparedQuery_Delete
> === PAUSE TestPreparedQuery_Delete
> === RUN   TestPreparedQuery_parseLimit
> === PAUSE TestPreparedQuery_parseLimit
> === RUN   TestPreparedQuery_Integration
> --- SKIP: TestPreparedQuery_Integration (0.00s)
>     prepared_query_endpoint_test.go:994: DM-skipped
> === RUN   TestRexecWriter
> --- SKIP: TestRexecWriter (0.00s)
>     remote_exec_test.go:28: DM-skipped
> === RUN   TestRemoteExecGetSpec
> === PAUSE TestRemoteExecGetSpec
> === RUN   TestRemoteExecGetSpec_ACLToken
> === PAUSE TestRemoteExecGetSpec_ACLToken
> === RUN   TestRemoteExecGetSpec_ACLAgentToken
> === PAUSE TestRemoteExecGetSpec_ACLAgentToken
> === RUN   TestRemoteExecGetSpec_ACLDeny
> === PAUSE TestRemoteExecGetSpec_ACLDeny
> === RUN   TestRemoteExecWrites
> === PAUSE TestRemoteExecWrites
> === RUN   TestRemoteExecWrites_ACLToken
> === PAUSE TestRemoteExecWrites_ACLToken
> === RUN   TestRemoteExecWrites_ACLAgentToken
> === PAUSE TestRemoteExecWrites_ACLAgentToken
> === RUN   TestRemoteExecWrites_ACLDeny
> === PAUSE TestRemoteExecWrites_ACLDeny
> === RUN   TestHandleRemoteExec
> === PAUSE TestHandleRemoteExec
> === RUN   TestHandleRemoteExecFailed
> === PAUSE TestHandleRemoteExecFailed
> === RUN   TestAgent_ServiceHTTPChecksNotification
> === PAUSE TestAgent_ServiceHTTPChecksNotification
> === RUN   TestServiceManager_RegisterService
> --- PASS: TestServiceManager_RegisterService (0.55s)
>     writer.go:29: 2020-02-23T02:46:23.318Z [WARN]  TestServiceManager_RegisterService: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:23.319Z [DEBUG] TestServiceManager_RegisterService.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:23.319Z [DEBUG] TestServiceManager_RegisterService.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:23.334Z [INFO]  TestServiceManager_RegisterService.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f03c6362-aa48-2e7a-a7fe-96ec49cf113b Address:127.0.0.1:16852}]"
>     writer.go:29: 2020-02-23T02:46:23.334Z [INFO]  TestServiceManager_RegisterService.server.raft: entering follower state: follower="Node at 127.0.0.1:16852 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:23.335Z [INFO]  TestServiceManager_RegisterService.server.serf.wan: serf: EventMemberJoin: Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:23.340Z [INFO]  TestServiceManager_RegisterService.server.serf.lan: serf: EventMemberJoin: Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:23.343Z [INFO]  TestServiceManager_RegisterService.server: Adding LAN server: server="Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b (Addr: tcp/127.0.0.1:16852) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:23.343Z [INFO]  TestServiceManager_RegisterService.server: Handled event for server in area: event=member-join server=Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:23.344Z [INFO]  TestServiceManager_RegisterService: Started DNS server: address=127.0.0.1:16847 network=udp
>     writer.go:29: 2020-02-23T02:46:23.344Z [INFO]  TestServiceManager_RegisterService: Started DNS server: address=127.0.0.1:16847 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.351Z [INFO]  TestServiceManager_RegisterService: Started HTTP server: address=127.0.0.1:16848 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.351Z [INFO]  TestServiceManager_RegisterService: started state syncer
>     writer.go:29: 2020-02-23T02:46:23.380Z [WARN]  TestServiceManager_RegisterService.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:23.380Z [INFO]  TestServiceManager_RegisterService.server.raft: entering candidate state: node="Node at 127.0.0.1:16852 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:23.383Z [DEBUG] TestServiceManager_RegisterService.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:23.383Z [DEBUG] TestServiceManager_RegisterService.server.raft: vote granted: from=f03c6362-aa48-2e7a-a7fe-96ec49cf113b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:23.383Z [INFO]  TestServiceManager_RegisterService.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:23.383Z [INFO]  TestServiceManager_RegisterService.server.raft: entering leader state: leader="Node at 127.0.0.1:16852 [Leader]"
>     writer.go:29: 2020-02-23T02:46:23.383Z [INFO]  TestServiceManager_RegisterService.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:23.383Z [INFO]  TestServiceManager_RegisterService.server: New leader elected: payload=Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b
>     writer.go:29: 2020-02-23T02:46:23.395Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:23.403Z [INFO]  TestServiceManager_RegisterService.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:23.403Z [INFO]  TestServiceManager_RegisterService.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.403Z [DEBUG] TestServiceManager_RegisterService.server: Skipping self join check for node since the cluster is too small: node=Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b
>     writer.go:29: 2020-02-23T02:46:23.403Z [INFO]  TestServiceManager_RegisterService.server: member joined, marking health alive: member=Node-f03c6362-aa48-2e7a-a7fe-96ec49cf113b
>     writer.go:29: 2020-02-23T02:46:23.428Z [DEBUG] TestServiceManager_RegisterService: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:23.430Z [INFO]  TestServiceManager_RegisterService: Synced node info
>     writer.go:29: 2020-02-23T02:46:23.430Z [DEBUG] TestServiceManager_RegisterService: Node info in sync
>     writer.go:29: 2020-02-23T02:46:23.795Z [INFO]  TestServiceManager_RegisterService: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:23.795Z [INFO]  TestServiceManager_RegisterService.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:23.795Z [DEBUG] TestServiceManager_RegisterService.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.795Z [WARN]  TestServiceManager_RegisterService.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:23.795Z [DEBUG] TestServiceManager_RegisterService.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.830Z [WARN]  TestServiceManager_RegisterService.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: consul server down
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: shutdown complete
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: Stopping server: protocol=DNS address=127.0.0.1:16847 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: Stopping server: protocol=DNS address=127.0.0.1:16847 network=udp
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: Stopping server: protocol=HTTP address=127.0.0.1:16848 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:23.859Z [INFO]  TestServiceManager_RegisterService: Endpoints down
> === RUN   TestServiceManager_RegisterSidecar
> --- PASS: TestServiceManager_RegisterSidecar (0.36s)
>     writer.go:29: 2020-02-23T02:46:23.867Z [WARN]  TestServiceManager_RegisterSidecar: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:23.867Z [DEBUG] TestServiceManager_RegisterSidecar.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:23.868Z [DEBUG] TestServiceManager_RegisterSidecar.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:23.893Z [INFO]  TestServiceManager_RegisterSidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc Address:127.0.0.1:16858}]"
>     writer.go:29: 2020-02-23T02:46:23.893Z [INFO]  TestServiceManager_RegisterSidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16858 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:23.894Z [INFO]  TestServiceManager_RegisterSidecar.server.serf.wan: serf: EventMemberJoin: Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:23.894Z [INFO]  TestServiceManager_RegisterSidecar.server.serf.lan: serf: EventMemberJoin: Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:23.894Z [INFO]  TestServiceManager_RegisterSidecar.server: Adding LAN server: server="Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc (Addr: tcp/127.0.0.1:16858) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:23.894Z [INFO]  TestServiceManager_RegisterSidecar: Started DNS server: address=127.0.0.1:16853 network=udp
>     writer.go:29: 2020-02-23T02:46:23.894Z [INFO]  TestServiceManager_RegisterSidecar.server: Handled event for server in area: event=member-join server=Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:23.894Z [INFO]  TestServiceManager_RegisterSidecar: Started DNS server: address=127.0.0.1:16853 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.895Z [INFO]  TestServiceManager_RegisterSidecar: Started HTTP server: address=127.0.0.1:16854 network=tcp
>     writer.go:29: 2020-02-23T02:46:23.895Z [INFO]  TestServiceManager_RegisterSidecar: started state syncer
>     writer.go:29: 2020-02-23T02:46:23.945Z [WARN]  TestServiceManager_RegisterSidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:23.945Z [INFO]  TestServiceManager_RegisterSidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16858 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:23.948Z [DEBUG] TestServiceManager_RegisterSidecar.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:23.948Z [DEBUG] TestServiceManager_RegisterSidecar.server.raft: vote granted: from=b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:23.948Z [INFO]  TestServiceManager_RegisterSidecar.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:23.948Z [INFO]  TestServiceManager_RegisterSidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16858 [Leader]"
>     writer.go:29: 2020-02-23T02:46:23.949Z [INFO]  TestServiceManager_RegisterSidecar.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:23.949Z [INFO]  TestServiceManager_RegisterSidecar.server: New leader elected: payload=Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc
>     writer.go:29: 2020-02-23T02:46:23.956Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:23.964Z [INFO]  TestServiceManager_RegisterSidecar.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:23.964Z [INFO]  TestServiceManager_RegisterSidecar.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:23.964Z [DEBUG] TestServiceManager_RegisterSidecar.server: Skipping self join check for node since the cluster is too small: node=Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc
>     writer.go:29: 2020-02-23T02:46:23.964Z [INFO]  TestServiceManager_RegisterSidecar.server: member joined, marking health alive: member=Node-b1b548e8-aed9-f2f4-a9b5-b71a3dccebfc
>     writer.go:29: 2020-02-23T02:46:24.059Z [DEBUG] TestServiceManager_RegisterSidecar: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:24.061Z [INFO]  TestServiceManager_RegisterSidecar: Synced node info
>     writer.go:29: 2020-02-23T02:46:24.061Z [DEBUG] TestServiceManager_RegisterSidecar: Node info in sync
>     writer.go:29: 2020-02-23T02:46:24.213Z [ERROR] TestServiceManager_RegisterSidecar.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:24.215Z [DEBUG] TestServiceManager_RegisterSidecar: added local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:24.215Z [INFO]  TestServiceManager_RegisterSidecar: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:24.215Z [INFO]  TestServiceManager_RegisterSidecar.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:24.215Z [DEBUG] TestServiceManager_RegisterSidecar.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.215Z [WARN]  TestServiceManager_RegisterSidecar.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:24.215Z [DEBUG] TestServiceManager_RegisterSidecar.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.217Z [WARN]  TestServiceManager_RegisterSidecar.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: consul server down
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: shutdown complete
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: Stopping server: protocol=DNS address=127.0.0.1:16853 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: Stopping server: protocol=DNS address=127.0.0.1:16853 network=udp
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: Stopping server: protocol=HTTP address=127.0.0.1:16854 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:24.219Z [INFO]  TestServiceManager_RegisterSidecar: Endpoints down
> === RUN   TestServiceManager_RegisterMeshGateway
> --- PASS: TestServiceManager_RegisterMeshGateway (0.39s)
>     writer.go:29: 2020-02-23T02:46:24.226Z [WARN]  TestServiceManager_RegisterMeshGateway: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:24.226Z [DEBUG] TestServiceManager_RegisterMeshGateway.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:24.227Z [DEBUG] TestServiceManager_RegisterMeshGateway.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:24.346Z [INFO]  TestServiceManager_RegisterMeshGateway.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4c01fbbb-e3fc-9945-1713-c0d8280a7a8e Address:127.0.0.1:16864}]"
>     writer.go:29: 2020-02-23T02:46:24.347Z [INFO]  TestServiceManager_RegisterMeshGateway.server.serf.wan: serf: EventMemberJoin: Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:24.347Z [INFO]  TestServiceManager_RegisterMeshGateway.server.serf.lan: serf: EventMemberJoin: Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:24.347Z [INFO]  TestServiceManager_RegisterMeshGateway: Started DNS server: address=127.0.0.1:16859 network=udp
>     writer.go:29: 2020-02-23T02:46:24.347Z [INFO]  TestServiceManager_RegisterMeshGateway.server.raft: entering follower state: follower="Node at 127.0.0.1:16864 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:24.348Z [INFO]  TestServiceManager_RegisterMeshGateway.server: Adding LAN server: server="Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e (Addr: tcp/127.0.0.1:16864) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:24.348Z [INFO]  TestServiceManager_RegisterMeshGateway.server: Handled event for server in area: event=member-join server=Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:24.348Z [INFO]  TestServiceManager_RegisterMeshGateway: Started DNS server: address=127.0.0.1:16859 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.348Z [INFO]  TestServiceManager_RegisterMeshGateway: Started HTTP server: address=127.0.0.1:16860 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.348Z [INFO]  TestServiceManager_RegisterMeshGateway: started state syncer
>     writer.go:29: 2020-02-23T02:46:24.403Z [WARN]  TestServiceManager_RegisterMeshGateway.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:24.403Z [INFO]  TestServiceManager_RegisterMeshGateway.server.raft: entering candidate state: node="Node at 127.0.0.1:16864 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:24.407Z [DEBUG] TestServiceManager_RegisterMeshGateway.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:24.407Z [DEBUG] TestServiceManager_RegisterMeshGateway.server.raft: vote granted: from=4c01fbbb-e3fc-9945-1713-c0d8280a7a8e term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:24.407Z [INFO]  TestServiceManager_RegisterMeshGateway.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:24.407Z [INFO]  TestServiceManager_RegisterMeshGateway.server.raft: entering leader state: leader="Node at 127.0.0.1:16864 [Leader]"
>     writer.go:29: 2020-02-23T02:46:24.407Z [INFO]  TestServiceManager_RegisterMeshGateway.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:24.407Z [INFO]  TestServiceManager_RegisterMeshGateway.server: New leader elected: payload=Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e
>     writer.go:29: 2020-02-23T02:46:24.414Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:24.422Z [INFO]  TestServiceManager_RegisterMeshGateway.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:24.422Z [INFO]  TestServiceManager_RegisterMeshGateway.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.422Z [DEBUG] TestServiceManager_RegisterMeshGateway.server: Skipping self join check for node since the cluster is too small: node=Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e
>     writer.go:29: 2020-02-23T02:46:24.422Z [INFO]  TestServiceManager_RegisterMeshGateway.server: member joined, marking health alive: member=Node-4c01fbbb-e3fc-9945-1713-c0d8280a7a8e
>     writer.go:29: 2020-02-23T02:46:24.603Z [DEBUG] TestServiceManager_RegisterMeshGateway: added local registration for service: service=mesh-gateway
>     writer.go:29: 2020-02-23T02:46:24.603Z [INFO]  TestServiceManager_RegisterMeshGateway: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:24.603Z [INFO]  TestServiceManager_RegisterMeshGateway.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:24.603Z [DEBUG] TestServiceManager_RegisterMeshGateway.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.603Z [WARN]  TestServiceManager_RegisterMeshGateway.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:24.603Z [ERROR] TestServiceManager_RegisterMeshGateway.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:24.603Z [DEBUG] TestServiceManager_RegisterMeshGateway.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.605Z [WARN]  TestServiceManager_RegisterMeshGateway.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:24.606Z [INFO]  TestServiceManager_RegisterMeshGateway.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:24.606Z [INFO]  TestServiceManager_RegisterMeshGateway: consul server down
>     writer.go:29: 2020-02-23T02:46:24.606Z [INFO]  TestServiceManager_RegisterMeshGateway: shutdown complete
>     writer.go:29: 2020-02-23T02:46:24.606Z [INFO]  TestServiceManager_RegisterMeshGateway: Stopping server: protocol=DNS address=127.0.0.1:16859 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.606Z [INFO]  TestServiceManager_RegisterMeshGateway: Stopping server: protocol=DNS address=127.0.0.1:16859 network=udp
>     writer.go:29: 2020-02-23T02:46:24.607Z [INFO]  TestServiceManager_RegisterMeshGateway: Stopping server: protocol=HTTP address=127.0.0.1:16860 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.607Z [INFO]  TestServiceManager_RegisterMeshGateway: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:24.607Z [INFO]  TestServiceManager_RegisterMeshGateway: Endpoints down
> === RUN   TestServiceManager_PersistService_API
> === PAUSE TestServiceManager_PersistService_API
> === RUN   TestServiceManager_PersistService_ConfigFiles
> === PAUSE TestServiceManager_PersistService_ConfigFiles
> === RUN   TestServiceManager_Disabled
> --- PASS: TestServiceManager_Disabled (0.36s)
>     writer.go:29: 2020-02-23T02:46:24.614Z [WARN]  TestServiceManager_Disabled: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:24.614Z [DEBUG] TestServiceManager_Disabled.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:24.615Z [DEBUG] TestServiceManager_Disabled.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:24.624Z [INFO]  TestServiceManager_Disabled.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1fa03dd8-5394-f9c8-ad97-2e241ac33b0b Address:127.0.0.1:16870}]"
>     writer.go:29: 2020-02-23T02:46:24.624Z [INFO]  TestServiceManager_Disabled.server.raft: entering follower state: follower="Node at 127.0.0.1:16870 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:24.625Z [INFO]  TestServiceManager_Disabled.server.serf.wan: serf: EventMemberJoin: Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:24.625Z [INFO]  TestServiceManager_Disabled.server.serf.lan: serf: EventMemberJoin: Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:24.626Z [INFO]  TestServiceManager_Disabled.server: Handled event for server in area: event=member-join server=Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:24.626Z [INFO]  TestServiceManager_Disabled.server: Adding LAN server: server="Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b (Addr: tcp/127.0.0.1:16870) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:24.626Z [INFO]  TestServiceManager_Disabled: Started DNS server: address=127.0.0.1:16865 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.626Z [INFO]  TestServiceManager_Disabled: Started DNS server: address=127.0.0.1:16865 network=udp
>     writer.go:29: 2020-02-23T02:46:24.626Z [INFO]  TestServiceManager_Disabled: Started HTTP server: address=127.0.0.1:16866 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.626Z [INFO]  TestServiceManager_Disabled: started state syncer
>     writer.go:29: 2020-02-23T02:46:24.668Z [WARN]  TestServiceManager_Disabled.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:24.668Z [INFO]  TestServiceManager_Disabled.server.raft: entering candidate state: node="Node at 127.0.0.1:16870 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:24.714Z [DEBUG] TestServiceManager_Disabled.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:24.714Z [DEBUG] TestServiceManager_Disabled.server.raft: vote granted: from=1fa03dd8-5394-f9c8-ad97-2e241ac33b0b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:24.714Z [INFO]  TestServiceManager_Disabled.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:24.714Z [INFO]  TestServiceManager_Disabled.server.raft: entering leader state: leader="Node at 127.0.0.1:16870 [Leader]"
>     writer.go:29: 2020-02-23T02:46:24.714Z [INFO]  TestServiceManager_Disabled.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:24.714Z [INFO]  TestServiceManager_Disabled.server: New leader elected: payload=Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b
>     writer.go:29: 2020-02-23T02:46:24.722Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:24.735Z [INFO]  TestServiceManager_Disabled.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:24.735Z [INFO]  TestServiceManager_Disabled.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.735Z [DEBUG] TestServiceManager_Disabled.server: Skipping self join check for node since the cluster is too small: node=Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b
>     writer.go:29: 2020-02-23T02:46:24.735Z [INFO]  TestServiceManager_Disabled.server: member joined, marking health alive: member=Node-1fa03dd8-5394-f9c8-ad97-2e241ac33b0b
>     writer.go:29: 2020-02-23T02:46:24.952Z [INFO]  TestServiceManager_Disabled: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:24.952Z [INFO]  TestServiceManager_Disabled.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:24.952Z [DEBUG] TestServiceManager_Disabled.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.952Z [WARN]  TestServiceManager_Disabled.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:24.952Z [ERROR] TestServiceManager_Disabled.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:24.952Z [DEBUG] TestServiceManager_Disabled.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:24.964Z [WARN]  TestServiceManager_Disabled.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: consul server down
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: shutdown complete
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: Stopping server: protocol=DNS address=127.0.0.1:16865 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: Stopping server: protocol=DNS address=127.0.0.1:16865 network=udp
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: Stopping server: protocol=HTTP address=127.0.0.1:16866 network=tcp
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:24.971Z [INFO]  TestServiceManager_Disabled: Endpoints down
> === RUN   TestSessionCreate
> === PAUSE TestSessionCreate
> === RUN   TestSessionCreate_NodeChecks
> === PAUSE TestSessionCreate_NodeChecks
> === RUN   TestSessionCreate_Delete
> === PAUSE TestSessionCreate_Delete
> === RUN   TestSessionCreate_DefaultCheck
> === PAUSE TestSessionCreate_DefaultCheck
> === RUN   TestSessionCreate_NoCheck
> === PAUSE TestSessionCreate_NoCheck
> === RUN   TestSessionDestroy
> === PAUSE TestSessionDestroy
> === RUN   TestSessionCustomTTL
> === PAUSE TestSessionCustomTTL
> === RUN   TestSessionTTLRenew
> --- SKIP: TestSessionTTLRenew (0.00s)
>     session_endpoint_test.go:496: DM-skipped
> === RUN   TestSessionGet
> === PAUSE TestSessionGet
> === RUN   TestSessionList
> === RUN   TestSessionList/#00
> === RUN   TestSessionList/#01
> --- PASS: TestSessionList (0.52s)
>     --- PASS: TestSessionList/#00 (0.13s)
>         writer.go:29: 2020-02-23T02:46:24.980Z [WARN]  TestSessionList/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:24.980Z [DEBUG] TestSessionList/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:24.980Z [DEBUG] TestSessionList/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:24.997Z [INFO]  TestSessionList/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b8a2caea-ead5-2d97-1b6a-5163fbace621 Address:127.0.0.1:16876}]"
>         writer.go:29: 2020-02-23T02:46:24.997Z [INFO]  TestSessionList/#00.server.serf.wan: serf: EventMemberJoin: Node-b8a2caea-ead5-2d97-1b6a-5163fbace621.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:24.997Z [INFO]  TestSessionList/#00.server.serf.lan: serf: EventMemberJoin: Node-b8a2caea-ead5-2d97-1b6a-5163fbace621 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00: Started DNS server: address=127.0.0.1:16871 network=udp
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:16876 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00.server: Adding LAN server: server="Node-b8a2caea-ead5-2d97-1b6a-5163fbace621 (Addr: tcp/127.0.0.1:16876) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00.server: Handled event for server in area: event=member-join server=Node-b8a2caea-ead5-2d97-1b6a-5163fbace621.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00: Started DNS server: address=127.0.0.1:16871 network=tcp
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00: Started HTTP server: address=127.0.0.1:16872 network=tcp
>         writer.go:29: 2020-02-23T02:46:24.998Z [INFO]  TestSessionList/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:25.065Z [WARN]  TestSessionList/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:25.065Z [INFO]  TestSessionList/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:16876 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:25.069Z [DEBUG] TestSessionList/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:25.069Z [DEBUG] TestSessionList/#00.server.raft: vote granted: from=b8a2caea-ead5-2d97-1b6a-5163fbace621 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:25.069Z [INFO]  TestSessionList/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:25.069Z [INFO]  TestSessionList/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:16876 [Leader]"
>         writer.go:29: 2020-02-23T02:46:25.069Z [INFO]  TestSessionList/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:25.069Z [INFO]  TestSessionList/#00.server: New leader elected: payload=Node-b8a2caea-ead5-2d97-1b6a-5163fbace621
>         writer.go:29: 2020-02-23T02:46:25.076Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:25.084Z [INFO]  TestSessionList/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:25.084Z [INFO]  TestSessionList/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.084Z [DEBUG] TestSessionList/#00.server: Skipping self join check for node since the cluster is too small: node=Node-b8a2caea-ead5-2d97-1b6a-5163fbace621
>         writer.go:29: 2020-02-23T02:46:25.084Z [INFO]  TestSessionList/#00.server: member joined, marking health alive: member=Node-b8a2caea-ead5-2d97-1b6a-5163fbace621
>         writer.go:29: 2020-02-23T02:46:25.103Z [INFO]  TestSessionList/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:25.103Z [INFO]  TestSessionList/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:25.103Z [DEBUG] TestSessionList/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.103Z [WARN]  TestSessionList/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.103Z [ERROR] TestSessionList/#00.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:25.103Z [DEBUG] TestSessionList/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.105Z [WARN]  TestSessionList/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: Stopping server: protocol=DNS address=127.0.0.1:16871 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: Stopping server: protocol=DNS address=127.0.0.1:16871 network=udp
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: Stopping server: protocol=HTTP address=127.0.0.1:16872 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:25.106Z [INFO]  TestSessionList/#00: Endpoints down
>     --- PASS: TestSessionList/#01 (0.39s)
>         writer.go:29: 2020-02-23T02:46:25.114Z [WARN]  TestSessionList/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:25.114Z [DEBUG] TestSessionList/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:25.114Z [DEBUG] TestSessionList/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:25.126Z [INFO]  TestSessionList/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2130dfe3-e75c-4e72-c33e-efd77d11e693 Address:127.0.0.1:16882}]"
>         writer.go:29: 2020-02-23T02:46:25.127Z [INFO]  TestSessionList/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16882 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:25.127Z [INFO]  TestSessionList/#01.server.serf.wan: serf: EventMemberJoin: Node-2130dfe3-e75c-4e72-c33e-efd77d11e693.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.127Z [INFO]  TestSessionList/#01.server.serf.lan: serf: EventMemberJoin: Node-2130dfe3-e75c-4e72-c33e-efd77d11e693 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.128Z [INFO]  TestSessionList/#01.server: Adding LAN server: server="Node-2130dfe3-e75c-4e72-c33e-efd77d11e693 (Addr: tcp/127.0.0.1:16882) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:25.128Z [INFO]  TestSessionList/#01: Started DNS server: address=127.0.0.1:16877 network=udp
>         writer.go:29: 2020-02-23T02:46:25.128Z [INFO]  TestSessionList/#01.server: Handled event for server in area: event=member-join server=Node-2130dfe3-e75c-4e72-c33e-efd77d11e693.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:25.128Z [INFO]  TestSessionList/#01: Started DNS server: address=127.0.0.1:16877 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.128Z [INFO]  TestSessionList/#01: Started HTTP server: address=127.0.0.1:16878 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.128Z [INFO]  TestSessionList/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:25.167Z [WARN]  TestSessionList/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:25.167Z [INFO]  TestSessionList/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16882 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:25.170Z [DEBUG] TestSessionList/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:25.170Z [DEBUG] TestSessionList/#01.server.raft: vote granted: from=2130dfe3-e75c-4e72-c33e-efd77d11e693 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:25.170Z [INFO]  TestSessionList/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:25.170Z [INFO]  TestSessionList/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16882 [Leader]"
>         writer.go:29: 2020-02-23T02:46:25.170Z [INFO]  TestSessionList/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:25.171Z [INFO]  TestSessionList/#01.server: New leader elected: payload=Node-2130dfe3-e75c-4e72-c33e-efd77d11e693
>         writer.go:29: 2020-02-23T02:46:25.177Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:25.185Z [INFO]  TestSessionList/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:25.185Z [INFO]  TestSessionList/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.185Z [DEBUG] TestSessionList/#01.server: Skipping self join check for node since the cluster is too small: node=Node-2130dfe3-e75c-4e72-c33e-efd77d11e693
>         writer.go:29: 2020-02-23T02:46:25.185Z [INFO]  TestSessionList/#01.server: member joined, marking health alive: member=Node-2130dfe3-e75c-4e72-c33e-efd77d11e693
>         writer.go:29: 2020-02-23T02:46:25.492Z [INFO]  TestSessionList/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:25.492Z [INFO]  TestSessionList/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:25.492Z [DEBUG] TestSessionList/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.492Z [WARN]  TestSessionList/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.492Z [ERROR] TestSessionList/#01.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:25.492Z [DEBUG] TestSessionList/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.494Z [WARN]  TestSessionList/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.495Z [INFO]  TestSessionList/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:25.495Z [INFO]  TestSessionList/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:25.496Z [INFO]  TestSessionList/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:25.496Z [INFO]  TestSessionList/#01: Stopping server: protocol=DNS address=127.0.0.1:16877 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.496Z [INFO]  TestSessionList/#01: Stopping server: protocol=DNS address=127.0.0.1:16877 network=udp
>         writer.go:29: 2020-02-23T02:46:25.496Z [INFO]  TestSessionList/#01: Stopping server: protocol=HTTP address=127.0.0.1:16878 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.496Z [INFO]  TestSessionList/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:25.496Z [INFO]  TestSessionList/#01: Endpoints down
> === RUN   TestSessionsForNode
> --- SKIP: TestSessionsForNode (0.00s)
>     session_endpoint_test.go:678: DM-skipped
> === RUN   TestSessionDeleteDestroy
> === PAUSE TestSessionDeleteDestroy
> === RUN   TestAgent_sidecarServiceFromNodeService
> === RUN   TestAgent_sidecarServiceFromNodeService/no_sidecar
> === RUN   TestAgent_sidecarServiceFromNodeService/all_the_defaults
> === RUN   TestAgent_sidecarServiceFromNodeService/all_the_allowed_overrides
> === RUN   TestAgent_sidecarServiceFromNodeService/no_auto_ports_available
> === RUN   TestAgent_sidecarServiceFromNodeService/auto_ports_disabled
> === RUN   TestAgent_sidecarServiceFromNodeService/inherit_tags_and_meta
> === RUN   TestAgent_sidecarServiceFromNodeService/invalid_check_type
> === RUN   TestAgent_sidecarServiceFromNodeService/invalid_meta
> === RUN   TestAgent_sidecarServiceFromNodeService/re-registering_same_sidecar_with_no_port_should_pick_same_one
> --- PASS: TestAgent_sidecarServiceFromNodeService (2.64s)
>     --- PASS: TestAgent_sidecarServiceFromNodeService/no_sidecar (0.10s)
>         writer.go:29: 2020-02-23T02:46:25.514Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:25.514Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:25.514Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:25.528Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a Address:127.0.0.1:16888}]"
>         writer.go:29: 2020-02-23T02:46:25.528Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16888 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:25.529Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones.server: Adding LAN server: server="Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a (Addr: tcp/127.0.0.1:16888) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones: Started DNS server: address=127.0.0.1:16883 network=udp
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones: Started DNS server: address=127.0.0.1:16883 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16884 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.530Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:25.566Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:25.566Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16888 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:25.569Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:25.569Z [DEBUG] jones.server.raft: vote granted: from=de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:25.569Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:25.569Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16888 [Leader]"
>         writer.go:29: 2020-02-23T02:46:25.569Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:25.569Z [INFO]  jones.server: New leader elected: payload=Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a
>         writer.go:29: 2020-02-23T02:46:25.581Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:25.588Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:25.588Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:25.588Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.588Z [ERROR] jones.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:25.589Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:25.589Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.589Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a
>         writer.go:29: 2020-02-23T02:46:25.589Z [INFO]  jones.server: member joined, marking health alive: member=Node-de6bcb81-bb7c-f5a7-9e0b-96a11fbbe68a
>         writer.go:29: 2020-02-23T02:46:25.590Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.591Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.591Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16883 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16883 network=udp
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16884 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:25.592Z [INFO]  jones: Endpoints down
>     --- PASS: TestAgent_sidecarServiceFromNodeService/all_the_defaults (0.13s)
>         writer.go:29: 2020-02-23T02:46:25.625Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:25.625Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:25.625Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:25.641Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4e54429c-c1d9-cf7b-8180-4d204aae4b35 Address:127.0.0.1:16894}]"
>         writer.go:29: 2020-02-23T02:46:25.641Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.641Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones: Started DNS server: address=127.0.0.1:16889 network=udp
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16894 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones.server: Adding LAN server: server="Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35 (Addr: tcp/127.0.0.1:16894) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones: Started DNS server: address=127.0.0.1:16889 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16890 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.642Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:25.704Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:25.704Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16894 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:25.707Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:25.707Z [DEBUG] jones.server.raft: vote granted: from=4e54429c-c1d9-cf7b-8180-4d204aae4b35 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:25.707Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:25.707Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16894 [Leader]"
>         writer.go:29: 2020-02-23T02:46:25.708Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:25.708Z [INFO]  jones.server: New leader elected: payload=Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35
>         writer.go:29: 2020-02-23T02:46:25.714Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:25.719Z [INFO]  jones: Synced node info
>         writer.go:29: 2020-02-23T02:46:25.719Z [DEBUG] jones: Node info in sync
>         writer.go:29: 2020-02-23T02:46:25.722Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:25.722Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:25.722Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.724Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:25.724Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.724Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35
>         writer.go:29: 2020-02-23T02:46:25.724Z [INFO]  jones.server: member joined, marking health alive: member=Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35
>         writer.go:29: 2020-02-23T02:46:25.724Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.726Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:25.727Z [ERROR] jones.server: failed to reconcile member: member="{Node-4e54429c-c1d9-cf7b-8180-4d204aae4b35 127.0.0.1 16892 map[acls:0 bootstrap:1 build:1.7.0: dc:dc1 id:4e54429c-c1d9-cf7b-8180-4d204aae4b35 port:16894 raft_vsn:3 role:consul segment: vsn:2 vsn_max:3 vsn_min:2 wan_join_port:16893] alive 1 5 2 2 5 4}" error="leadership lost while committing log"
>         writer.go:29: 2020-02-23T02:46:25.727Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16889 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.727Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16889 network=udp
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16890 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:25.727Z [INFO]  jones: Endpoints down
>     --- PASS: TestAgent_sidecarServiceFromNodeService/all_the_allowed_overrides (0.13s)
>         writer.go:29: 2020-02-23T02:46:25.735Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:25.735Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:25.735Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:25.744Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8cd8be80-233d-74d6-cb18-66eb6bade252 Address:127.0.0.1:16900}]"
>         writer.go:29: 2020-02-23T02:46:25.744Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16900 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:25.744Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-8cd8be80-233d-74d6-cb18-66eb6bade252.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.745Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-8cd8be80-233d-74d6-cb18-66eb6bade252 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.745Z [INFO]  jones.server: Adding LAN server: server="Node-8cd8be80-233d-74d6-cb18-66eb6bade252 (Addr: tcp/127.0.0.1:16900) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:25.745Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-8cd8be80-233d-74d6-cb18-66eb6bade252.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:25.745Z [INFO]  jones: Started DNS server: address=127.0.0.1:16895 network=udp
>         writer.go:29: 2020-02-23T02:46:25.745Z [INFO]  jones: Started DNS server: address=127.0.0.1:16895 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.746Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16896 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.746Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:25.799Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:25.799Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16900 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:25.840Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:25.840Z [DEBUG] jones.server.raft: vote granted: from=8cd8be80-233d-74d6-cb18-66eb6bade252 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:25.840Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:25.840Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16900 [Leader]"
>         writer.go:29: 2020-02-23T02:46:25.840Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:25.840Z [INFO]  jones.server: New leader elected: payload=Node-8cd8be80-233d-74d6-cb18-66eb6bade252
>         writer.go:29: 2020-02-23T02:46:25.847Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:25.853Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:25.853Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:25.853Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.853Z [ERROR] jones.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:25.854Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:25.855Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:25.855Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.855Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-8cd8be80-233d-74d6-cb18-66eb6bade252
>         writer.go:29: 2020-02-23T02:46:25.855Z [INFO]  jones.server: member joined, marking health alive: member=Node-8cd8be80-233d-74d6-cb18-66eb6bade252
>         writer.go:29: 2020-02-23T02:46:25.856Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:25.858Z [ERROR] jones.server: failed to reconcile member: member="{Node-8cd8be80-233d-74d6-cb18-66eb6bade252 127.0.0.1 16898 map[acls:0 bootstrap:1 build:1.7.0: dc:dc1 id:8cd8be80-233d-74d6-cb18-66eb6bade252 port:16900 raft_vsn:3 role:consul segment: vsn:2 vsn_max:3 vsn_min:2 wan_join_port:16899] alive 1 5 2 2 5 4}" error="leadership lost while committing log"
>         writer.go:29: 2020-02-23T02:46:25.858Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16895 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16895 network=udp
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16896 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:25.858Z [INFO]  jones: Endpoints down
>     testlog.go:86: 2020-02-23T02:46:25.862Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>     --- PASS: TestAgent_sidecarServiceFromNodeService/no_auto_ports_available (0.36s)
>         writer.go:29: 2020-02-23T02:46:25.865Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:25.865Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:25.865Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:25.874Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:58c62e2b-a6b0-4849-3548-561e5a76802c Address:127.0.0.1:16906}]"
>         writer.go:29: 2020-02-23T02:46:25.874Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16906 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:25.874Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-58c62e2b-a6b0-4849-3548-561e5a76802c.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-58c62e2b-a6b0-4849-3548-561e5a76802c 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones: Started DNS server: address=127.0.0.1:16901 network=udp
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones.server: Adding LAN server: server="Node-58c62e2b-a6b0-4849-3548-561e5a76802c (Addr: tcp/127.0.0.1:16906) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-58c62e2b-a6b0-4849-3548-561e5a76802c.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones: Started DNS server: address=127.0.0.1:16901 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16902 network=tcp
>         writer.go:29: 2020-02-23T02:46:25.875Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:25.926Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:25.927Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16906 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:25.992Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:25.992Z [DEBUG] jones.server.raft: vote granted: from=58c62e2b-a6b0-4849-3548-561e5a76802c term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:25.992Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:25.992Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16906 [Leader]"
>         writer.go:29: 2020-02-23T02:46:25.992Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:25.992Z [INFO]  jones.server: New leader elected: payload=Node-58c62e2b-a6b0-4849-3548-561e5a76802c
>         writer.go:29: 2020-02-23T02:46:26.012Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:26.020Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:26.020Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.020Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-58c62e2b-a6b0-4849-3548-561e5a76802c
>         writer.go:29: 2020-02-23T02:46:26.020Z [INFO]  jones.server: member joined, marking health alive: member=Node-58c62e2b-a6b0-4849-3548-561e5a76802c
>         writer.go:29: 2020-02-23T02:46:26.212Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:26.212Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:26.212Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.212Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:26.213Z [ERROR] jones.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:26.213Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.214Z [ERROR] jones.proxycfg: watch error: id=service-http-checks: error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>         writer.go:29: 2020-02-23T02:46:26.215Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:26.216Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:26.216Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:26.216Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:26.216Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16901 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.216Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16901 network=udp
>         writer.go:29: 2020-02-23T02:46:26.217Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16902 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.217Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:26.217Z [INFO]  jones: Endpoints down
>     testlog.go:86: 2020-02-23T02:46:26.264Z [ERROR] jones.proxycfg: watch error: id=leaf error="error filling agent cache: internal error: CA provider is nil"
>     testlog.go:86: 2020-02-23T02:46:26.264Z [ERROR] jones.proxycfg: watch error: id=leaf error="error filling agent cache: No cluster leader"
>     testlog.go:86: 2020-02-23T02:46:26.264Z [ERROR] jones.proxycfg: watch error: id=leaf error="error filling agent cache: No cluster leader"
>     testlog.go:86: 2020-02-23T02:46:26.264Z [ERROR] jones.proxycfg: watch error: id=leaf error="error filling agent cache: No cluster leader"
>     --- PASS: TestAgent_sidecarServiceFromNodeService/auto_ports_disabled (0.27s)
>         writer.go:29: 2020-02-23T02:46:26.225Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:26.225Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:26.225Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:26.234Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:470e4be2-9c00-c36a-d01c-8364944be1be Address:127.0.0.1:16912}]"
>         writer.go:29: 2020-02-23T02:46:26.235Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-470e4be2-9c00-c36a-d01c-8364944be1be.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:26.235Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-470e4be2-9c00-c36a-d01c-8364944be1be 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:26.235Z [INFO]  jones: Started DNS server: address=127.0.0.1:16907 network=udp
>         writer.go:29: 2020-02-23T02:46:26.235Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16912 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:26.236Z [INFO]  jones.server: Adding LAN server: server="Node-470e4be2-9c00-c36a-d01c-8364944be1be (Addr: tcp/127.0.0.1:16912) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:26.236Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-470e4be2-9c00-c36a-d01c-8364944be1be.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:26.236Z [INFO]  jones: Started DNS server: address=127.0.0.1:16907 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.236Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16908 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.236Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:26.293Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:26.293Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16912 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:26.343Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:26.343Z [DEBUG] jones.server.raft: vote granted: from=470e4be2-9c00-c36a-d01c-8364944be1be term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:26.343Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:26.343Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16912 [Leader]"
>         writer.go:29: 2020-02-23T02:46:26.343Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:26.343Z [INFO]  jones.server: New leader elected: payload=Node-470e4be2-9c00-c36a-d01c-8364944be1be
>         writer.go:29: 2020-02-23T02:46:26.383Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:26.391Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:26.391Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.391Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-470e4be2-9c00-c36a-d01c-8364944be1be
>         writer.go:29: 2020-02-23T02:46:26.391Z [INFO]  jones.server: member joined, marking health alive: member=Node-470e4be2-9c00-c36a-d01c-8364944be1be
>         writer.go:29: 2020-02-23T02:46:26.482Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:26.482Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:26.482Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.482Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:26.482Z [ERROR] jones.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:26.482Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.484Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:26.485Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16907 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16907 network=udp
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16908 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:26.486Z [INFO]  jones: Endpoints down
>     --- PASS: TestAgent_sidecarServiceFromNodeService/inherit_tags_and_meta (0.44s)
>         writer.go:29: 2020-02-23T02:46:26.494Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:26.494Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:26.494Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:26.503Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:afc5e099-caa5-a358-2fb9-27a0a2fd99fa Address:127.0.0.1:16918}]"
>         writer.go:29: 2020-02-23T02:46:26.503Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16918 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:26.504Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:26.504Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:26.504Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:26.504Z [INFO]  jones.server: Adding LAN server: server="Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa (Addr: tcp/127.0.0.1:16918) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:26.505Z [INFO]  jones: Started DNS server: address=127.0.0.1:16913 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.505Z [INFO]  jones: Started DNS server: address=127.0.0.1:16913 network=udp
>         writer.go:29: 2020-02-23T02:46:26.505Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16914 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.505Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:26.568Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:26.568Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16918 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:26.571Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:26.571Z [DEBUG] jones.server.raft: vote granted: from=afc5e099-caa5-a358-2fb9-27a0a2fd99fa term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:26.571Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:26.571Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16918 [Leader]"
>         writer.go:29: 2020-02-23T02:46:26.571Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:26.571Z [INFO]  jones.server: New leader elected: payload=Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa
>         writer.go:29: 2020-02-23T02:46:26.578Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:26.586Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:26.586Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.586Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa
>         writer.go:29: 2020-02-23T02:46:26.586Z [INFO]  jones.server: member joined, marking health alive: member=Node-afc5e099-caa5-a358-2fb9-27a0a2fd99fa
>         writer.go:29: 2020-02-23T02:46:26.613Z [DEBUG] jones: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:26.616Z [INFO]  jones: Synced node info
>         writer.go:29: 2020-02-23T02:46:26.920Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:26.920Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:26.920Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.920Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:26.921Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:26.922Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16913 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16913 network=udp
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16914 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:26.924Z [INFO]  jones: Endpoints down
>     --- PASS: TestAgent_sidecarServiceFromNodeService/invalid_check_type (0.34s)
>         writer.go:29: 2020-02-23T02:46:26.932Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:26.932Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:26.933Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:26.946Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f04c239e-567d-b29b-f0a3-0516329d0741 Address:127.0.0.1:16924}]"
>         writer.go:29: 2020-02-23T02:46:26.946Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16924 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:26.947Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-f04c239e-567d-b29b-f0a3-0516329d0741.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-f04c239e-567d-b29b-f0a3-0516329d0741 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-f04c239e-567d-b29b-f0a3-0516329d0741.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones.server: Adding LAN server: server="Node-f04c239e-567d-b29b-f0a3-0516329d0741 (Addr: tcp/127.0.0.1:16924) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones: Started DNS server: address=127.0.0.1:16919 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones: Started DNS server: address=127.0.0.1:16919 network=udp
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16920 network=tcp
>         writer.go:29: 2020-02-23T02:46:26.948Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:26.987Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:26.987Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16924 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:26.990Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:26.990Z [DEBUG] jones.server.raft: vote granted: from=f04c239e-567d-b29b-f0a3-0516329d0741 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:26.990Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:26.990Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16924 [Leader]"
>         writer.go:29: 2020-02-23T02:46:26.990Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:26.990Z [INFO]  jones.server: New leader elected: payload=Node-f04c239e-567d-b29b-f0a3-0516329d0741
>         writer.go:29: 2020-02-23T02:46:26.998Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:27.006Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:27.006Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.006Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-f04c239e-567d-b29b-f0a3-0516329d0741
>         writer.go:29: 2020-02-23T02:46:27.006Z [INFO]  jones.server: member joined, marking health alive: member=Node-f04c239e-567d-b29b-f0a3-0516329d0741
>         writer.go:29: 2020-02-23T02:46:27.039Z [DEBUG] jones: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:27.043Z [INFO]  jones: Synced node info
>         writer.go:29: 2020-02-23T02:46:27.255Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:27.256Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:27.256Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.256Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:27.256Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.258Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16919 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16919 network=udp
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16920 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:27.260Z [INFO]  jones: Endpoints down
>     --- PASS: TestAgent_sidecarServiceFromNodeService/invalid_meta (0.45s)
>         writer.go:29: 2020-02-23T02:46:27.268Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:27.268Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:27.268Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:27.301Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b2089cb8-f59c-9110-bb10-0fd0667156f7 Address:127.0.0.1:16930}]"
>         writer.go:29: 2020-02-23T02:46:27.301Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-b2089cb8-f59c-9110-bb10-0fd0667156f7.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:27.302Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-b2089cb8-f59c-9110-bb10-0fd0667156f7 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:27.302Z [INFO]  jones: Started DNS server: address=127.0.0.1:16925 network=udp
>         writer.go:29: 2020-02-23T02:46:27.302Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16930 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:27.302Z [INFO]  jones.server: Adding LAN server: server="Node-b2089cb8-f59c-9110-bb10-0fd0667156f7 (Addr: tcp/127.0.0.1:16930) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:27.302Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-b2089cb8-f59c-9110-bb10-0fd0667156f7.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:27.302Z [INFO]  jones: Started DNS server: address=127.0.0.1:16925 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.303Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16926 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.303Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:27.357Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:27.357Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16930 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:27.360Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:27.360Z [DEBUG] jones.server.raft: vote granted: from=b2089cb8-f59c-9110-bb10-0fd0667156f7 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:27.360Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:27.360Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16930 [Leader]"
>         writer.go:29: 2020-02-23T02:46:27.360Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:27.360Z [INFO]  jones.server: New leader elected: payload=Node-b2089cb8-f59c-9110-bb10-0fd0667156f7
>         writer.go:29: 2020-02-23T02:46:27.368Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:27.375Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:27.375Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.375Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-b2089cb8-f59c-9110-bb10-0fd0667156f7
>         writer.go:29: 2020-02-23T02:46:27.375Z [INFO]  jones.server: member joined, marking health alive: member=Node-b2089cb8-f59c-9110-bb10-0fd0667156f7
>         writer.go:29: 2020-02-23T02:46:27.672Z [DEBUG] jones: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:27.675Z [INFO]  jones: Synced node info
>         writer.go:29: 2020-02-23T02:46:27.706Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:27.706Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:27.706Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.706Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:27.706Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.708Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:27.709Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:27.709Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:27.709Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:27.709Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16925 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.710Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16925 network=udp
>         writer.go:29: 2020-02-23T02:46:27.710Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16926 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.710Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:27.710Z [INFO]  jones: Endpoints down
>     --- PASS: TestAgent_sidecarServiceFromNodeService/re-registering_same_sidecar_with_no_port_should_pick_same_one (0.43s)
>         writer.go:29: 2020-02-23T02:46:27.717Z [WARN]  jones: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:27.718Z [DEBUG] jones.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:27.718Z [DEBUG] jones.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:27.733Z [INFO]  jones.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dbb5476a-44f7-5fc5-0bb2-11bf3697eca8 Address:127.0.0.1:16936}]"
>         writer.go:29: 2020-02-23T02:46:27.733Z [INFO]  jones.server.serf.wan: serf: EventMemberJoin: Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones.server.serf.lan: serf: EventMemberJoin: Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones: Started DNS server: address=127.0.0.1:16931 network=udp
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones.server.raft: entering follower state: follower="Node at 127.0.0.1:16936 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones.server: Adding LAN server: server="Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8 (Addr: tcp/127.0.0.1:16936) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones.server: Handled event for server in area: event=member-join server=Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones: Started DNS server: address=127.0.0.1:16931 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones: Started HTTP server: address=127.0.0.1:16932 network=tcp
>         writer.go:29: 2020-02-23T02:46:27.734Z [INFO]  jones: started state syncer
>         writer.go:29: 2020-02-23T02:46:27.800Z [WARN]  jones.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:27.800Z [INFO]  jones.server.raft: entering candidate state: node="Node at 127.0.0.1:16936 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:27.803Z [DEBUG] jones.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:27.803Z [DEBUG] jones.server.raft: vote granted: from=dbb5476a-44f7-5fc5-0bb2-11bf3697eca8 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:27.803Z [INFO]  jones.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:27.803Z [INFO]  jones.server.raft: entering leader state: leader="Node at 127.0.0.1:16936 [Leader]"
>         writer.go:29: 2020-02-23T02:46:27.803Z [INFO]  jones.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:27.804Z [INFO]  jones.server: New leader elected: payload=Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8
>         writer.go:29: 2020-02-23T02:46:27.811Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:27.819Z [INFO]  jones.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:27.819Z [INFO]  jones.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:27.819Z [DEBUG] jones.server: Skipping self join check for node since the cluster is too small: node=Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8
>         writer.go:29: 2020-02-23T02:46:27.819Z [INFO]  jones.server: member joined, marking health alive: member=Node-dbb5476a-44f7-5fc5-0bb2-11bf3697eca8
>         writer.go:29: 2020-02-23T02:46:28.135Z [INFO]  jones: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:28.135Z [INFO]  jones.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:28.135Z [DEBUG] jones.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:28.135Z [WARN]  jones.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:28.135Z [ERROR] jones.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:28.135Z [DEBUG] jones.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:28.136Z [ERROR] jones.proxycfg: watch error: id=service-http-checks:web1 error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>         writer.go:29: 2020-02-23T02:46:28.138Z [WARN]  jones.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: consul server down
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: shutdown complete
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16931 network=tcp
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: Stopping server: protocol=DNS address=127.0.0.1:16931 network=udp
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: Stopping server: protocol=HTTP address=127.0.0.1:16932 network=tcp
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:28.139Z [INFO]  jones: Endpoints down
> === RUN   TestSnapshot
> --- SKIP: TestSnapshot (0.00s)
>     snapshot_endpoint_test.go:16: DM-skipped
> === RUN   TestSnapshot_Options
> === PAUSE TestSnapshot_Options
> === RUN   TestStatusLeader
> --- SKIP: TestStatusLeader (0.00s)
>     status_endpoint_test.go:14: DM-skipped
> === RUN   TestStatusLeaderSecondary
> === PAUSE TestStatusLeaderSecondary
> === RUN   TestStatusPeers
> === PAUSE TestStatusPeers
> === RUN   TestStatusPeersSecondary
> === PAUSE TestStatusPeersSecondary
> === RUN   TestDefaultConfig
> === RUN   TestDefaultConfig/#00
> === PAUSE TestDefaultConfig/#00
> [... 410 similar RUN/PAUSE lines for TestDefaultConfig subtests #01 through #205 ...]
> === RUN   TestDefaultConfig/#206
> === PAUSE TestDefaultConfig/#206
> === RUN   TestDefaultConfig/#207
> === PAUSE TestDefaultConfig/#207
> === RUN   TestDefaultConfig/#208
> === PAUSE TestDefaultConfig/#208
> === RUN   TestDefaultConfig/#209
> === PAUSE TestDefaultConfig/#209
> === RUN   TestDefaultConfig/#210
> === PAUSE TestDefaultConfig/#210
> === RUN   TestDefaultConfig/#211
> === PAUSE TestDefaultConfig/#211
> === RUN   TestDefaultConfig/#212
> === PAUSE TestDefaultConfig/#212
> === RUN   TestDefaultConfig/#213
> === PAUSE TestDefaultConfig/#213
> === RUN   TestDefaultConfig/#214
> === PAUSE TestDefaultConfig/#214
> === RUN   TestDefaultConfig/#215
> === PAUSE TestDefaultConfig/#215
> === RUN   TestDefaultConfig/#216
> === PAUSE TestDefaultConfig/#216
> === RUN   TestDefaultConfig/#217
> === PAUSE TestDefaultConfig/#217
> === RUN   TestDefaultConfig/#218
> === PAUSE TestDefaultConfig/#218
> === RUN   TestDefaultConfig/#219
> === PAUSE TestDefaultConfig/#219
> === RUN   TestDefaultConfig/#220
> === PAUSE TestDefaultConfig/#220
> === RUN   TestDefaultConfig/#221
> === PAUSE TestDefaultConfig/#221
> === RUN   TestDefaultConfig/#222
> === PAUSE TestDefaultConfig/#222
> === RUN   TestDefaultConfig/#223
> === PAUSE TestDefaultConfig/#223
> === RUN   TestDefaultConfig/#224
> === PAUSE TestDefaultConfig/#224
> === RUN   TestDefaultConfig/#225
> === PAUSE TestDefaultConfig/#225
> === RUN   TestDefaultConfig/#226
> === PAUSE TestDefaultConfig/#226
> === RUN   TestDefaultConfig/#227
> === PAUSE TestDefaultConfig/#227
> === RUN   TestDefaultConfig/#228
> === PAUSE TestDefaultConfig/#228
> === RUN   TestDefaultConfig/#229
> === PAUSE TestDefaultConfig/#229
> === RUN   TestDefaultConfig/#230
> === PAUSE TestDefaultConfig/#230
> === RUN   TestDefaultConfig/#231
> === PAUSE TestDefaultConfig/#231
> === RUN   TestDefaultConfig/#232
> === PAUSE TestDefaultConfig/#232
> === RUN   TestDefaultConfig/#233
> === PAUSE TestDefaultConfig/#233
> === RUN   TestDefaultConfig/#234
> === PAUSE TestDefaultConfig/#234
> === RUN   TestDefaultConfig/#235
> === PAUSE TestDefaultConfig/#235
> === RUN   TestDefaultConfig/#236
> === PAUSE TestDefaultConfig/#236
> === RUN   TestDefaultConfig/#237
> === PAUSE TestDefaultConfig/#237
> === RUN   TestDefaultConfig/#238
> === PAUSE TestDefaultConfig/#238
> === RUN   TestDefaultConfig/#239
> === PAUSE TestDefaultConfig/#239
> === RUN   TestDefaultConfig/#240
> === PAUSE TestDefaultConfig/#240
> === RUN   TestDefaultConfig/#241
> === PAUSE TestDefaultConfig/#241
> === RUN   TestDefaultConfig/#242
> === PAUSE TestDefaultConfig/#242
> === RUN   TestDefaultConfig/#243
> === PAUSE TestDefaultConfig/#243
> === RUN   TestDefaultConfig/#244
> === PAUSE TestDefaultConfig/#244
> === RUN   TestDefaultConfig/#245
> === PAUSE TestDefaultConfig/#245
> === RUN   TestDefaultConfig/#246
> === PAUSE TestDefaultConfig/#246
> === RUN   TestDefaultConfig/#247
> === PAUSE TestDefaultConfig/#247
> === RUN   TestDefaultConfig/#248
> === PAUSE TestDefaultConfig/#248
> === RUN   TestDefaultConfig/#249
> === PAUSE TestDefaultConfig/#249
> === RUN   TestDefaultConfig/#250
> === PAUSE TestDefaultConfig/#250
> === RUN   TestDefaultConfig/#251
> === PAUSE TestDefaultConfig/#251
> === RUN   TestDefaultConfig/#252
> === PAUSE TestDefaultConfig/#252
> === RUN   TestDefaultConfig/#253
> === PAUSE TestDefaultConfig/#253
> === RUN   TestDefaultConfig/#254
> === PAUSE TestDefaultConfig/#254
> === RUN   TestDefaultConfig/#255
> === PAUSE TestDefaultConfig/#255
> === RUN   TestDefaultConfig/#256
> === PAUSE TestDefaultConfig/#256
> === RUN   TestDefaultConfig/#257
> === PAUSE TestDefaultConfig/#257
> === RUN   TestDefaultConfig/#258
> === PAUSE TestDefaultConfig/#258
> === RUN   TestDefaultConfig/#259
> === PAUSE TestDefaultConfig/#259
> === RUN   TestDefaultConfig/#260
> === PAUSE TestDefaultConfig/#260
> === RUN   TestDefaultConfig/#261
> === PAUSE TestDefaultConfig/#261
> === RUN   TestDefaultConfig/#262
> === PAUSE TestDefaultConfig/#262
> === RUN   TestDefaultConfig/#263
> === PAUSE TestDefaultConfig/#263
> === RUN   TestDefaultConfig/#264
> === PAUSE TestDefaultConfig/#264
> === RUN   TestDefaultConfig/#265
> === PAUSE TestDefaultConfig/#265
> === RUN   TestDefaultConfig/#266
> === PAUSE TestDefaultConfig/#266
> === RUN   TestDefaultConfig/#267
> === PAUSE TestDefaultConfig/#267
> === RUN   TestDefaultConfig/#268
> === PAUSE TestDefaultConfig/#268
> === RUN   TestDefaultConfig/#269
> === PAUSE TestDefaultConfig/#269
> === RUN   TestDefaultConfig/#270
> === PAUSE TestDefaultConfig/#270
> === RUN   TestDefaultConfig/#271
> === PAUSE TestDefaultConfig/#271
> === RUN   TestDefaultConfig/#272
> === PAUSE TestDefaultConfig/#272
> === RUN   TestDefaultConfig/#273
> === PAUSE TestDefaultConfig/#273
> === RUN   TestDefaultConfig/#274
> === PAUSE TestDefaultConfig/#274
> === RUN   TestDefaultConfig/#275
> === PAUSE TestDefaultConfig/#275
> === RUN   TestDefaultConfig/#276
> === PAUSE TestDefaultConfig/#276
> === RUN   TestDefaultConfig/#277
> === PAUSE TestDefaultConfig/#277
> === RUN   TestDefaultConfig/#278
> === PAUSE TestDefaultConfig/#278
> === RUN   TestDefaultConfig/#279
> === PAUSE TestDefaultConfig/#279
> === RUN   TestDefaultConfig/#280
> === PAUSE TestDefaultConfig/#280
> === RUN   TestDefaultConfig/#281
> === PAUSE TestDefaultConfig/#281
> === RUN   TestDefaultConfig/#282
> === PAUSE TestDefaultConfig/#282
> === RUN   TestDefaultConfig/#283
> === PAUSE TestDefaultConfig/#283
> === RUN   TestDefaultConfig/#284
> === PAUSE TestDefaultConfig/#284
> === RUN   TestDefaultConfig/#285
> === PAUSE TestDefaultConfig/#285
> === RUN   TestDefaultConfig/#286
> === PAUSE TestDefaultConfig/#286
> === RUN   TestDefaultConfig/#287
> === PAUSE TestDefaultConfig/#287
> === RUN   TestDefaultConfig/#288
> === PAUSE TestDefaultConfig/#288
> === RUN   TestDefaultConfig/#289
> === PAUSE TestDefaultConfig/#289
> === RUN   TestDefaultConfig/#290
> === PAUSE TestDefaultConfig/#290
> === RUN   TestDefaultConfig/#291
> === PAUSE TestDefaultConfig/#291
> === RUN   TestDefaultConfig/#292
> === PAUSE TestDefaultConfig/#292
> === RUN   TestDefaultConfig/#293
> === PAUSE TestDefaultConfig/#293
> === RUN   TestDefaultConfig/#294
> === PAUSE TestDefaultConfig/#294
> === RUN   TestDefaultConfig/#295
> === PAUSE TestDefaultConfig/#295
> === RUN   TestDefaultConfig/#296
> === PAUSE TestDefaultConfig/#296
> === RUN   TestDefaultConfig/#297
> === PAUSE TestDefaultConfig/#297
> === RUN   TestDefaultConfig/#298
> === PAUSE TestDefaultConfig/#298
> === RUN   TestDefaultConfig/#299
> === PAUSE TestDefaultConfig/#299
> === RUN   TestDefaultConfig/#300
> === PAUSE TestDefaultConfig/#300
> === RUN   TestDefaultConfig/#301
> === PAUSE TestDefaultConfig/#301
> === RUN   TestDefaultConfig/#302
> === PAUSE TestDefaultConfig/#302
> === RUN   TestDefaultConfig/#303
> === PAUSE TestDefaultConfig/#303
> === RUN   TestDefaultConfig/#304
> === PAUSE TestDefaultConfig/#304
> === RUN   TestDefaultConfig/#305
> === PAUSE TestDefaultConfig/#305
> === RUN   TestDefaultConfig/#306
> === PAUSE TestDefaultConfig/#306
> === RUN   TestDefaultConfig/#307
> === PAUSE TestDefaultConfig/#307
> === RUN   TestDefaultConfig/#308
> === PAUSE TestDefaultConfig/#308
> === RUN   TestDefaultConfig/#309
> === PAUSE TestDefaultConfig/#309
> === RUN   TestDefaultConfig/#310
> === PAUSE TestDefaultConfig/#310
> === RUN   TestDefaultConfig/#311
> === PAUSE TestDefaultConfig/#311
> === RUN   TestDefaultConfig/#312
> === PAUSE TestDefaultConfig/#312
> === RUN   TestDefaultConfig/#313
> === PAUSE TestDefaultConfig/#313
> === RUN   TestDefaultConfig/#314
> === PAUSE TestDefaultConfig/#314
> === RUN   TestDefaultConfig/#315
> === PAUSE TestDefaultConfig/#315
> === RUN   TestDefaultConfig/#316
> === PAUSE TestDefaultConfig/#316
> === RUN   TestDefaultConfig/#317
> === PAUSE TestDefaultConfig/#317
> === RUN   TestDefaultConfig/#318
> === PAUSE TestDefaultConfig/#318
> === RUN   TestDefaultConfig/#319
> === PAUSE TestDefaultConfig/#319
> === RUN   TestDefaultConfig/#320
> === PAUSE TestDefaultConfig/#320
> === RUN   TestDefaultConfig/#321
> === PAUSE TestDefaultConfig/#321
> === RUN   TestDefaultConfig/#322
> === PAUSE TestDefaultConfig/#322
> === RUN   TestDefaultConfig/#323
> === PAUSE TestDefaultConfig/#323
> === RUN   TestDefaultConfig/#324
> === PAUSE TestDefaultConfig/#324
> === RUN   TestDefaultConfig/#325
> === PAUSE TestDefaultConfig/#325
> === RUN   TestDefaultConfig/#326
> === PAUSE TestDefaultConfig/#326
> === RUN   TestDefaultConfig/#327
> === PAUSE TestDefaultConfig/#327
> === RUN   TestDefaultConfig/#328
> === PAUSE TestDefaultConfig/#328
> === RUN   TestDefaultConfig/#329
> === PAUSE TestDefaultConfig/#329
> === RUN   TestDefaultConfig/#330
> === PAUSE TestDefaultConfig/#330
> === RUN   TestDefaultConfig/#331
> === PAUSE TestDefaultConfig/#331
> === RUN   TestDefaultConfig/#332
> === PAUSE TestDefaultConfig/#332
> === RUN   TestDefaultConfig/#333
> === PAUSE TestDefaultConfig/#333
> === RUN   TestDefaultConfig/#334
> === PAUSE TestDefaultConfig/#334
> === RUN   TestDefaultConfig/#335
> === PAUSE TestDefaultConfig/#335
> === RUN   TestDefaultConfig/#336
> === PAUSE TestDefaultConfig/#336
> === RUN   TestDefaultConfig/#337
> === PAUSE TestDefaultConfig/#337
> === RUN   TestDefaultConfig/#338
> === PAUSE TestDefaultConfig/#338
> === RUN   TestDefaultConfig/#339
> === PAUSE TestDefaultConfig/#339
> === RUN   TestDefaultConfig/#340
> === PAUSE TestDefaultConfig/#340
> === RUN   TestDefaultConfig/#341
> === PAUSE TestDefaultConfig/#341
> === RUN   TestDefaultConfig/#342
> === PAUSE TestDefaultConfig/#342
> === RUN   TestDefaultConfig/#343
> === PAUSE TestDefaultConfig/#343
> === RUN   TestDefaultConfig/#344
> === PAUSE TestDefaultConfig/#344
> === RUN   TestDefaultConfig/#345
> === PAUSE TestDefaultConfig/#345
> === RUN   TestDefaultConfig/#346
> === PAUSE TestDefaultConfig/#346
> === RUN   TestDefaultConfig/#347
> === PAUSE TestDefaultConfig/#347
> === RUN   TestDefaultConfig/#348
> === PAUSE TestDefaultConfig/#348
> === RUN   TestDefaultConfig/#349
> === PAUSE TestDefaultConfig/#349
> === RUN   TestDefaultConfig/#350
> === PAUSE TestDefaultConfig/#350
> === RUN   TestDefaultConfig/#351
> === PAUSE TestDefaultConfig/#351
> === RUN   TestDefaultConfig/#352
> === PAUSE TestDefaultConfig/#352
> === RUN   TestDefaultConfig/#353
> === PAUSE TestDefaultConfig/#353
> === RUN   TestDefaultConfig/#354
> === PAUSE TestDefaultConfig/#354
> === RUN   TestDefaultConfig/#355
> === PAUSE TestDefaultConfig/#355
> === RUN   TestDefaultConfig/#356
> === PAUSE TestDefaultConfig/#356
> === RUN   TestDefaultConfig/#357
> === PAUSE TestDefaultConfig/#357
> === RUN   TestDefaultConfig/#358
> === PAUSE TestDefaultConfig/#358
> === RUN   TestDefaultConfig/#359
> === PAUSE TestDefaultConfig/#359
> === RUN   TestDefaultConfig/#360
> === PAUSE TestDefaultConfig/#360
> === RUN   TestDefaultConfig/#361
> === PAUSE TestDefaultConfig/#361
> === RUN   TestDefaultConfig/#362
> === PAUSE TestDefaultConfig/#362
> === RUN   TestDefaultConfig/#363
> === PAUSE TestDefaultConfig/#363
> === RUN   TestDefaultConfig/#364
> === PAUSE TestDefaultConfig/#364
> === RUN   TestDefaultConfig/#365
> === PAUSE TestDefaultConfig/#365
> === RUN   TestDefaultConfig/#366
> === PAUSE TestDefaultConfig/#366
> === RUN   TestDefaultConfig/#367
> === PAUSE TestDefaultConfig/#367
> === RUN   TestDefaultConfig/#368
> === PAUSE TestDefaultConfig/#368
> === RUN   TestDefaultConfig/#369
> === PAUSE TestDefaultConfig/#369
> === RUN   TestDefaultConfig/#370
> === PAUSE TestDefaultConfig/#370
> === RUN   TestDefaultConfig/#371
> === PAUSE TestDefaultConfig/#371
> === RUN   TestDefaultConfig/#372
> === PAUSE TestDefaultConfig/#372
> === RUN   TestDefaultConfig/#373
> === PAUSE TestDefaultConfig/#373
> === RUN   TestDefaultConfig/#374
> === PAUSE TestDefaultConfig/#374
> === RUN   TestDefaultConfig/#375
> === PAUSE TestDefaultConfig/#375
> === RUN   TestDefaultConfig/#376
> === PAUSE TestDefaultConfig/#376
> === RUN   TestDefaultConfig/#377
> === PAUSE TestDefaultConfig/#377
> === RUN   TestDefaultConfig/#378
> === PAUSE TestDefaultConfig/#378
> === RUN   TestDefaultConfig/#379
> === PAUSE TestDefaultConfig/#379
> === RUN   TestDefaultConfig/#380
> === PAUSE TestDefaultConfig/#380
> === RUN   TestDefaultConfig/#381
> === PAUSE TestDefaultConfig/#381
> === RUN   TestDefaultConfig/#382
> === PAUSE TestDefaultConfig/#382
> === RUN   TestDefaultConfig/#383
> === PAUSE TestDefaultConfig/#383
> === RUN   TestDefaultConfig/#384
> === PAUSE TestDefaultConfig/#384
> === RUN   TestDefaultConfig/#385
> === PAUSE TestDefaultConfig/#385
> === RUN   TestDefaultConfig/#386
> === PAUSE TestDefaultConfig/#386
> === RUN   TestDefaultConfig/#387
> === PAUSE TestDefaultConfig/#387
> === RUN   TestDefaultConfig/#388
> === PAUSE TestDefaultConfig/#388
> === RUN   TestDefaultConfig/#389
> === PAUSE TestDefaultConfig/#389
> === RUN   TestDefaultConfig/#390
> === PAUSE TestDefaultConfig/#390
> === RUN   TestDefaultConfig/#391
> === PAUSE TestDefaultConfig/#391
> === RUN   TestDefaultConfig/#392
> === PAUSE TestDefaultConfig/#392
> === RUN   TestDefaultConfig/#393
> === PAUSE TestDefaultConfig/#393
> === RUN   TestDefaultConfig/#394
> === PAUSE TestDefaultConfig/#394
> === RUN   TestDefaultConfig/#395
> === PAUSE TestDefaultConfig/#395
> === RUN   TestDefaultConfig/#396
> === PAUSE TestDefaultConfig/#396
> === RUN   TestDefaultConfig/#397
> === PAUSE TestDefaultConfig/#397
> === RUN   TestDefaultConfig/#398
> === PAUSE TestDefaultConfig/#398
> === RUN   TestDefaultConfig/#399
> === PAUSE TestDefaultConfig/#399
> === RUN   TestDefaultConfig/#400
> === PAUSE TestDefaultConfig/#400
> === RUN   TestDefaultConfig/#401
> === PAUSE TestDefaultConfig/#401
> === RUN   TestDefaultConfig/#402
> === PAUSE TestDefaultConfig/#402
> === RUN   TestDefaultConfig/#403
> === PAUSE TestDefaultConfig/#403
> === RUN   TestDefaultConfig/#404
> === PAUSE TestDefaultConfig/#404
> === RUN   TestDefaultConfig/#405
> === PAUSE TestDefaultConfig/#405
> === RUN   TestDefaultConfig/#406
> === PAUSE TestDefaultConfig/#406
> === RUN   TestDefaultConfig/#407
> === PAUSE TestDefaultConfig/#407
> === RUN   TestDefaultConfig/#408
> === PAUSE TestDefaultConfig/#408
> === RUN   TestDefaultConfig/#409
> === PAUSE TestDefaultConfig/#409
> === RUN   TestDefaultConfig/#410
> === PAUSE TestDefaultConfig/#410
> === RUN   TestDefaultConfig/#411
> === PAUSE TestDefaultConfig/#411
> === RUN   TestDefaultConfig/#412
> === PAUSE TestDefaultConfig/#412
> === RUN   TestDefaultConfig/#413
> === PAUSE TestDefaultConfig/#413
> === RUN   TestDefaultConfig/#414
> === PAUSE TestDefaultConfig/#414
> === RUN   TestDefaultConfig/#415
> === PAUSE TestDefaultConfig/#415
> === RUN   TestDefaultConfig/#416
> === PAUSE TestDefaultConfig/#416
> === RUN   TestDefaultConfig/#417
> === PAUSE TestDefaultConfig/#417
> === RUN   TestDefaultConfig/#418
> === PAUSE TestDefaultConfig/#418
> === RUN   TestDefaultConfig/#419
> === PAUSE TestDefaultConfig/#419
> === RUN   TestDefaultConfig/#420
> === PAUSE TestDefaultConfig/#420
> === RUN   TestDefaultConfig/#421
> === PAUSE TestDefaultConfig/#421
> === RUN   TestDefaultConfig/#422
> === PAUSE TestDefaultConfig/#422
> === RUN   TestDefaultConfig/#423
> === PAUSE TestDefaultConfig/#423
> === RUN   TestDefaultConfig/#424
> === PAUSE TestDefaultConfig/#424
> === RUN   TestDefaultConfig/#425
> === PAUSE TestDefaultConfig/#425
> === RUN   TestDefaultConfig/#426
> === PAUSE TestDefaultConfig/#426
> === RUN   TestDefaultConfig/#427
> === PAUSE TestDefaultConfig/#427
> === RUN   TestDefaultConfig/#428
> === PAUSE TestDefaultConfig/#428
> === RUN   TestDefaultConfig/#429
> === PAUSE TestDefaultConfig/#429
> === RUN   TestDefaultConfig/#430
> === PAUSE TestDefaultConfig/#430
> === RUN   TestDefaultConfig/#431
> === PAUSE TestDefaultConfig/#431
> === RUN   TestDefaultConfig/#432
> === PAUSE TestDefaultConfig/#432
> === RUN   TestDefaultConfig/#433
> === PAUSE TestDefaultConfig/#433
> === RUN   TestDefaultConfig/#434
> === PAUSE TestDefaultConfig/#434
> === RUN   TestDefaultConfig/#435
> === PAUSE TestDefaultConfig/#435
> === RUN   TestDefaultConfig/#436
> === PAUSE TestDefaultConfig/#436
> === RUN   TestDefaultConfig/#437
> === PAUSE TestDefaultConfig/#437
> === RUN   TestDefaultConfig/#438
> === PAUSE TestDefaultConfig/#438
> === RUN   TestDefaultConfig/#439
> === PAUSE TestDefaultConfig/#439
> === RUN   TestDefaultConfig/#440
> === PAUSE TestDefaultConfig/#440
> === RUN   TestDefaultConfig/#441
> === PAUSE TestDefaultConfig/#441
> === RUN   TestDefaultConfig/#442
> === PAUSE TestDefaultConfig/#442
> === RUN   TestDefaultConfig/#443
> === PAUSE TestDefaultConfig/#443
> === RUN   TestDefaultConfig/#444
> === PAUSE TestDefaultConfig/#444
> === RUN   TestDefaultConfig/#445
> === PAUSE TestDefaultConfig/#445
> === RUN   TestDefaultConfig/#446
> === PAUSE TestDefaultConfig/#446
> === RUN   TestDefaultConfig/#447
> === PAUSE TestDefaultConfig/#447
> === RUN   TestDefaultConfig/#448
> === PAUSE TestDefaultConfig/#448
> === RUN   TestDefaultConfig/#449
> === PAUSE TestDefaultConfig/#449
> === RUN   TestDefaultConfig/#450
> === PAUSE TestDefaultConfig/#450
> === RUN   TestDefaultConfig/#451
> === PAUSE TestDefaultConfig/#451
> === RUN   TestDefaultConfig/#452
> === PAUSE TestDefaultConfig/#452
> === RUN   TestDefaultConfig/#453
> === PAUSE TestDefaultConfig/#453
> === RUN   TestDefaultConfig/#454
> === PAUSE TestDefaultConfig/#454
> === RUN   TestDefaultConfig/#455
> === PAUSE TestDefaultConfig/#455
> === RUN   TestDefaultConfig/#456
> === PAUSE TestDefaultConfig/#456
> === RUN   TestDefaultConfig/#457
> === PAUSE TestDefaultConfig/#457
> === RUN   TestDefaultConfig/#458
> === PAUSE TestDefaultConfig/#458
> === RUN   TestDefaultConfig/#459
> === PAUSE TestDefaultConfig/#459
> === RUN   TestDefaultConfig/#460
> === PAUSE TestDefaultConfig/#460
> === RUN   TestDefaultConfig/#461
> === PAUSE TestDefaultConfig/#461
> === RUN   TestDefaultConfig/#462
> === PAUSE TestDefaultConfig/#462
> === RUN   TestDefaultConfig/#463
> === PAUSE TestDefaultConfig/#463
> === RUN   TestDefaultConfig/#464
> === PAUSE TestDefaultConfig/#464
> === RUN   TestDefaultConfig/#465
> === PAUSE TestDefaultConfig/#465
> === RUN   TestDefaultConfig/#466
> === PAUSE TestDefaultConfig/#466
> === RUN   TestDefaultConfig/#467
> === PAUSE TestDefaultConfig/#467
> === RUN   TestDefaultConfig/#468
> === PAUSE TestDefaultConfig/#468
> === RUN   TestDefaultConfig/#469
> === PAUSE TestDefaultConfig/#469
> === RUN   TestDefaultConfig/#470
> === PAUSE TestDefaultConfig/#470
> === RUN   TestDefaultConfig/#471
> === PAUSE TestDefaultConfig/#471
> === RUN   TestDefaultConfig/#472
> === PAUSE TestDefaultConfig/#472
> === RUN   TestDefaultConfig/#473
> === PAUSE TestDefaultConfig/#473
> === RUN   TestDefaultConfig/#474
> === PAUSE TestDefaultConfig/#474
> === RUN   TestDefaultConfig/#475
> === PAUSE TestDefaultConfig/#475
> === RUN   TestDefaultConfig/#476
> === PAUSE TestDefaultConfig/#476
> === RUN   TestDefaultConfig/#477
> === PAUSE TestDefaultConfig/#477
> === RUN   TestDefaultConfig/#478
> === PAUSE TestDefaultConfig/#478
> === RUN   TestDefaultConfig/#479
> === PAUSE TestDefaultConfig/#479
> === RUN   TestDefaultConfig/#480
> === PAUSE TestDefaultConfig/#480
> === RUN   TestDefaultConfig/#481
> === PAUSE TestDefaultConfig/#481
> === RUN   TestDefaultConfig/#482
> === PAUSE TestDefaultConfig/#482
> === RUN   TestDefaultConfig/#483
> === PAUSE TestDefaultConfig/#483
> === RUN   TestDefaultConfig/#484
> === PAUSE TestDefaultConfig/#484
> === RUN   TestDefaultConfig/#485
> === PAUSE TestDefaultConfig/#485
> === RUN   TestDefaultConfig/#486
> === PAUSE TestDefaultConfig/#486
> === RUN   TestDefaultConfig/#487
> === PAUSE TestDefaultConfig/#487
> === RUN   TestDefaultConfig/#488
> === PAUSE TestDefaultConfig/#488
> === RUN   TestDefaultConfig/#489
> === PAUSE TestDefaultConfig/#489
> === RUN   TestDefaultConfig/#490
> === PAUSE TestDefaultConfig/#490
> === RUN   TestDefaultConfig/#491
> === PAUSE TestDefaultConfig/#491
> === RUN   TestDefaultConfig/#492
> === PAUSE TestDefaultConfig/#492
> === RUN   TestDefaultConfig/#493
> === PAUSE TestDefaultConfig/#493
> === RUN   TestDefaultConfig/#494
> === PAUSE TestDefaultConfig/#494
> === RUN   TestDefaultConfig/#495
> === PAUSE TestDefaultConfig/#495
> === RUN   TestDefaultConfig/#496
> === PAUSE TestDefaultConfig/#496
> === RUN   TestDefaultConfig/#497
> === PAUSE TestDefaultConfig/#497
> === RUN   TestDefaultConfig/#498
> === PAUSE TestDefaultConfig/#498
> === RUN   TestDefaultConfig/#499
> === PAUSE TestDefaultConfig/#499
> === CONT  TestDefaultConfig/#469
> [... "=== CONT" lines resuming the paused TestDefaultConfig subtests in nondeterministic scheduler order, elided for brevity ...]
> === CONT  TestDefaultConfig/#171
> === CONT  TestDefaultConfig/#175
> === CONT  TestDefaultConfig/#170
> === CONT  TestDefaultConfig/#168
> === CONT  TestDefaultConfig/#169
> === CONT  TestDefaultConfig/#167
> === CONT  TestDefaultConfig/#166
> === CONT  TestDefaultConfig/#165
> === CONT  TestDefaultConfig/#164
> === CONT  TestDefaultConfig/#163
> === CONT  TestDefaultConfig/#162
> === CONT  TestDefaultConfig/#161
> === CONT  TestDefaultConfig/#160
> === CONT  TestDefaultConfig/#159
> === CONT  TestDefaultConfig/#158
> === CONT  TestDefaultConfig/#157
> === CONT  TestDefaultConfig/#156
> === CONT  TestDefaultConfig/#155
> === CONT  TestDefaultConfig/#153
> === CONT  TestDefaultConfig/#154
> === CONT  TestDefaultConfig/#152
> === CONT  TestDefaultConfig/#151
> === CONT  TestDefaultConfig/#150
> === CONT  TestDefaultConfig/#149
> === CONT  TestDefaultConfig/#148
> === CONT  TestDefaultConfig/#147
> === CONT  TestDefaultConfig/#146
> === CONT  TestDefaultConfig/#145
> === CONT  TestDefaultConfig/#144
> === CONT  TestDefaultConfig/#143
> === CONT  TestDefaultConfig/#142
> === CONT  TestDefaultConfig/#141
> === CONT  TestDefaultConfig/#140
> === CONT  TestDefaultConfig/#139
> === CONT  TestDefaultConfig/#138
> === CONT  TestDefaultConfig/#136
> === CONT  TestDefaultConfig/#135
> === CONT  TestDefaultConfig/#134
> === CONT  TestDefaultConfig/#137
> === CONT  TestDefaultConfig/#133
> === CONT  TestDefaultConfig/#132
> === CONT  TestDefaultConfig/#131
> === CONT  TestDefaultConfig/#130
> === CONT  TestDefaultConfig/#129
> === CONT  TestDefaultConfig/#128
> === CONT  TestDefaultConfig/#127
> === CONT  TestDefaultConfig/#126
> === CONT  TestDefaultConfig/#125
> === CONT  TestDefaultConfig/#124
> === CONT  TestDefaultConfig/#123
> === CONT  TestDefaultConfig/#121
> === CONT  TestDefaultConfig/#122
> === CONT  TestDefaultConfig/#120
> === CONT  TestDefaultConfig/#118
> === CONT  TestDefaultConfig/#119
> === CONT  TestDefaultConfig/#117
> === CONT  TestDefaultConfig/#116
> === CONT  TestDefaultConfig/#114
> === CONT  TestDefaultConfig/#115
> === CONT  TestDefaultConfig/#113
> === CONT  TestDefaultConfig/#112
> === CONT  TestDefaultConfig/#111
> === CONT  TestDefaultConfig/#110
> === CONT  TestDefaultConfig/#109
> === CONT  TestDefaultConfig/#108
> === CONT  TestDefaultConfig/#107
> === CONT  TestDefaultConfig/#106
> === CONT  TestDefaultConfig/#105
> === CONT  TestDefaultConfig/#104
> === CONT  TestDefaultConfig/#103
> === CONT  TestDefaultConfig/#102
> === CONT  TestDefaultConfig/#101
> === CONT  TestDefaultConfig/#100
> === CONT  TestDefaultConfig/#99
> === CONT  TestDefaultConfig/#98
> === CONT  TestDefaultConfig/#97
> === CONT  TestDefaultConfig/#96
> === CONT  TestDefaultConfig/#95
> === CONT  TestDefaultConfig/#94
> === CONT  TestDefaultConfig/#93
> === CONT  TestDefaultConfig/#92
> === CONT  TestDefaultConfig/#45
> === CONT  TestDefaultConfig/#90
> === CONT  TestDefaultConfig/#89
> === CONT  TestDefaultConfig/#88
> === CONT  TestDefaultConfig/#87
> === CONT  TestDefaultConfig/#86
> === CONT  TestDefaultConfig/#85
> === CONT  TestDefaultConfig/#84
> === CONT  TestDefaultConfig/#83
> === CONT  TestDefaultConfig/#82
> === CONT  TestDefaultConfig/#81
> === CONT  TestDefaultConfig/#80
> === CONT  TestDefaultConfig/#79
> === CONT  TestDefaultConfig/#78
> === CONT  TestDefaultConfig/#77
> === CONT  TestDefaultConfig/#76
> === CONT  TestDefaultConfig/#75
> === CONT  TestDefaultConfig/#74
> === CONT  TestDefaultConfig/#73
> === CONT  TestDefaultConfig/#72
> === CONT  TestDefaultConfig/#71
> === CONT  TestDefaultConfig/#70
> === CONT  TestDefaultConfig/#69
> === CONT  TestDefaultConfig/#68
> === CONT  TestDefaultConfig/#67
> === CONT  TestDefaultConfig/#66
> === CONT  TestDefaultConfig/#65
> === CONT  TestDefaultConfig/#64
> === CONT  TestDefaultConfig/#63
> === CONT  TestDefaultConfig/#62
> === CONT  TestDefaultConfig/#61
> === CONT  TestDefaultConfig/#60
> === CONT  TestDefaultConfig/#58
> === CONT  TestDefaultConfig/#57
> === CONT  TestDefaultConfig/#59
> === CONT  TestDefaultConfig/#56
> === CONT  TestDefaultConfig/#55
> === CONT  TestDefaultConfig/#54
> === CONT  TestDefaultConfig/#53
> === CONT  TestDefaultConfig/#52
> === CONT  TestDefaultConfig/#50
> === CONT  TestDefaultConfig/#49
> === CONT  TestDefaultConfig/#48
> === CONT  TestDefaultConfig/#47
> === CONT  TestDefaultConfig/#46
> === CONT  TestDefaultConfig/#23
> === CONT  TestDefaultConfig/#44
> === CONT  TestDefaultConfig/#43
> === CONT  TestDefaultConfig/#42
> === CONT  TestDefaultConfig/#41
> === CONT  TestDefaultConfig/#40
> === CONT  TestDefaultConfig/#39
> === CONT  TestDefaultConfig/#38
> === CONT  TestDefaultConfig/#37
> === CONT  TestDefaultConfig/#36
> === CONT  TestDefaultConfig/#35
> === CONT  TestDefaultConfig/#34
> === CONT  TestDefaultConfig/#33
> === CONT  TestDefaultConfig/#32
> === CONT  TestDefaultConfig/#31
> === CONT  TestDefaultConfig/#30
> === CONT  TestDefaultConfig/#28
> === CONT  TestDefaultConfig/#29
> === CONT  TestDefaultConfig/#27
> === CONT  TestDefaultConfig/#26
> === CONT  TestDefaultConfig/#24
> === CONT  TestDefaultConfig/#12
> === CONT  TestDefaultConfig/#25
> === CONT  TestDefaultConfig/#22
> === CONT  TestDefaultConfig/#21
> === CONT  TestDefaultConfig/#20
> === CONT  TestDefaultConfig/#19
> === CONT  TestDefaultConfig/#18
> === CONT  TestDefaultConfig/#17
> === CONT  TestDefaultConfig/#16
> === CONT  TestDefaultConfig/#15
> === CONT  TestDefaultConfig/#14
> === CONT  TestDefaultConfig/#13
> === CONT  TestDefaultConfig/#06
> === CONT  TestDefaultConfig/#11
> === CONT  TestDefaultConfig/#09
> === CONT  TestDefaultConfig/#08
> === CONT  TestDefaultConfig/#07
> === CONT  TestDefaultConfig/#10
> === CONT  TestDefaultConfig/#05
> === CONT  TestDefaultConfig/#04
> === CONT  TestDefaultConfig/#01
> === CONT  TestDefaultConfig/#03
> === CONT  TestDefaultConfig/#02
> === CONT  TestDefaultConfig/#436
> === CONT  TestDefaultConfig/#00
> --- PASS: TestDefaultConfig (0.01s)
>     --- PASS: TestDefaultConfig/#477 (0.01s)
>     --- PASS: TestDefaultConfig/#469 (0.02s)
>     --- PASS: TestDefaultConfig/#476 (0.01s)
>     --- PASS: TestDefaultConfig/#485 (0.02s)
>     --- PASS: TestDefaultConfig/#474 (0.01s)
>     --- PASS: TestDefaultConfig/#472 (0.01s)
>     --- PASS: TestDefaultConfig/#475 (0.02s)
>     --- PASS: TestDefaultConfig/#484 (0.04s)
>     --- PASS: TestDefaultConfig/#470 (0.03s)
>     --- PASS: TestDefaultConfig/#471 (0.04s)
>     --- PASS: TestDefaultConfig/#454 (0.04s)
>     --- PASS: TestDefaultConfig/#473 (0.07s)
>     --- PASS: TestDefaultConfig/#466 (0.01s)
>     --- PASS: TestDefaultConfig/#465 (0.01s)
>     --- PASS: TestDefaultConfig/#467 (0.02s)
>     --- PASS: TestDefaultConfig/#468 (0.02s)
>     --- PASS: TestDefaultConfig/#462 (0.01s)
>     --- PASS: TestDefaultConfig/#460 (0.01s)
>     --- PASS: TestDefaultConfig/#464 (0.01s)
>     --- PASS: TestDefaultConfig/#463 (0.01s)
>     --- PASS: TestDefaultConfig/#461 (0.02s)
>     --- PASS: TestDefaultConfig/#459 (0.04s)
>     --- PASS: TestDefaultConfig/#457 (0.05s)
>     --- PASS: TestDefaultConfig/#458 (0.05s)
>     --- PASS: TestDefaultConfig/#456 (0.05s)
>     --- PASS: TestDefaultConfig/#483 (0.01s)
>     --- PASS: TestDefaultConfig/#455 (0.01s)
>     --- PASS: TestDefaultConfig/#482 (0.02s)
>     --- PASS: TestDefaultConfig/#481 (0.02s)
>     --- PASS: TestDefaultConfig/#493 (0.02s)
>     --- PASS: TestDefaultConfig/#498 (0.01s)
>     --- PASS: TestDefaultConfig/#499 (0.02s)
>     --- PASS: TestDefaultConfig/#495 (0.01s)
>     --- PASS: TestDefaultConfig/#494 (0.03s)
>     --- PASS: TestDefaultConfig/#497 (0.07s)
>     --- PASS: TestDefaultConfig/#479 (0.06s)
>     --- PASS: TestDefaultConfig/#496 (0.07s)
>     --- PASS: TestDefaultConfig/#446 (0.01s)
>     --- PASS: TestDefaultConfig/#453 (0.01s)
>     --- PASS: TestDefaultConfig/#452 (0.01s)
>     --- PASS: TestDefaultConfig/#450 (0.01s)
>     --- PASS: TestDefaultConfig/#480 (0.04s)
>     --- PASS: TestDefaultConfig/#451 (0.01s)
>     --- PASS: TestDefaultConfig/#489 (0.01s)
>     --- PASS: TestDefaultConfig/#448 (0.01s)
>     --- PASS: TestDefaultConfig/#491 (0.01s)
>     --- PASS: TestDefaultConfig/#447 (0.02s)
>     --- PASS: TestDefaultConfig/#492 (0.02s)
>     --- PASS: TestDefaultConfig/#449 (0.05s)
>     --- PASS: TestDefaultConfig/#490 (0.04s)
>     --- PASS: TestDefaultConfig/#442 (0.01s)
>     --- PASS: TestDefaultConfig/#478 (0.05s)
>     --- PASS: TestDefaultConfig/#487 (0.02s)
>     --- PASS: TestDefaultConfig/#445 (0.01s)
>     --- PASS: TestDefaultConfig/#488 (0.03s)
>     --- PASS: TestDefaultConfig/#444 (0.01s)
>     --- PASS: TestDefaultConfig/#441 (0.01s)
>     --- PASS: TestDefaultConfig/#443 (0.02s)
>     --- PASS: TestDefaultConfig/#439 (0.01s)
>     --- PASS: TestDefaultConfig/#440 (0.02s)
>     --- PASS: TestDefaultConfig/#486 (0.01s)
>     --- PASS: TestDefaultConfig/#437 (0.01s)
>     --- PASS: TestDefaultConfig/#438 (0.02s)
>     --- PASS: TestDefaultConfig/#434 (0.03s)
>     --- PASS: TestDefaultConfig/#435 (0.04s)
>     --- PASS: TestDefaultConfig/#431 (0.01s)
>     --- PASS: TestDefaultConfig/#433 (0.04s)
>     --- PASS: TestDefaultConfig/#370 (0.08s)
>     --- PASS: TestDefaultConfig/#432 (0.06s)
>     --- PASS: TestDefaultConfig/#428 (0.01s)
>     --- PASS: TestDefaultConfig/#429 (0.04s)
>     --- PASS: TestDefaultConfig/#430 (0.04s)
>     --- PASS: TestDefaultConfig/#426 (0.01s)
>     --- PASS: TestDefaultConfig/#424 (0.01s)
>     --- PASS: TestDefaultConfig/#427 (0.01s)
>     --- PASS: TestDefaultConfig/#425 (0.01s)
>     --- PASS: TestDefaultConfig/#421 (0.01s)
>     --- PASS: TestDefaultConfig/#423 (0.05s)
>     --- PASS: TestDefaultConfig/#419 (0.04s)
>     --- PASS: TestDefaultConfig/#420 (0.04s)
>     --- PASS: TestDefaultConfig/#418 (0.02s)
>     --- PASS: TestDefaultConfig/#422 (0.08s)
>     --- PASS: TestDefaultConfig/#415 (0.02s)
>     --- PASS: TestDefaultConfig/#417 (0.05s)
>     --- PASS: TestDefaultConfig/#413 (0.02s)
>     --- PASS: TestDefaultConfig/#414 (0.03s)
>     --- PASS: TestDefaultConfig/#416 (0.06s)
>     --- PASS: TestDefaultConfig/#410 (0.01s)
>     --- PASS: TestDefaultConfig/#409 (0.01s)
>     --- PASS: TestDefaultConfig/#411 (0.02s)
>     --- PASS: TestDefaultConfig/#412 (0.01s)
>     --- PASS: TestDefaultConfig/#407 (0.02s)
>     --- PASS: TestDefaultConfig/#406 (0.02s)
>     --- PASS: TestDefaultConfig/#405 (0.01s)
>     --- PASS: TestDefaultConfig/#403 (0.01s)
>     --- PASS: TestDefaultConfig/#408 (0.04s)
>     --- PASS: TestDefaultConfig/#401 (0.01s)
>     --- PASS: TestDefaultConfig/#404 (0.02s)
>     --- PASS: TestDefaultConfig/#398 (0.02s)
>     --- PASS: TestDefaultConfig/#400 (0.03s)
>     --- PASS: TestDefaultConfig/#399 (0.02s)
>     --- PASS: TestDefaultConfig/#397 (0.01s)
>     --- PASS: TestDefaultConfig/#402 (0.06s)
>     --- PASS: TestDefaultConfig/#396 (0.03s)
>     --- PASS: TestDefaultConfig/#393 (0.02s)
>     --- PASS: TestDefaultConfig/#395 (0.03s)
>     --- PASS: TestDefaultConfig/#394 (0.03s)
>     --- PASS: TestDefaultConfig/#392 (0.03s)
>     --- PASS: TestDefaultConfig/#390 (0.02s)
>     --- PASS: TestDefaultConfig/#388 (0.01s)
>     --- PASS: TestDefaultConfig/#387 (0.01s)
>     --- PASS: TestDefaultConfig/#391 (0.03s)
>     --- PASS: TestDefaultConfig/#384 (0.01s)
>     --- PASS: TestDefaultConfig/#386 (0.01s)
>     --- PASS: TestDefaultConfig/#389 (0.03s)
>     --- PASS: TestDefaultConfig/#385 (0.01s)
>     --- PASS: TestDefaultConfig/#380 (0.01s)
>     --- PASS: TestDefaultConfig/#383 (0.02s)
>     --- PASS: TestDefaultConfig/#382 (0.02s)
>     --- PASS: TestDefaultConfig/#381 (0.01s)
>     --- PASS: TestDefaultConfig/#377 (0.01s)
>     --- PASS: TestDefaultConfig/#378 (0.02s)
>     --- PASS: TestDefaultConfig/#379 (0.03s)
>     --- PASS: TestDefaultConfig/#374 (0.01s)
>     --- PASS: TestDefaultConfig/#376 (0.05s)
>     --- PASS: TestDefaultConfig/#369 (0.01s)
>     --- PASS: TestDefaultConfig/#372 (0.01s)
>     --- PASS: TestDefaultConfig/#375 (0.02s)
>     --- PASS: TestDefaultConfig/#371 (0.01s)
>     --- PASS: TestDefaultConfig/#367 (0.01s)
>     --- PASS: TestDefaultConfig/#373 (0.03s)
>     --- PASS: TestDefaultConfig/#368 (0.01s)
>     --- PASS: TestDefaultConfig/#366 (0.02s)
>     --- PASS: TestDefaultConfig/#362 (0.01s)
>     --- PASS: TestDefaultConfig/#363 (0.01s)
>     --- PASS: TestDefaultConfig/#364 (0.02s)
>     --- PASS: TestDefaultConfig/#365 (0.02s)
>     --- PASS: TestDefaultConfig/#359 (0.02s)
>     --- PASS: TestDefaultConfig/#360 (0.04s)
>     --- PASS: TestDefaultConfig/#357 (0.01s)
>     --- PASS: TestDefaultConfig/#361 (0.04s)
>     --- PASS: TestDefaultConfig/#358 (0.04s)
>     --- PASS: TestDefaultConfig/#354 (0.01s)
>     --- PASS: TestDefaultConfig/#355 (0.01s)
>     --- PASS: TestDefaultConfig/#352 (0.01s)
>     --- PASS: TestDefaultConfig/#351 (0.01s)
>     --- PASS: TestDefaultConfig/#350 (0.01s)
>     --- PASS: TestDefaultConfig/#353 (0.02s)
>     --- PASS: TestDefaultConfig/#348 (0.01s)
>     --- PASS: TestDefaultConfig/#347 (0.01s)
>     --- PASS: TestDefaultConfig/#346 (0.01s)
>     --- PASS: TestDefaultConfig/#356 (0.04s)
>     --- PASS: TestDefaultConfig/#349 (0.03s)
>     --- PASS: TestDefaultConfig/#342 (0.01s)
>     --- PASS: TestDefaultConfig/#343 (0.03s)
>     --- PASS: TestDefaultConfig/#345 (0.05s)
>     --- PASS: TestDefaultConfig/#341 (0.02s)
>     --- PASS: TestDefaultConfig/#339 (0.01s)
>     --- PASS: TestDefaultConfig/#338 (0.01s)
>     --- PASS: TestDefaultConfig/#340 (0.03s)
>     --- PASS: TestDefaultConfig/#337 (0.01s)
>     --- PASS: TestDefaultConfig/#335 (0.01s)
>     --- PASS: TestDefaultConfig/#336 (0.02s)
>     --- PASS: TestDefaultConfig/#344 (0.07s)
>     --- PASS: TestDefaultConfig/#334 (0.01s)
>     --- PASS: TestDefaultConfig/#332 (0.01s)
>     --- PASS: TestDefaultConfig/#330 (0.01s)
>     --- PASS: TestDefaultConfig/#328 (0.01s)
>     --- PASS: TestDefaultConfig/#331 (0.04s)
>     --- PASS: TestDefaultConfig/#333 (0.05s)
>     --- PASS: TestDefaultConfig/#329 (0.04s)
>     --- PASS: TestDefaultConfig/#327 (0.05s)
>     --- PASS: TestDefaultConfig/#323 (0.01s)
>     --- PASS: TestDefaultConfig/#325 (0.01s)
>     --- PASS: TestDefaultConfig/#326 (0.05s)
>     --- PASS: TestDefaultConfig/#322 (0.02s)
>     --- PASS: TestDefaultConfig/#321 (0.01s)
>     --- PASS: TestDefaultConfig/#324 (0.04s)
>     --- PASS: TestDefaultConfig/#320 (0.01s)
>     --- PASS: TestDefaultConfig/#317 (0.01s)
>     --- PASS: TestDefaultConfig/#315 (0.01s)
>     --- PASS: TestDefaultConfig/#318 (0.03s)
>     --- PASS: TestDefaultConfig/#319 (0.03s)
>     --- PASS: TestDefaultConfig/#316 (0.03s)
>     --- PASS: TestDefaultConfig/#312 (0.01s)
>     --- PASS: TestDefaultConfig/#313 (0.01s)
>     --- PASS: TestDefaultConfig/#314 (0.02s)
>     --- PASS: TestDefaultConfig/#285 (0.01s)
>     --- PASS: TestDefaultConfig/#310 (0.01s)
>     --- PASS: TestDefaultConfig/#311 (0.01s)
>     --- PASS: TestDefaultConfig/#309 (0.02s)
>     --- PASS: TestDefaultConfig/#308 (0.03s)
>     --- PASS: TestDefaultConfig/#180 (0.01s)
>     --- PASS: TestDefaultConfig/#307 (0.01s)
>     --- PASS: TestDefaultConfig/#305 (0.02s)
>     --- PASS: TestDefaultConfig/#303 (0.02s)
>     --- PASS: TestDefaultConfig/#306 (0.03s)
>     --- PASS: TestDefaultConfig/#304 (0.02s)
>     --- PASS: TestDefaultConfig/#302 (0.01s)
>     --- PASS: TestDefaultConfig/#299 (0.01s)
>     --- PASS: TestDefaultConfig/#298 (0.01s)
>     --- PASS: TestDefaultConfig/#300 (0.01s)
>     --- PASS: TestDefaultConfig/#297 (0.02s)
>     --- PASS: TestDefaultConfig/#296 (0.02s)
>     --- PASS: TestDefaultConfig/#301 (0.03s)
>     --- PASS: TestDefaultConfig/#294 (0.01s)
>     --- PASS: TestDefaultConfig/#292 (0.04s)
>     --- PASS: TestDefaultConfig/#295 (0.04s)
>     --- PASS: TestDefaultConfig/#291 (0.05s)
>     --- PASS: TestDefaultConfig/#293 (0.02s)
>     --- PASS: TestDefaultConfig/#290 (0.03s)
>     --- PASS: TestDefaultConfig/#288 (0.02s)
>     --- PASS: TestDefaultConfig/#289 (0.03s)
>     --- PASS: TestDefaultConfig/#286 (0.03s)
>     --- PASS: TestDefaultConfig/#284 (0.03s)
>     --- PASS: TestDefaultConfig/#287 (0.04s)
>     --- PASS: TestDefaultConfig/#283 (0.01s)
>     --- PASS: TestDefaultConfig/#280 (0.01s)
>     --- PASS: TestDefaultConfig/#282 (0.02s)
>     --- PASS: TestDefaultConfig/#281 (0.02s)
>     --- PASS: TestDefaultConfig/#278 (0.02s)
>     --- PASS: TestDefaultConfig/#277 (0.01s)
>     --- PASS: TestDefaultConfig/#276 (0.03s)
>     --- PASS: TestDefaultConfig/#279 (0.04s)
>     --- PASS: TestDefaultConfig/#275 (0.04s)
>     --- PASS: TestDefaultConfig/#272 (0.02s)
>     --- PASS: TestDefaultConfig/#273 (0.03s)
>     --- PASS: TestDefaultConfig/#269 (0.02s)
>     --- PASS: TestDefaultConfig/#271 (0.04s)
>     --- PASS: TestDefaultConfig/#270 (0.03s)
>     --- PASS: TestDefaultConfig/#266 (0.02s)
>     --- PASS: TestDefaultConfig/#268 (0.03s)
>     --- PASS: TestDefaultConfig/#274 (0.03s)
>     --- PASS: TestDefaultConfig/#267 (0.03s)
>     --- PASS: TestDefaultConfig/#265 (0.01s)
>     --- PASS: TestDefaultConfig/#264 (0.01s)
>     --- PASS: TestDefaultConfig/#261 (0.01s)
>     --- PASS: TestDefaultConfig/#262 (0.01s)
>     --- PASS: TestDefaultConfig/#263 (0.02s)
>     --- PASS: TestDefaultConfig/#257 (0.01s)
>     --- PASS: TestDefaultConfig/#258 (0.01s)
>     --- PASS: TestDefaultConfig/#256 (0.02s)
>     --- PASS: TestDefaultConfig/#259 (0.03s)
>     --- PASS: TestDefaultConfig/#255 (0.03s)
>     --- PASS: TestDefaultConfig/#254 (0.02s)
>     --- PASS: TestDefaultConfig/#260 (0.06s)
>     --- PASS: TestDefaultConfig/#252 (0.01s)
>     --- PASS: TestDefaultConfig/#253 (0.04s)
>     --- PASS: TestDefaultConfig/#237 (0.01s)
>     --- PASS: TestDefaultConfig/#251 (0.01s)
>     --- PASS: TestDefaultConfig/#249 (0.01s)
>     --- PASS: TestDefaultConfig/#250 (0.02s)
>     --- PASS: TestDefaultConfig/#246 (0.01s)
>     --- PASS: TestDefaultConfig/#248 (0.02s)
>     --- PASS: TestDefaultConfig/#244 (0.01s)
>     --- PASS: TestDefaultConfig/#245 (0.01s)
>     --- PASS: TestDefaultConfig/#247 (0.02s)
>     --- PASS: TestDefaultConfig/#243 (0.01s)
>     --- PASS: TestDefaultConfig/#242 (0.01s)
>     --- PASS: TestDefaultConfig/#240 (0.02s)
>     --- PASS: TestDefaultConfig/#239 (0.04s)
>     --- PASS: TestDefaultConfig/#238 (0.04s)
>     --- PASS: TestDefaultConfig/#235 (0.01s)
>     --- PASS: TestDefaultConfig/#236 (0.04s)
>     --- PASS: TestDefaultConfig/#234 (0.01s)
>     --- PASS: TestDefaultConfig/#241 (0.06s)
>     --- PASS: TestDefaultConfig/#233 (0.01s)
>     --- PASS: TestDefaultConfig/#232 (0.01s)
>     --- PASS: TestDefaultConfig/#231 (0.02s)
>     --- PASS: TestDefaultConfig/#229 (0.01s)
>     --- PASS: TestDefaultConfig/#227 (0.01s)
>     --- PASS: TestDefaultConfig/#228 (0.01s)
>     --- PASS: TestDefaultConfig/#225 (0.01s)
>     --- PASS: TestDefaultConfig/#226 (0.04s)
>     --- PASS: TestDefaultConfig/#230 (0.06s)
>     --- PASS: TestDefaultConfig/#222 (0.04s)
>     --- PASS: TestDefaultConfig/#223 (0.07s)
>     --- PASS: TestDefaultConfig/#221 (0.04s)
>     --- PASS: TestDefaultConfig/#224 (0.05s)
>     --- PASS: TestDefaultConfig/#218 (0.01s)
>     --- PASS: TestDefaultConfig/#220 (0.03s)
>     --- PASS: TestDefaultConfig/#217 (0.03s)
>     --- PASS: TestDefaultConfig/#219 (0.03s)
>     --- PASS: TestDefaultConfig/#215 (0.02s)
>     --- PASS: TestDefaultConfig/#214 (0.01s)
>     --- PASS: TestDefaultConfig/#216 (0.05s)
>     --- PASS: TestDefaultConfig/#212 (0.02s)
>     --- PASS: TestDefaultConfig/#211 (0.03s)
>     --- PASS: TestDefaultConfig/#213 (0.03s)
>     --- PASS: TestDefaultConfig/#209 (0.02s)
>     --- PASS: TestDefaultConfig/#208 (0.01s)
>     --- PASS: TestDefaultConfig/#205 (0.01s)
>     --- PASS: TestDefaultConfig/#210 (0.03s)
>     --- PASS: TestDefaultConfig/#206 (0.01s)
>     --- PASS: TestDefaultConfig/#207 (0.04s)
>     --- PASS: TestDefaultConfig/#202 (0.03s)
>     --- PASS: TestDefaultConfig/#204 (0.04s)
>     --- PASS: TestDefaultConfig/#203 (0.05s)
>     --- PASS: TestDefaultConfig/#200 (0.03s)
>     --- PASS: TestDefaultConfig/#201 (0.06s)
>     --- PASS: TestDefaultConfig/#199 (0.03s)
>     --- PASS: TestDefaultConfig/#198 (0.04s)
>     --- PASS: TestDefaultConfig/#196 (0.01s)
>     --- PASS: TestDefaultConfig/#194 (0.02s)
>     --- PASS: TestDefaultConfig/#195 (0.03s)
>     --- PASS: TestDefaultConfig/#193 (0.02s)
>     --- PASS: TestDefaultConfig/#197 (0.05s)
>     --- PASS: TestDefaultConfig/#191 (0.01s)
>     --- PASS: TestDefaultConfig/#187 (0.01s)
>     --- PASS: TestDefaultConfig/#190 (0.01s)
>     --- PASS: TestDefaultConfig/#192 (0.01s)
>     --- PASS: TestDefaultConfig/#188 (0.01s)
>     --- PASS: TestDefaultConfig/#189 (0.01s)
>     --- PASS: TestDefaultConfig/#186 (0.03s)
>     --- PASS: TestDefaultConfig/#184 (0.02s)
>     --- PASS: TestDefaultConfig/#185 (0.04s)
>     --- PASS: TestDefaultConfig/#183 (0.03s)
>     --- PASS: TestDefaultConfig/#181 (0.02s)
>     --- PASS: TestDefaultConfig/#179 (0.01s)
>     --- PASS: TestDefaultConfig/#182 (0.04s)
>     --- PASS: TestDefaultConfig/#178 (0.01s)
>     --- PASS: TestDefaultConfig/#51 (0.03s)
>     --- PASS: TestDefaultConfig/#177 (0.02s)
>     --- PASS: TestDefaultConfig/#91 (0.02s)
>     --- PASS: TestDefaultConfig/#176 (0.02s)
>     --- PASS: TestDefaultConfig/#173 (0.01s)
>     --- PASS: TestDefaultConfig/#174 (0.03s)
>     --- PASS: TestDefaultConfig/#172 (0.04s)
>     --- PASS: TestDefaultConfig/#171 (0.03s)
>     --- PASS: TestDefaultConfig/#170 (0.01s)
>     --- PASS: TestDefaultConfig/#175 (0.02s)
>     --- PASS: TestDefaultConfig/#167 (0.01s)
>     --- PASS: TestDefaultConfig/#166 (0.01s)
>     --- PASS: TestDefaultConfig/#168 (0.02s)
>     --- PASS: TestDefaultConfig/#165 (0.01s)
>     --- PASS: TestDefaultConfig/#163 (0.01s)
>     --- PASS: TestDefaultConfig/#162 (0.01s)
>     --- PASS: TestDefaultConfig/#164 (0.02s)
>     --- PASS: TestDefaultConfig/#169 (0.04s)
>     --- PASS: TestDefaultConfig/#158 (0.01s)
>     --- PASS: TestDefaultConfig/#157 (0.01s)
>     --- PASS: TestDefaultConfig/#161 (0.05s)
>     --- PASS: TestDefaultConfig/#159 (0.04s)
>     --- PASS: TestDefaultConfig/#160 (0.05s)
>     --- PASS: TestDefaultConfig/#155 (0.01s)
>     --- PASS: TestDefaultConfig/#156 (0.04s)
>     --- PASS: TestDefaultConfig/#153 (0.02s)
>     --- PASS: TestDefaultConfig/#152 (0.01s)
>     --- PASS: TestDefaultConfig/#154 (0.03s)
>     --- PASS: TestDefaultConfig/#151 (0.02s)
>     --- PASS: TestDefaultConfig/#150 (0.02s)
>     --- PASS: TestDefaultConfig/#149 (0.02s)
>     --- PASS: TestDefaultConfig/#147 (0.01s)
>     --- PASS: TestDefaultConfig/#146 (0.01s)
>     --- PASS: TestDefaultConfig/#144 (0.01s)
>     --- PASS: TestDefaultConfig/#142 (0.01s)
>     --- PASS: TestDefaultConfig/#143 (0.01s)
>     --- PASS: TestDefaultConfig/#145 (0.03s)
>     --- PASS: TestDefaultConfig/#140 (0.01s)
>     --- PASS: TestDefaultConfig/#139 (0.01s)
>     --- PASS: TestDefaultConfig/#148 (0.05s)
>     --- PASS: TestDefaultConfig/#141 (0.03s)
>     --- PASS: TestDefaultConfig/#138 (0.03s)
>     --- PASS: TestDefaultConfig/#136 (0.03s)
>     --- PASS: TestDefaultConfig/#134 (0.02s)
>     --- PASS: TestDefaultConfig/#133 (0.01s)
>     --- PASS: TestDefaultConfig/#135 (0.04s)
>     --- PASS: TestDefaultConfig/#131 (0.01s)
>     --- PASS: TestDefaultConfig/#137 (0.03s)
>     --- PASS: TestDefaultConfig/#130 (0.01s)
>     --- PASS: TestDefaultConfig/#132 (0.03s)
>     --- PASS: TestDefaultConfig/#128 (0.02s)
>     --- PASS: TestDefaultConfig/#126 (0.01s)
>     --- PASS: TestDefaultConfig/#129 (0.03s)
>     --- PASS: TestDefaultConfig/#127 (0.02s)
>     --- PASS: TestDefaultConfig/#123 (0.01s)
>     --- PASS: TestDefaultConfig/#122 (0.01s)
>     --- PASS: TestDefaultConfig/#124 (0.01s)
>     --- PASS: TestDefaultConfig/#121 (0.01s)
>     --- PASS: TestDefaultConfig/#125 (0.02s)
>     --- PASS: TestDefaultConfig/#120 (0.01s)
>     --- PASS: TestDefaultConfig/#119 (0.01s)
>     --- PASS: TestDefaultConfig/#117 (0.01s)
>     --- PASS: TestDefaultConfig/#115 (0.00s)
>     --- PASS: TestDefaultConfig/#118 (0.03s)
>     --- PASS: TestDefaultConfig/#114 (0.03s)
>     --- PASS: TestDefaultConfig/#112 (0.01s)
>     --- PASS: TestDefaultConfig/#116 (0.04s)
>     --- PASS: TestDefaultConfig/#111 (0.01s)
>     --- PASS: TestDefaultConfig/#113 (0.04s)
>     --- PASS: TestDefaultConfig/#110 (0.02s)
>     --- PASS: TestDefaultConfig/#109 (0.02s)
>     --- PASS: TestDefaultConfig/#108 (0.01s)
>     --- PASS: TestDefaultConfig/#106 (0.01s)
>     --- PASS: TestDefaultConfig/#107 (0.02s)
>     --- PASS: TestDefaultConfig/#103 (0.01s)
>     --- PASS: TestDefaultConfig/#104 (0.01s)
>     --- PASS: TestDefaultConfig/#101 (0.01s)
>     --- PASS: TestDefaultConfig/#102 (0.01s)
>     --- PASS: TestDefaultConfig/#100 (0.01s)
>     --- PASS: TestDefaultConfig/#98 (0.01s)
>     --- PASS: TestDefaultConfig/#97 (0.01s)
>     --- PASS: TestDefaultConfig/#95 (0.01s)
>     --- PASS: TestDefaultConfig/#105 (0.04s)
>     --- PASS: TestDefaultConfig/#99 (0.03s)
>     --- PASS: TestDefaultConfig/#93 (0.01s)
>     --- PASS: TestDefaultConfig/#92 (0.01s)
>     --- PASS: TestDefaultConfig/#96 (0.03s)
>     --- PASS: TestDefaultConfig/#94 (0.02s)
>     --- PASS: TestDefaultConfig/#45 (0.02s)
>     --- PASS: TestDefaultConfig/#88 (0.01s)
>     --- PASS: TestDefaultConfig/#87 (0.01s)
>     --- PASS: TestDefaultConfig/#89 (0.02s)
>     --- PASS: TestDefaultConfig/#90 (0.02s)
>     --- PASS: TestDefaultConfig/#86 (0.01s)
>     --- PASS: TestDefaultConfig/#82 (0.01s)
>     --- PASS: TestDefaultConfig/#84 (0.02s)
>     --- PASS: TestDefaultConfig/#85 (0.02s)
>     --- PASS: TestDefaultConfig/#81 (0.01s)
>     --- PASS: TestDefaultConfig/#83 (0.02s)
>     --- PASS: TestDefaultConfig/#79 (0.01s)
>     --- PASS: TestDefaultConfig/#80 (0.01s)
>     --- PASS: TestDefaultConfig/#78 (0.01s)
>     --- PASS: TestDefaultConfig/#77 (0.02s)
>     --- PASS: TestDefaultConfig/#76 (0.02s)
>     --- PASS: TestDefaultConfig/#75 (0.03s)
>     --- PASS: TestDefaultConfig/#74 (0.02s)
>     --- PASS: TestDefaultConfig/#70 (0.01s)
>     --- PASS: TestDefaultConfig/#72 (0.02s)
>     --- PASS: TestDefaultConfig/#73 (0.04s)
>     --- PASS: TestDefaultConfig/#71 (0.02s)
>     --- PASS: TestDefaultConfig/#69 (0.01s)
>     --- PASS: TestDefaultConfig/#68 (0.01s)
>     --- PASS: TestDefaultConfig/#66 (0.01s)
>     --- PASS: TestDefaultConfig/#64 (0.01s)
>     --- PASS: TestDefaultConfig/#65 (0.01s)
>     --- PASS: TestDefaultConfig/#63 (0.01s)
>     --- PASS: TestDefaultConfig/#67 (0.02s)
>     --- PASS: TestDefaultConfig/#62 (0.01s)
>     --- PASS: TestDefaultConfig/#61 (0.01s)
>     --- PASS: TestDefaultConfig/#58 (0.01s)
>     --- PASS: TestDefaultConfig/#57 (0.01s)
>     --- PASS: TestDefaultConfig/#59 (0.02s)
>     --- PASS: TestDefaultConfig/#55 (0.01s)
>     --- PASS: TestDefaultConfig/#54 (0.01s)
>     --- PASS: TestDefaultConfig/#56 (0.03s)
>     --- PASS: TestDefaultConfig/#53 (0.02s)
>     --- PASS: TestDefaultConfig/#52 (0.01s)
>     --- PASS: TestDefaultConfig/#50 (0.01s)
>     --- PASS: TestDefaultConfig/#49 (0.01s)
>     --- PASS: TestDefaultConfig/#60 (0.05s)
>     --- PASS: TestDefaultConfig/#47 (0.01s)
>     --- PASS: TestDefaultConfig/#46 (0.01s)
>     --- PASS: TestDefaultConfig/#23 (0.01s)
>     --- PASS: TestDefaultConfig/#48 (0.02s)
>     --- PASS: TestDefaultConfig/#44 (0.01s)
>     --- PASS: TestDefaultConfig/#43 (0.01s)
>     --- PASS: TestDefaultConfig/#40 (0.01s)
>     --- PASS: TestDefaultConfig/#41 (0.01s)
>     --- PASS: TestDefaultConfig/#38 (0.01s)
>     --- PASS: TestDefaultConfig/#39 (0.02s)
>     --- PASS: TestDefaultConfig/#42 (0.03s)
>     --- PASS: TestDefaultConfig/#36 (0.02s)
>     --- PASS: TestDefaultConfig/#37 (0.03s)
>     --- PASS: TestDefaultConfig/#35 (0.02s)
>     --- PASS: TestDefaultConfig/#32 (0.01s)
>     --- PASS: TestDefaultConfig/#33 (0.02s)
>     --- PASS: TestDefaultConfig/#31 (0.02s)
>     --- PASS: TestDefaultConfig/#30 (0.01s)
>     --- PASS: TestDefaultConfig/#27 (0.01s)
>     --- PASS: TestDefaultConfig/#34 (0.06s)
>     --- PASS: TestDefaultConfig/#28 (0.02s)
>     --- PASS: TestDefaultConfig/#29 (0.02s)
>     --- PASS: TestDefaultConfig/#26 (0.01s)
>     --- PASS: TestDefaultConfig/#24 (0.02s)
>     --- PASS: TestDefaultConfig/#12 (0.01s)
>     --- PASS: TestDefaultConfig/#25 (0.03s)
>     --- PASS: TestDefaultConfig/#22 (0.03s)
>     --- PASS: TestDefaultConfig/#19 (0.01s)
>     --- PASS: TestDefaultConfig/#21 (0.03s)
>     --- PASS: TestDefaultConfig/#20 (0.02s)
>     --- PASS: TestDefaultConfig/#15 (0.01s)
>     --- PASS: TestDefaultConfig/#16 (0.02s)
>     --- PASS: TestDefaultConfig/#18 (0.03s)
>     --- PASS: TestDefaultConfig/#17 (0.03s)
>     --- PASS: TestDefaultConfig/#14 (0.03s)
>     --- PASS: TestDefaultConfig/#13 (0.03s)
>     --- PASS: TestDefaultConfig/#11 (0.01s)
>     --- PASS: TestDefaultConfig/#06 (0.02s)
>     --- PASS: TestDefaultConfig/#08 (0.02s)
>     --- PASS: TestDefaultConfig/#09 (0.03s)
>     --- PASS: TestDefaultConfig/#10 (0.02s)
>     --- PASS: TestDefaultConfig/#05 (0.01s)
>     --- PASS: TestDefaultConfig/#07 (0.02s)
>     --- PASS: TestDefaultConfig/#02 (0.01s)
>     --- PASS: TestDefaultConfig/#03 (0.02s)
>     --- PASS: TestDefaultConfig/#04 (0.03s)
>     --- PASS: TestDefaultConfig/#436 (0.01s)
>     --- PASS: TestDefaultConfig/#01 (0.03s)
>     --- PASS: TestDefaultConfig/#00 (0.02s)
> === RUN   TestTxnEndpoint_Bad_JSON
> === PAUSE TestTxnEndpoint_Bad_JSON
> === RUN   TestTxnEndpoint_Bad_Size_Item
> === PAUSE TestTxnEndpoint_Bad_Size_Item
> === RUN   TestTxnEndpoint_Bad_Size_Net
> === PAUSE TestTxnEndpoint_Bad_Size_Net
> === RUN   TestTxnEndpoint_Bad_Size_Ops
> === PAUSE TestTxnEndpoint_Bad_Size_Ops
> === RUN   TestTxnEndpoint_KV_Actions
> === PAUSE TestTxnEndpoint_KV_Actions
> === RUN   TestTxnEndpoint_UpdateCheck
> === PAUSE TestTxnEndpoint_UpdateCheck
> === RUN   TestConvertOps_ContentLength
> === RUN   TestConvertOps_ContentLength/contentLength:_
> === RUN   TestConvertOps_ContentLength/contentLength:_143
> === RUN   TestConvertOps_ContentLength/contentLength:_524288
> === RUN   TestConvertOps_ContentLength/contentLength:_524388
> --- PASS: TestConvertOps_ContentLength (0.27s)
>     writer.go:29: 2020-02-23T02:46:31.293Z [WARN]  TestConvertOps_ContentLength: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:31.293Z [DEBUG] TestConvertOps_ContentLength.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:31.294Z [DEBUG] TestConvertOps_ContentLength.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:31.358Z [INFO]  TestConvertOps_ContentLength.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:84c3c349-e89a-7b07-0e9f-12bc763d3ee6 Address:127.0.0.1:16942}]"
>     writer.go:29: 2020-02-23T02:46:31.358Z [INFO]  TestConvertOps_ContentLength.server.raft: entering follower state: follower="Node at 127.0.0.1:16942 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:31.358Z [INFO]  TestConvertOps_ContentLength.server.serf.wan: serf: EventMemberJoin: Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.359Z [INFO]  TestConvertOps_ContentLength.server.serf.lan: serf: EventMemberJoin: Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.359Z [INFO]  TestConvertOps_ContentLength: Started DNS server: address=127.0.0.1:16937 network=udp
>     writer.go:29: 2020-02-23T02:46:31.359Z [INFO]  TestConvertOps_ContentLength.server: Handled event for server in area: event=member-join server=Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:31.359Z [INFO]  TestConvertOps_ContentLength.server: Adding LAN server: server="Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6 (Addr: tcp/127.0.0.1:16942) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:31.359Z [INFO]  TestConvertOps_ContentLength: Started DNS server: address=127.0.0.1:16937 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.360Z [INFO]  TestConvertOps_ContentLength: Started HTTP server: address=127.0.0.1:16938 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.360Z [INFO]  TestConvertOps_ContentLength: started state syncer
>     writer.go:29: 2020-02-23T02:46:31.415Z [WARN]  TestConvertOps_ContentLength.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:31.415Z [INFO]  TestConvertOps_ContentLength.server.raft: entering candidate state: node="Node at 127.0.0.1:16942 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:31.419Z [DEBUG] TestConvertOps_ContentLength.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:31.419Z [DEBUG] TestConvertOps_ContentLength.server.raft: vote granted: from=84c3c349-e89a-7b07-0e9f-12bc763d3ee6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:31.419Z [INFO]  TestConvertOps_ContentLength.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:31.419Z [INFO]  TestConvertOps_ContentLength.server.raft: entering leader state: leader="Node at 127.0.0.1:16942 [Leader]"
>     writer.go:29: 2020-02-23T02:46:31.419Z [INFO]  TestConvertOps_ContentLength.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:31.419Z [INFO]  TestConvertOps_ContentLength.server: New leader elected: payload=Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6
>     writer.go:29: 2020-02-23T02:46:31.426Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:31.431Z [INFO]  TestConvertOps_ContentLength: Synced node info
>     writer.go:29: 2020-02-23T02:46:31.431Z [DEBUG] TestConvertOps_ContentLength: Node info in sync
>     writer.go:29: 2020-02-23T02:46:31.435Z [INFO]  TestConvertOps_ContentLength.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:31.435Z [INFO]  TestConvertOps_ContentLength.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:31.435Z [DEBUG] TestConvertOps_ContentLength.server: Skipping self join check for node since the cluster is too small: node=Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6
>     writer.go:29: 2020-02-23T02:46:31.435Z [INFO]  TestConvertOps_ContentLength.server: member joined, marking health alive: member=Node-84c3c349-e89a-7b07-0e9f-12bc763d3ee6
>     --- PASS: TestConvertOps_ContentLength/contentLength:_ (0.00s)
>     --- PASS: TestConvertOps_ContentLength/contentLength:_143 (0.00s)
>     --- PASS: TestConvertOps_ContentLength/contentLength:_524288 (0.00s)
>     --- PASS: TestConvertOps_ContentLength/contentLength:_524388 (0.00s)
>     writer.go:29: 2020-02-23T02:46:31.551Z [INFO]  TestConvertOps_ContentLength: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:31.551Z [INFO]  TestConvertOps_ContentLength.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:31.551Z [DEBUG] TestConvertOps_ContentLength.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:31.551Z [WARN]  TestConvertOps_ContentLength.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:31.551Z [DEBUG] TestConvertOps_ContentLength.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:31.553Z [WARN]  TestConvertOps_ContentLength.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:31.554Z [INFO]  TestConvertOps_ContentLength.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:31.554Z [INFO]  TestConvertOps_ContentLength: consul server down
>     writer.go:29: 2020-02-23T02:46:31.554Z [INFO]  TestConvertOps_ContentLength: shutdown complete
>     writer.go:29: 2020-02-23T02:46:31.554Z [INFO]  TestConvertOps_ContentLength: Stopping server: protocol=DNS address=127.0.0.1:16937 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.554Z [INFO]  TestConvertOps_ContentLength: Stopping server: protocol=DNS address=127.0.0.1:16937 network=udp
>     writer.go:29: 2020-02-23T02:46:31.555Z [INFO]  TestConvertOps_ContentLength: Stopping server: protocol=HTTP address=127.0.0.1:16938 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.555Z [INFO]  TestConvertOps_ContentLength: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:31.555Z [INFO]  TestConvertOps_ContentLength: Endpoints down
> === RUN   TestUiIndex
> === PAUSE TestUiIndex
> === RUN   TestUiNodes
> === PAUSE TestUiNodes
> === RUN   TestUiNodes_Filter
> === PAUSE TestUiNodes_Filter
> === RUN   TestUiNodeInfo
> === PAUSE TestUiNodeInfo
> === RUN   TestUiServices
> === PAUSE TestUiServices
> === RUN   TestValidateUserEventParams
> === PAUSE TestValidateUserEventParams
> === RUN   TestShouldProcessUserEvent
> === PAUSE TestShouldProcessUserEvent
> === RUN   TestIngestUserEvent
> === PAUSE TestIngestUserEvent
> === RUN   TestFireReceiveEvent
> === PAUSE TestFireReceiveEvent
> === RUN   TestUserEventToken
> === PAUSE TestUserEventToken
> === RUN   TestStringHash
> === PAUSE TestStringHash
> === RUN   TestSetFilePermissions
> === PAUSE TestSetFilePermissions
> === RUN   TestDurationFixer
> --- PASS: TestDurationFixer (0.00s)
> === RUN   TestHelperProcess
> --- PASS: TestHelperProcess (0.00s)
> === RUN   TestForwardSignals
> === RUN   TestForwardSignals/signal-interrupt
> === RUN   TestForwardSignals/signal-terminated
> --- PASS: TestForwardSignals (0.23s)
>     --- PASS: TestForwardSignals/signal-interrupt (0.14s)
>     --- PASS: TestForwardSignals/signal-terminated (0.10s)
> === RUN   TestMakeWatchHandler
> === PAUSE TestMakeWatchHandler
> === RUN   TestMakeHTTPWatchHandler
> === PAUSE TestMakeHTTPWatchHandler
> === CONT  TestACL_Legacy_Disabled_Response
> === CONT  TestMakeHTTPWatchHandler
> === CONT  TestOperator_KeyringInstall
> === CONT  TestSetMeta
> --- PASS: TestSetMeta (0.00s)
> === CONT  TestSetLastContact
> === RUN   TestSetLastContact/neg
> === RUN   TestSetLastContact/zero
> === RUN   TestSetLastContact/pos
> === RUN   TestSetLastContact/pos_ms_only
> --- PASS: TestSetLastContact (0.00s)
>     --- PASS: TestSetLastContact/neg (0.00s)
>     --- PASS: TestSetLastContact/zero (0.00s)
>     --- PASS: TestSetLastContact/pos (0.00s)
>     --- PASS: TestSetLastContact/pos_ms_only (0.00s)
> === CONT  TestSetKnownLeader
> --- PASS: TestSetKnownLeader (0.00s)
> === CONT  TestSetIndex
> --- PASS: TestSetIndex (0.00s)
> === CONT  TestHTTPServer_UnixSocket_FileExists
> --- PASS: TestMakeHTTPWatchHandler (0.00s)
>     writer.go:29: 2020-02-23T02:46:31.795Z [TRACE] TestMakeHTTPWatchHandler: http watch handler output: watch=http://127.0.0.1:45017 output="Ok, i see"
> === CONT  TestMakeWatchHandler
> --- PASS: TestMakeWatchHandler (0.02s)
>     writer.go:29: 2020-02-23T02:46:31.813Z [DEBUG] TestMakeWatchHandler: watch handler output: watch_handler="bash -c 'echo $CONSUL_INDEX >> handler_index_out && cat >> handler_out'" output=
> === CONT  TestSetFilePermissions
> --- PASS: TestSetFilePermissions (0.01s)
> === CONT  TestStringHash
> --- PASS: TestStringHash (0.00s)
> === CONT  TestUserEventToken
> === RUN   TestACL_Legacy_Disabled_Response/0
> === RUN   TestACL_Legacy_Disabled_Response/1
> === RUN   TestACL_Legacy_Disabled_Response/2
> === RUN   TestACL_Legacy_Disabled_Response/3
> === RUN   TestACL_Legacy_Disabled_Response/4
> === RUN   TestACL_Legacy_Disabled_Response/5
> --- PASS: TestACL_Legacy_Disabled_Response (0.26s)
>     writer.go:29: 2020-02-23T02:46:31.794Z [WARN]  TestACL_Legacy_Disabled_Response: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:31.794Z [DEBUG] TestACL_Legacy_Disabled_Response.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:31.795Z [DEBUG] TestACL_Legacy_Disabled_Response.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:31.864Z [INFO]  TestACL_Legacy_Disabled_Response.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc8d04d3-adfc-d86a-ecb7-43981c80aa6d Address:127.0.0.1:16948}]"
>     writer.go:29: 2020-02-23T02:46:31.864Z [INFO]  TestACL_Legacy_Disabled_Response.server.serf.wan: serf: EventMemberJoin: Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.864Z [INFO]  TestACL_Legacy_Disabled_Response.server.raft: entering follower state: follower="Node at 127.0.0.1:16948 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:31.865Z [INFO]  TestACL_Legacy_Disabled_Response.server.serf.lan: serf: EventMemberJoin: Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.866Z [INFO]  TestACL_Legacy_Disabled_Response: Started DNS server: address=127.0.0.1:16943 network=udp
>     writer.go:29: 2020-02-23T02:46:31.866Z [INFO]  TestACL_Legacy_Disabled_Response.server: Adding LAN server: server="Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d (Addr: tcp/127.0.0.1:16948) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:31.866Z [INFO]  TestACL_Legacy_Disabled_Response.server: Handled event for server in area: event=member-join server=Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:31.866Z [INFO]  TestACL_Legacy_Disabled_Response: Started DNS server: address=127.0.0.1:16943 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.866Z [INFO]  TestACL_Legacy_Disabled_Response: Started HTTP server: address=127.0.0.1:16944 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.866Z [INFO]  TestACL_Legacy_Disabled_Response: started state syncer
>     writer.go:29: 2020-02-23T02:46:31.908Z [WARN]  TestACL_Legacy_Disabled_Response.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:31.908Z [INFO]  TestACL_Legacy_Disabled_Response.server.raft: entering candidate state: node="Node at 127.0.0.1:16948 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:31.916Z [DEBUG] TestACL_Legacy_Disabled_Response.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:31.916Z [DEBUG] TestACL_Legacy_Disabled_Response.server.raft: vote granted: from=bc8d04d3-adfc-d86a-ecb7-43981c80aa6d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:31.916Z [INFO]  TestACL_Legacy_Disabled_Response.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:31.916Z [INFO]  TestACL_Legacy_Disabled_Response.server.raft: entering leader state: leader="Node at 127.0.0.1:16948 [Leader]"
>     writer.go:29: 2020-02-23T02:46:31.917Z [INFO]  TestACL_Legacy_Disabled_Response.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:31.917Z [INFO]  TestACL_Legacy_Disabled_Response.server: New leader elected: payload=Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d
>     writer.go:29: 2020-02-23T02:46:31.938Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:31.975Z [INFO]  TestACL_Legacy_Disabled_Response.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:31.975Z [INFO]  TestACL_Legacy_Disabled_Response.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:31.975Z [DEBUG] TestACL_Legacy_Disabled_Response.server: Skipping self join check for node since the cluster is too small: node=Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d
>     writer.go:29: 2020-02-23T02:46:31.975Z [INFO]  TestACL_Legacy_Disabled_Response.server: member joined, marking health alive: member=Node-bc8d04d3-adfc-d86a-ecb7-43981c80aa6d
>     --- PASS: TestACL_Legacy_Disabled_Response/0 (0.00s)
>     --- PASS: TestACL_Legacy_Disabled_Response/1 (0.00s)
>     --- PASS: TestACL_Legacy_Disabled_Response/2 (0.00s)
>     --- PASS: TestACL_Legacy_Disabled_Response/3 (0.00s)
>     --- PASS: TestACL_Legacy_Disabled_Response/4 (0.00s)
>     --- PASS: TestACL_Legacy_Disabled_Response/5 (0.00s)
>     writer.go:29: 2020-02-23T02:46:32.038Z [INFO]  TestACL_Legacy_Disabled_Response: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.038Z [INFO]  TestACL_Legacy_Disabled_Response.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.038Z [DEBUG] TestACL_Legacy_Disabled_Response.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.038Z [WARN]  TestACL_Legacy_Disabled_Response.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.038Z [ERROR] TestACL_Legacy_Disabled_Response.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.038Z [DEBUG] TestACL_Legacy_Disabled_Response.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.042Z [WARN]  TestACL_Legacy_Disabled_Response.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: consul server down
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: Stopping server: protocol=DNS address=127.0.0.1:16943 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: Stopping server: protocol=DNS address=127.0.0.1:16943 network=udp
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: Stopping server: protocol=HTTP address=127.0.0.1:16944 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.044Z [INFO]  TestACL_Legacy_Disabled_Response: Endpoints down
> === CONT  TestFireReceiveEvent
> --- PASS: TestOperator_KeyringInstall (0.27s)
>     writer.go:29: 2020-02-23T02:46:31.802Z [WARN]  TestOperator_KeyringInstall: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:31.822Z [DEBUG] TestOperator_KeyringInstall.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:31.822Z [DEBUG] TestOperator_KeyringInstall.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:31.867Z [INFO]  TestOperator_KeyringInstall.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4d08a652-d324-58df-5b02-d6fb09a60922 Address:127.0.0.1:16954}]"
>     writer.go:29: 2020-02-23T02:46:31.867Z [INFO]  TestOperator_KeyringInstall.server.serf.wan: serf: EventMemberJoin: Node-4d08a652-d324-58df-5b02-d6fb09a60922.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestOperator_KeyringInstall.server.serf.lan: serf: EventMemberJoin: Node-4d08a652-d324-58df-5b02-d6fb09a60922 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestOperator_KeyringInstall: Started DNS server: address=127.0.0.1:16949 network=udp
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestOperator_KeyringInstall.server.raft: entering follower state: follower="Node at 127.0.0.1:16954 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestOperator_KeyringInstall.server: Adding LAN server: server="Node-4d08a652-d324-58df-5b02-d6fb09a60922 (Addr: tcp/127.0.0.1:16954) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestOperator_KeyringInstall.server: Handled event for server in area: event=member-join server=Node-4d08a652-d324-58df-5b02-d6fb09a60922.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestOperator_KeyringInstall: Started DNS server: address=127.0.0.1:16949 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.869Z [INFO]  TestOperator_KeyringInstall: Started HTTP server: address=127.0.0.1:16950 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.869Z [INFO]  TestOperator_KeyringInstall: started state syncer
>     writer.go:29: 2020-02-23T02:46:31.906Z [WARN]  TestOperator_KeyringInstall.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:31.906Z [INFO]  TestOperator_KeyringInstall.server.raft: entering candidate state: node="Node at 127.0.0.1:16954 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:31.910Z [DEBUG] TestOperator_KeyringInstall.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:31.910Z [DEBUG] TestOperator_KeyringInstall.server.raft: vote granted: from=4d08a652-d324-58df-5b02-d6fb09a60922 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:31.910Z [INFO]  TestOperator_KeyringInstall.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:31.910Z [INFO]  TestOperator_KeyringInstall.server.raft: entering leader state: leader="Node at 127.0.0.1:16954 [Leader]"
>     writer.go:29: 2020-02-23T02:46:31.910Z [INFO]  TestOperator_KeyringInstall.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:31.910Z [INFO]  TestOperator_KeyringInstall.server: New leader elected: payload=Node-4d08a652-d324-58df-5b02-d6fb09a60922
>     writer.go:29: 2020-02-23T02:46:31.938Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:31.994Z [INFO]  TestOperator_KeyringInstall.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:31.994Z [INFO]  TestOperator_KeyringInstall.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:31.994Z [DEBUG] TestOperator_KeyringInstall.server: Skipping self join check for node since the cluster is too small: node=Node-4d08a652-d324-58df-5b02-d6fb09a60922
>     writer.go:29: 2020-02-23T02:46:31.994Z [INFO]  TestOperator_KeyringInstall.server: member joined, marking health alive: member=Node-4d08a652-d324-58df-5b02-d6fb09a60922
>     writer.go:29: 2020-02-23T02:46:32.060Z [INFO]  TestOperator_KeyringInstall.server.serf.wan: serf: Received install-key query
>     writer.go:29: 2020-02-23T02:46:32.060Z [DEBUG] TestOperator_KeyringInstall.server.serf.wan: serf: messageQueryResponseType: Node-4d08a652-d324-58df-5b02-d6fb09a60922.dc1
>     writer.go:29: 2020-02-23T02:46:32.060Z [DEBUG] TestOperator_KeyringInstall.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.061Z [INFO]  TestOperator_KeyringInstall.server.serf.lan: serf: Received install-key query
>     writer.go:29: 2020-02-23T02:46:32.061Z [DEBUG] TestOperator_KeyringInstall.server.serf.lan: serf: messageQueryResponseType: Node-4d08a652-d324-58df-5b02-d6fb09a60922
>     writer.go:29: 2020-02-23T02:46:32.061Z [INFO]  TestOperator_KeyringInstall.server.serf.wan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:32.062Z [DEBUG] TestOperator_KeyringInstall.server.serf.wan: serf: messageQueryResponseType: Node-4d08a652-d324-58df-5b02-d6fb09a60922.dc1
>     writer.go:29: 2020-02-23T02:46:32.062Z [INFO]  TestOperator_KeyringInstall.server.serf.lan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:32.062Z [DEBUG] TestOperator_KeyringInstall.server.serf.lan: serf: messageQueryResponseType: Node-4d08a652-d324-58df-5b02-d6fb09a60922
>     writer.go:29: 2020-02-23T02:46:32.062Z [INFO]  TestOperator_KeyringInstall: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.062Z [INFO]  TestOperator_KeyringInstall.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.062Z [DEBUG] TestOperator_KeyringInstall.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.062Z [WARN]  TestOperator_KeyringInstall.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.062Z [ERROR] TestOperator_KeyringInstall.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.062Z [DEBUG] TestOperator_KeyringInstall.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.065Z [WARN]  TestOperator_KeyringInstall.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: consul server down
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: Stopping server: protocol=DNS address=127.0.0.1:16949 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: Stopping server: protocol=DNS address=127.0.0.1:16949 network=udp
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: Stopping server: protocol=HTTP address=127.0.0.1:16950 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.067Z [INFO]  TestOperator_KeyringInstall: Endpoints down
> === CONT  TestDNS_Compression_trimUDPResponse
> --- PASS: TestDNS_Compression_trimUDPResponse (0.01s)
> === CONT  TestIngestUserEvent
> --- PASS: TestIngestUserEvent (0.08s)
>     writer.go:29: 2020-02-23T02:46:32.079Z [WARN]  TestIngestUserEvent: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.079Z [DEBUG] TestIngestUserEvent.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.080Z [DEBUG] TestIngestUserEvent.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.089Z [INFO]  TestIngestUserEvent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:88a4f662-5d69-56d8-6364-a889db243d89 Address:127.0.0.1:16978}]"
>     writer.go:29: 2020-02-23T02:46:32.089Z [INFO]  TestIngestUserEvent.server.raft: entering follower state: follower="Node at 127.0.0.1:16978 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.090Z [INFO]  TestIngestUserEvent.server.serf.wan: serf: EventMemberJoin: Node-88a4f662-5d69-56d8-6364-a889db243d89.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.090Z [INFO]  TestIngestUserEvent.server.serf.lan: serf: EventMemberJoin: Node-88a4f662-5d69-56d8-6364-a889db243d89 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.090Z [INFO]  TestIngestUserEvent: Started DNS server: address=127.0.0.1:16973 network=udp
>     writer.go:29: 2020-02-23T02:46:32.090Z [INFO]  TestIngestUserEvent.server: Adding LAN server: server="Node-88a4f662-5d69-56d8-6364-a889db243d89 (Addr: tcp/127.0.0.1:16978) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.090Z [INFO]  TestIngestUserEvent.server: Handled event for server in area: event=member-join server=Node-88a4f662-5d69-56d8-6364-a889db243d89.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.090Z [INFO]  TestIngestUserEvent: Started DNS server: address=127.0.0.1:16973 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.091Z [INFO]  TestIngestUserEvent: Started HTTP server: address=127.0.0.1:16974 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.091Z [INFO]  TestIngestUserEvent: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.133Z [WARN]  TestIngestUserEvent.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.133Z [INFO]  TestIngestUserEvent.server.raft: entering candidate state: node="Node at 127.0.0.1:16978 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.136Z [DEBUG] TestIngestUserEvent.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.136Z [DEBUG] TestIngestUserEvent.server.raft: vote granted: from=88a4f662-5d69-56d8-6364-a889db243d89 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestIngestUserEvent.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestIngestUserEvent.server.raft: entering leader state: leader="Node at 127.0.0.1:16978 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestIngestUserEvent.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestIngestUserEvent.server: New leader elected: payload=Node-88a4f662-5d69-56d8-6364-a889db243d89
>     writer.go:29: 2020-02-23T02:46:32.140Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     [... 69 further identical "new event: event_name=test event_id=" DEBUG lines elided ...]
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.141Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.142Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.143Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
>     writer.go:29: 2020-02-23T02:46:32.144Z [DEBUG] TestIngestUserEvent: new event: event_name=test event_id=
> [...]
>     writer.go:29: 2020-02-23T02:46:32.146Z [INFO]  TestIngestUserEvent: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.146Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.146Z [INFO]  TestIngestUserEvent.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.146Z [WARN]  TestIngestUserEvent.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.146Z [ERROR] TestIngestUserEvent.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.148Z [WARN]  TestIngestUserEvent.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.151Z [INFO]  TestIngestUserEvent.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.152Z [ERROR] TestIngestUserEvent.server: failed to establish leadership: error="error generating CA root certificate: error computing next serial number: leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: consul server down
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: Stopping server: protocol=DNS address=127.0.0.1:16973 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: Stopping server: protocol=DNS address=127.0.0.1:16973 network=udp
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: Stopping server: protocol=HTTP address=127.0.0.1:16974 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.152Z [INFO]  TestIngestUserEvent: Endpoints down
> === CONT  TestShouldProcessUserEvent
> --- PASS: TestFireReceiveEvent (0.22s)
>     writer.go:29: 2020-02-23T02:46:32.052Z [WARN]  TestFireReceiveEvent: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.052Z [DEBUG] TestFireReceiveEvent.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.052Z [DEBUG] TestFireReceiveEvent.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.064Z [INFO]  TestFireReceiveEvent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:16d17e00-88db-d700-2558-d367fed16c67 Address:127.0.0.1:16972}]"
>     writer.go:29: 2020-02-23T02:46:32.064Z [INFO]  TestFireReceiveEvent.server.raft: entering follower state: follower="Node at 127.0.0.1:16972 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.065Z [INFO]  TestFireReceiveEvent.server.serf.wan: serf: EventMemberJoin: Node-16d17e00-88db-d700-2558-d367fed16c67.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.065Z [INFO]  TestFireReceiveEvent.server.serf.lan: serf: EventMemberJoin: Node-16d17e00-88db-d700-2558-d367fed16c67 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.065Z [INFO]  TestFireReceiveEvent.server: Adding LAN server: server="Node-16d17e00-88db-d700-2558-d367fed16c67 (Addr: tcp/127.0.0.1:16972) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.065Z [INFO]  TestFireReceiveEvent: Started DNS server: address=127.0.0.1:16967 network=udp
>     writer.go:29: 2020-02-23T02:46:32.065Z [INFO]  TestFireReceiveEvent.server: Handled event for server in area: event=member-join server=Node-16d17e00-88db-d700-2558-d367fed16c67.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.065Z [INFO]  TestFireReceiveEvent: Started DNS server: address=127.0.0.1:16967 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.066Z [INFO]  TestFireReceiveEvent: Started HTTP server: address=127.0.0.1:16968 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.066Z [INFO]  TestFireReceiveEvent: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.132Z [WARN]  TestFireReceiveEvent.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.132Z [INFO]  TestFireReceiveEvent.server.raft: entering candidate state: node="Node at 127.0.0.1:16972 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.136Z [DEBUG] TestFireReceiveEvent.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.136Z [DEBUG] TestFireReceiveEvent.server.raft: vote granted: from=16d17e00-88db-d700-2558-d367fed16c67 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestFireReceiveEvent.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestFireReceiveEvent.server.raft: entering leader state: leader="Node at 127.0.0.1:16972 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestFireReceiveEvent.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.136Z [INFO]  TestFireReceiveEvent.server: New leader elected: payload=Node-16d17e00-88db-d700-2558-d367fed16c67
>     writer.go:29: 2020-02-23T02:46:32.146Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.163Z [DEBUG] TestFireReceiveEvent.server: User event: event=deploy
>     writer.go:29: 2020-02-23T02:46:32.163Z [DEBUG] TestFireReceiveEvent.server: User event: event=deploy
>     writer.go:29: 2020-02-23T02:46:32.163Z [DEBUG] TestFireReceiveEvent: new event: event_name=deploy event_id=4a4b652c-6225-7b08-f9f1-15236b7d7bd8
>     writer.go:29: 2020-02-23T02:46:32.166Z [INFO]  TestFireReceiveEvent.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.166Z [INFO]  TestFireReceiveEvent.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.166Z [DEBUG] TestFireReceiveEvent.server: Skipping self join check for node since the cluster is too small: node=Node-16d17e00-88db-d700-2558-d367fed16c67
>     writer.go:29: 2020-02-23T02:46:32.166Z [INFO]  TestFireReceiveEvent.server: member joined, marking health alive: member=Node-16d17e00-88db-d700-2558-d367fed16c67
>     writer.go:29: 2020-02-23T02:46:32.195Z [INFO]  TestFireReceiveEvent: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.195Z [INFO]  TestFireReceiveEvent.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.195Z [DEBUG] TestFireReceiveEvent.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.195Z [WARN]  TestFireReceiveEvent.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.195Z [ERROR] TestFireReceiveEvent.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.195Z [DEBUG] TestFireReceiveEvent.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.246Z [WARN]  TestFireReceiveEvent.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.267Z [INFO]  TestFireReceiveEvent.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.267Z [INFO]  TestFireReceiveEvent: consul server down
>     writer.go:29: 2020-02-23T02:46:32.267Z [INFO]  TestFireReceiveEvent: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.268Z [INFO]  TestFireReceiveEvent: Stopping server: protocol=DNS address=127.0.0.1:16967 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.268Z [INFO]  TestFireReceiveEvent: Stopping server: protocol=DNS address=127.0.0.1:16967 network=udp
>     writer.go:29: 2020-02-23T02:46:32.268Z [INFO]  TestFireReceiveEvent: Stopping server: protocol=HTTP address=127.0.0.1:16968 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.268Z [INFO]  TestFireReceiveEvent: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.268Z [INFO]  TestFireReceiveEvent: Endpoints down
> === CONT  TestValidateUserEventParams
> --- PASS: TestValidateUserEventParams (0.00s)
> === CONT  TestUiServices
> --- PASS: TestHTTPServer_UnixSocket_FileExists (0.53s)
>     writer.go:29: 2020-02-23T02:46:31.802Z [WARN]  TestHTTPServer_UnixSocket_FileExists: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:31.803Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:31.803Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:31.869Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:436fd896-997c-2017-3271-a02307e270b4 Address:127.0.0.1:16960}]"
>     writer.go:29: 2020-02-23T02:46:31.869Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.raft: entering follower state: follower="Node at 127.0.0.1:16960 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:31.870Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.serf.wan: serf: EventMemberJoin: Node-436fd896-997c-2017-3271-a02307e270b4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.870Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.serf.lan: serf: EventMemberJoin: Node-436fd896-997c-2017-3271-a02307e270b4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.871Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Started DNS server: address=127.0.0.1:16955 network=udp
>     writer.go:29: 2020-02-23T02:46:31.871Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server: Adding LAN server: server="Node-436fd896-997c-2017-3271-a02307e270b4 (Addr: tcp/127.0.0.1:16960) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:31.871Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server: Handled event for server in area: event=member-join server=Node-436fd896-997c-2017-3271-a02307e270b4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:31.871Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Started DNS server: address=127.0.0.1:16955 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.871Z [WARN]  TestHTTPServer_UnixSocket_FileExists: Replacing socket: path=/tmp/consul-test/TestHTTPServer_UnixSocket_FileExists-consul451312337/test.sock
>     writer.go:29: 2020-02-23T02:46:31.871Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Started HTTP server: address=/tmp/consul-test/TestHTTPServer_UnixSocket_FileExists-consul451312337/test.sock network=unix
>     writer.go:29: 2020-02-23T02:46:31.871Z [INFO]  TestHTTPServer_UnixSocket_FileExists: started state syncer
>     writer.go:29: 2020-02-23T02:46:31.911Z [WARN]  TestHTTPServer_UnixSocket_FileExists.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:31.911Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.raft: entering candidate state: node="Node at 127.0.0.1:16960 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:31.928Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:31.928Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.server.raft: vote granted: from=436fd896-997c-2017-3271-a02307e270b4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:31.928Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:31.928Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.raft: entering leader state: leader="Node at 127.0.0.1:16960 [Leader]"
>     writer.go:29: 2020-02-23T02:46:31.928Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:31.928Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server: New leader elected: payload=Node-436fd896-997c-2017-3271-a02307e270b4
>     writer.go:29: 2020-02-23T02:46:31.936Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:31.947Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:31.947Z [INFO]  TestHTTPServer_UnixSocket_FileExists.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:31.947Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.server: Skipping self join check for node since the cluster is too small: node=Node-436fd896-997c-2017-3271-a02307e270b4
>     writer.go:29: 2020-02-23T02:46:31.947Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server: member joined, marking health alive: member=Node-436fd896-997c-2017-3271-a02307e270b4
>     writer.go:29: 2020-02-23T02:46:32.030Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Synced node info
>     writer.go:29: 2020-02-23T02:46:32.277Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.277Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.277Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.277Z [WARN]  TestHTTPServer_UnixSocket_FileExists.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.277Z [DEBUG] TestHTTPServer_UnixSocket_FileExists.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.318Z [WARN]  TestHTTPServer_UnixSocket_FileExists.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: consul server down
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Stopping server: protocol=DNS address=127.0.0.1:16955 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Stopping server: protocol=DNS address=127.0.0.1:16955 network=udp
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Stopping server: protocol=HTTP address=/tmp/consul-test/TestHTTPServer_UnixSocket_FileExists-consul451312337/test.sock network=unix
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestHTTPServer_UnixSocket_FileExists: Endpoints down
> === CONT  TestUiNodeInfo
> --- PASS: TestUserEventToken (0.52s)
>     writer.go:29: 2020-02-23T02:46:31.829Z [WARN]  TestUserEventToken: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:31.829Z [WARN]  TestUserEventToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:31.829Z [DEBUG] TestUserEventToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:31.829Z [DEBUG] TestUserEventToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:31.867Z [INFO]  TestUserEventToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e6286c27-f584-0626-8523-a8ce6905feea Address:127.0.0.1:16966}]"
>     writer.go:29: 2020-02-23T02:46:31.867Z [INFO]  TestUserEventToken.server.serf.wan: serf: EventMemberJoin: Node-e6286c27-f584-0626-8523-a8ce6905feea.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestUserEventToken.server.serf.lan: serf: EventMemberJoin: Node-e6286c27-f584-0626-8523-a8ce6905feea 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestUserEventToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16966 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestUserEventToken.server: Adding LAN server: server="Node-e6286c27-f584-0626-8523-a8ce6905feea (Addr: tcp/127.0.0.1:16966) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestUserEventToken.server: Handled event for server in area: event=member-join server=Node-e6286c27-f584-0626-8523-a8ce6905feea.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestUserEventToken: Started DNS server: address=127.0.0.1:16961 network=udp
>     writer.go:29: 2020-02-23T02:46:31.868Z [INFO]  TestUserEventToken: Started DNS server: address=127.0.0.1:16961 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.869Z [INFO]  TestUserEventToken: Started HTTP server: address=127.0.0.1:16962 network=tcp
>     writer.go:29: 2020-02-23T02:46:31.869Z [INFO]  TestUserEventToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:31.936Z [WARN]  TestUserEventToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:31.936Z [INFO]  TestUserEventToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16966 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:31.941Z [DEBUG] TestUserEventToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:31.941Z [DEBUG] TestUserEventToken.server.raft: vote granted: from=e6286c27-f584-0626-8523-a8ce6905feea term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:31.941Z [INFO]  TestUserEventToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:31.941Z [INFO]  TestUserEventToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16966 [Leader]"
>     writer.go:29: 2020-02-23T02:46:31.941Z [INFO]  TestUserEventToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:31.941Z [INFO]  TestUserEventToken.server: New leader elected: payload=Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:31.945Z [INFO]  TestUserEventToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:31.946Z [INFO]  TestUserEventToken.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:31.946Z [WARN]  TestUserEventToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:31.975Z [INFO]  TestUserEventToken.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:32.022Z [INFO]  TestUserEventToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:32.022Z [WARN]  TestUserEventToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:32.032Z [DEBUG] TestUserEventToken.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken: Synced node info
>     writer.go:29: 2020-02-23T02:46:32.032Z [DEBUG] TestUserEventToken: Node info in sync
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.server.serf.lan: serf: EventMemberUpdate: Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.server.serf.wan: serf: EventMemberUpdate: Node-e6286c27-f584-0626-8523-a8ce6905feea.dc1
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.server: Handled event for server in area: event=member-update server=Node-e6286c27-f584-0626-8523-a8ce6905feea.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.032Z [INFO]  TestUserEventToken.server.serf.lan: serf: EventMemberUpdate: Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.033Z [INFO]  TestUserEventToken.server.serf.wan: serf: EventMemberUpdate: Node-e6286c27-f584-0626-8523-a8ce6905feea.dc1
>     writer.go:29: 2020-02-23T02:46:32.033Z [INFO]  TestUserEventToken.server: Handled event for server in area: event=member-update server=Node-e6286c27-f584-0626-8523-a8ce6905feea.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.041Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.054Z [INFO]  TestUserEventToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.054Z [INFO]  TestUserEventToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.054Z [DEBUG] TestUserEventToken.server: Skipping self join check for node since the cluster is too small: node=Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.054Z [INFO]  TestUserEventToken.server: member joined, marking health alive: member=Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.057Z [DEBUG] TestUserEventToken.server: Skipping self join check for node since the cluster is too small: node=Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.057Z [DEBUG] TestUserEventToken.server: Skipping self join check for node since the cluster is too small: node=Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.295Z [DEBUG] TestUserEventToken.acl: dropping node from result due to ACLs: node=Node-e6286c27-f584-0626-8523-a8ce6905feea
>     writer.go:29: 2020-02-23T02:46:32.326Z [WARN]  TestUserEventToken.server.internal: user event blocked by ACLs: event=foo accessorID=
>     writer.go:29: 2020-02-23T02:46:32.326Z [WARN]  TestUserEventToken.server.internal: user event blocked by ACLs: event=bar accessorID=
>     writer.go:29: 2020-02-23T02:46:32.326Z [WARN]  TestUserEventToken.server.internal: user event blocked by ACLs: event=zip accessorID=
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestUserEventToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.326Z [INFO]  TestUserEventToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.326Z [DEBUG] TestUserEventToken.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:32.326Z [DEBUG] TestUserEventToken.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:32.326Z [DEBUG] TestUserEventToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.326Z [WARN]  TestUserEventToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.326Z [DEBUG] TestUserEventToken.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:32.326Z [DEBUG] TestUserEventToken.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:32.326Z [DEBUG] TestUserEventToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.328Z [WARN]  TestUserEventToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.336Z [INFO]  TestUserEventToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: consul server down
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: Stopping server: protocol=DNS address=127.0.0.1:16961 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: Stopping server: protocol=DNS address=127.0.0.1:16961 network=udp
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: Stopping server: protocol=HTTP address=127.0.0.1:16962 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.337Z [INFO]  TestUserEventToken: Endpoints down
> === CONT  TestUiNodes_Filter
> --- PASS: TestUiNodeInfo (0.16s)
>     writer.go:29: 2020-02-23T02:46:32.363Z [WARN]  TestUiNodeInfo: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.363Z [DEBUG] TestUiNodeInfo.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.364Z [DEBUG] TestUiNodeInfo.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.385Z [INFO]  TestUiNodeInfo.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fda657f4-1b8b-255f-738f-42510bc33c0e Address:127.0.0.1:16996}]"
>     writer.go:29: 2020-02-23T02:46:32.386Z [INFO]  TestUiNodeInfo.server.serf.wan: serf: EventMemberJoin: Node-fda657f4-1b8b-255f-738f-42510bc33c0e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.386Z [INFO]  TestUiNodeInfo.server.serf.lan: serf: EventMemberJoin: Node-fda657f4-1b8b-255f-738f-42510bc33c0e 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.386Z [INFO]  TestUiNodeInfo: Started DNS server: address=127.0.0.1:16991 network=udp
>     writer.go:29: 2020-02-23T02:46:32.386Z [INFO]  TestUiNodeInfo.server.raft: entering follower state: follower="Node at 127.0.0.1:16996 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.387Z [INFO]  TestUiNodeInfo.server: Adding LAN server: server="Node-fda657f4-1b8b-255f-738f-42510bc33c0e (Addr: tcp/127.0.0.1:16996) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.387Z [INFO]  TestUiNodeInfo.server: Handled event for server in area: event=member-join server=Node-fda657f4-1b8b-255f-738f-42510bc33c0e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.387Z [INFO]  TestUiNodeInfo: Started DNS server: address=127.0.0.1:16991 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.387Z [INFO]  TestUiNodeInfo: Started HTTP server: address=127.0.0.1:16992 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.387Z [INFO]  TestUiNodeInfo: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.454Z [WARN]  TestUiNodeInfo.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.454Z [INFO]  TestUiNodeInfo.server.raft: entering candidate state: node="Node at 127.0.0.1:16996 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.457Z [DEBUG] TestUiNodeInfo.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.457Z [DEBUG] TestUiNodeInfo.server.raft: vote granted: from=fda657f4-1b8b-255f-738f-42510bc33c0e term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.457Z [INFO]  TestUiNodeInfo.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.457Z [INFO]  TestUiNodeInfo.server.raft: entering leader state: leader="Node at 127.0.0.1:16996 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.457Z [INFO]  TestUiNodeInfo.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.457Z [INFO]  TestUiNodeInfo.server: New leader elected: payload=Node-fda657f4-1b8b-255f-738f-42510bc33c0e
>     writer.go:29: 2020-02-23T02:46:32.464Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.471Z [INFO]  TestUiNodeInfo.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.471Z [INFO]  TestUiNodeInfo.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.471Z [DEBUG] TestUiNodeInfo.server: Skipping self join check for node since the cluster is too small: node=Node-fda657f4-1b8b-255f-738f-42510bc33c0e
>     writer.go:29: 2020-02-23T02:46:32.471Z [INFO]  TestUiNodeInfo.server: member joined, marking health alive: member=Node-fda657f4-1b8b-255f-738f-42510bc33c0e
>     writer.go:29: 2020-02-23T02:46:32.477Z [INFO]  TestUiNodeInfo: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.477Z [INFO]  TestUiNodeInfo.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.478Z [DEBUG] TestUiNodeInfo.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.478Z [WARN]  TestUiNodeInfo.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.478Z [ERROR] TestUiNodeInfo.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.478Z [DEBUG] TestUiNodeInfo.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.479Z [WARN]  TestUiNodeInfo.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: consul server down
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: Stopping server: protocol=DNS address=127.0.0.1:16991 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: Stopping server: protocol=DNS address=127.0.0.1:16991 network=udp
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: Stopping server: protocol=HTTP address=127.0.0.1:16992 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.481Z [INFO]  TestUiNodeInfo: Endpoints down
> === CONT  TestUiNodes
> --- PASS: TestShouldProcessUserEvent (0.45s)
>     writer.go:29: 2020-02-23T02:46:32.157Z [WARN]  TestShouldProcessUserEvent: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.158Z [DEBUG] TestShouldProcessUserEvent.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.158Z [DEBUG] TestShouldProcessUserEvent.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.328Z [INFO]  TestShouldProcessUserEvent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fcb32e3e-7226-f173-996d-74e516b5bdb4 Address:127.0.0.1:16984}]"
>     writer.go:29: 2020-02-23T02:46:32.329Z [INFO]  TestShouldProcessUserEvent.server.serf.wan: serf: EventMemberJoin: Node-fcb32e3e-7226-f173-996d-74e516b5bdb4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.329Z [INFO]  TestShouldProcessUserEvent.server.raft: entering follower state: follower="Node at 127.0.0.1:16984 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.354Z [INFO]  TestShouldProcessUserEvent.server.serf.lan: serf: EventMemberJoin: Node-fcb32e3e-7226-f173-996d-74e516b5bdb4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.354Z [INFO]  TestShouldProcessUserEvent: Started DNS server: address=127.0.0.1:16979 network=udp
>     writer.go:29: 2020-02-23T02:46:32.354Z [INFO]  TestShouldProcessUserEvent.server: Adding LAN server: server="Node-fcb32e3e-7226-f173-996d-74e516b5bdb4 (Addr: tcp/127.0.0.1:16984) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.354Z [INFO]  TestShouldProcessUserEvent.server: Handled event for server in area: event=member-join server=Node-fcb32e3e-7226-f173-996d-74e516b5bdb4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.354Z [INFO]  TestShouldProcessUserEvent: Started DNS server: address=127.0.0.1:16979 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.355Z [INFO]  TestShouldProcessUserEvent: Started HTTP server: address=127.0.0.1:16980 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.355Z [INFO]  TestShouldProcessUserEvent: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.398Z [WARN]  TestShouldProcessUserEvent.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.398Z [INFO]  TestShouldProcessUserEvent.server.raft: entering candidate state: node="Node at 127.0.0.1:16984 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.402Z [DEBUG] TestShouldProcessUserEvent.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.402Z [DEBUG] TestShouldProcessUserEvent.server.raft: vote granted: from=fcb32e3e-7226-f173-996d-74e516b5bdb4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.402Z [INFO]  TestShouldProcessUserEvent.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.402Z [INFO]  TestShouldProcessUserEvent.server.raft: entering leader state: leader="Node at 127.0.0.1:16984 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.402Z [INFO]  TestShouldProcessUserEvent.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.402Z [INFO]  TestShouldProcessUserEvent.server: New leader elected: payload=Node-fcb32e3e-7226-f173-996d-74e516b5bdb4
>     writer.go:29: 2020-02-23T02:46:32.410Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.420Z [INFO]  TestShouldProcessUserEvent.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.420Z [INFO]  TestShouldProcessUserEvent.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.420Z [DEBUG] TestShouldProcessUserEvent.server: Skipping self join check for node since the cluster is too small: node=Node-fcb32e3e-7226-f173-996d-74e516b5bdb4
>     writer.go:29: 2020-02-23T02:46:32.420Z [INFO]  TestShouldProcessUserEvent.server: member joined, marking health alive: member=Node-fcb32e3e-7226-f173-996d-74e516b5bdb4
>     writer.go:29: 2020-02-23T02:46:32.495Z [DEBUG] TestShouldProcessUserEvent: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:32.498Z [INFO]  TestShouldProcessUserEvent: Synced node info
>     writer.go:29: 2020-02-23T02:46:32.593Z [INFO]  TestShouldProcessUserEvent: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.594Z [INFO]  TestShouldProcessUserEvent.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.594Z [DEBUG] TestShouldProcessUserEvent.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.594Z [WARN]  TestShouldProcessUserEvent.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.594Z [DEBUG] TestShouldProcessUserEvent.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.595Z [WARN]  TestShouldProcessUserEvent.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: consul server down
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: Stopping server: protocol=DNS address=127.0.0.1:16979 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: Stopping server: protocol=DNS address=127.0.0.1:16979 network=udp
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: Stopping server: protocol=HTTP address=127.0.0.1:16980 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.597Z [INFO]  TestShouldProcessUserEvent: Endpoints down
> === CONT  TestUiIndex
> --- PASS: TestUiNodes_Filter (0.30s)
>     writer.go:29: 2020-02-23T02:46:32.367Z [WARN]  TestUiNodes_Filter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.367Z [DEBUG] TestUiNodes_Filter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.367Z [DEBUG] TestUiNodes_Filter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.391Z [INFO]  TestUiNodes_Filter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3918d9d1-f642-194f-1a1f-a591ad77b8b4 Address:127.0.0.1:17002}]"
>     writer.go:29: 2020-02-23T02:46:32.391Z [INFO]  TestUiNodes_Filter.server.raft: entering follower state: follower="Node at 127.0.0.1:17002 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.392Z [INFO]  TestUiNodes_Filter.server.serf.wan: serf: EventMemberJoin: Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.392Z [INFO]  TestUiNodes_Filter.server.serf.lan: serf: EventMemberJoin: Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.392Z [INFO]  TestUiNodes_Filter.server: Handled event for server in area: event=member-join server=Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.392Z [INFO]  TestUiNodes_Filter.server: Adding LAN server: server="Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4 (Addr: tcp/127.0.0.1:17002) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.393Z [INFO]  TestUiNodes_Filter: Started DNS server: address=127.0.0.1:16997 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.393Z [INFO]  TestUiNodes_Filter: Started DNS server: address=127.0.0.1:16997 network=udp
>     writer.go:29: 2020-02-23T02:46:32.393Z [INFO]  TestUiNodes_Filter: Started HTTP server: address=127.0.0.1:16998 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.393Z [INFO]  TestUiNodes_Filter: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.426Z [WARN]  TestUiNodes_Filter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.426Z [INFO]  TestUiNodes_Filter.server.raft: entering candidate state: node="Node at 127.0.0.1:17002 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.431Z [DEBUG] TestUiNodes_Filter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.431Z [DEBUG] TestUiNodes_Filter.server.raft: vote granted: from=3918d9d1-f642-194f-1a1f-a591ad77b8b4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.431Z [INFO]  TestUiNodes_Filter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.431Z [INFO]  TestUiNodes_Filter.server.raft: entering leader state: leader="Node at 127.0.0.1:17002 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.431Z [INFO]  TestUiNodes_Filter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.431Z [INFO]  TestUiNodes_Filter.server: New leader elected: payload=Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4
>     writer.go:29: 2020-02-23T02:46:32.438Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.445Z [INFO]  TestUiNodes_Filter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.445Z [INFO]  TestUiNodes_Filter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.445Z [DEBUG] TestUiNodes_Filter.server: Skipping self join check for node since the cluster is too small: node=Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4
>     writer.go:29: 2020-02-23T02:46:32.445Z [INFO]  TestUiNodes_Filter.server: member joined, marking health alive: member=Node-3918d9d1-f642-194f-1a1f-a591ad77b8b4
>     writer.go:29: 2020-02-23T02:46:32.632Z [INFO]  TestUiNodes_Filter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.632Z [INFO]  TestUiNodes_Filter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.632Z [DEBUG] TestUiNodes_Filter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.632Z [WARN]  TestUiNodes_Filter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.632Z [ERROR] TestUiNodes_Filter.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.632Z [DEBUG] TestUiNodes_Filter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.633Z [WARN]  TestUiNodes_Filter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: consul server down
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: Stopping server: protocol=DNS address=127.0.0.1:16997 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: Stopping server: protocol=DNS address=127.0.0.1:16997 network=udp
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: Stopping server: protocol=HTTP address=127.0.0.1:16998 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.635Z [INFO]  TestUiNodes_Filter: Endpoints down
> === CONT  TestTxnEndpoint_UpdateCheck
> === RUN   TestUiServices/No_Filter
> === PAUSE TestUiServices/No_Filter
> === RUN   TestUiServices/Filtered
> === CONT  TestTxnEndpoint_KV_Actions
> === RUN   TestTxnEndpoint_KV_Actions/#00
> --- PASS: TestUiNodes (0.37s)
>     writer.go:29: 2020-02-23T02:46:32.489Z [WARN]  TestUiNodes: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.489Z [DEBUG] TestUiNodes.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.489Z [DEBUG] TestUiNodes.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.499Z [INFO]  TestUiNodes.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:51b50ac8-8cde-d22f-4362-db32aa07b599 Address:127.0.0.1:17008}]"
>     writer.go:29: 2020-02-23T02:46:32.499Z [INFO]  TestUiNodes.server.raft: entering follower state: follower="Node at 127.0.0.1:17008 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.504Z [INFO]  TestUiNodes.server.serf.wan: serf: EventMemberJoin: Node-51b50ac8-8cde-d22f-4362-db32aa07b599.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.505Z [INFO]  TestUiNodes.server.serf.lan: serf: EventMemberJoin: Node-51b50ac8-8cde-d22f-4362-db32aa07b599 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.506Z [INFO]  TestUiNodes.server: Handled event for server in area: event=member-join server=Node-51b50ac8-8cde-d22f-4362-db32aa07b599.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.507Z [INFO]  TestUiNodes.server: Adding LAN server: server="Node-51b50ac8-8cde-d22f-4362-db32aa07b599 (Addr: tcp/127.0.0.1:17008) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.507Z [INFO]  TestUiNodes: Started DNS server: address=127.0.0.1:17003 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.507Z [INFO]  TestUiNodes: Started DNS server: address=127.0.0.1:17003 network=udp
>     writer.go:29: 2020-02-23T02:46:32.507Z [INFO]  TestUiNodes: Started HTTP server: address=127.0.0.1:17004 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.507Z [INFO]  TestUiNodes: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.558Z [WARN]  TestUiNodes.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.558Z [INFO]  TestUiNodes.server.raft: entering candidate state: node="Node at 127.0.0.1:17008 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.562Z [DEBUG] TestUiNodes.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.562Z [DEBUG] TestUiNodes.server.raft: vote granted: from=51b50ac8-8cde-d22f-4362-db32aa07b599 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.562Z [INFO]  TestUiNodes.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.562Z [INFO]  TestUiNodes.server.raft: entering leader state: leader="Node at 127.0.0.1:17008 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.562Z [INFO]  TestUiNodes.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.562Z [INFO]  TestUiNodes.server: New leader elected: payload=Node-51b50ac8-8cde-d22f-4362-db32aa07b599
>     writer.go:29: 2020-02-23T02:46:32.569Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.578Z [INFO]  TestUiNodes.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.578Z [INFO]  TestUiNodes.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.578Z [DEBUG] TestUiNodes.server: Skipping self join check for node since the cluster is too small: node=Node-51b50ac8-8cde-d22f-4362-db32aa07b599
>     writer.go:29: 2020-02-23T02:46:32.578Z [INFO]  TestUiNodes.server: member joined, marking health alive: member=Node-51b50ac8-8cde-d22f-4362-db32aa07b599
>     writer.go:29: 2020-02-23T02:46:32.843Z [INFO]  TestUiNodes: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.843Z [INFO]  TestUiNodes.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.843Z [DEBUG] TestUiNodes.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.843Z [WARN]  TestUiNodes.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.843Z [DEBUG] TestUiNodes.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.843Z [ERROR] TestUiNodes.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:32.844Z [WARN]  TestUiNodes.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: consul server down
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: Stopping server: protocol=DNS address=127.0.0.1:17003 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: Stopping server: protocol=DNS address=127.0.0.1:17003 network=udp
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: Stopping server: protocol=HTTP address=127.0.0.1:17004 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.846Z [INFO]  TestUiNodes: Endpoints down
> === CONT  TestTxnEndpoint_Bad_Size_Ops
> --- PASS: TestTxnEndpoint_UpdateCheck (0.23s)
>     writer.go:29: 2020-02-23T02:46:32.643Z [WARN]  TestTxnEndpoint_UpdateCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.643Z [DEBUG] TestTxnEndpoint_UpdateCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.644Z [DEBUG] TestTxnEndpoint_UpdateCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.656Z [INFO]  TestTxnEndpoint_UpdateCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:35be27ee-3e97-8253-9a2d-e1c3c48ec184 Address:127.0.0.1:17020}]"
>     writer.go:29: 2020-02-23T02:46:32.656Z [INFO]  TestTxnEndpoint_UpdateCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:17020 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.657Z [INFO]  TestTxnEndpoint_UpdateCheck.server.serf.wan: serf: EventMemberJoin: Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.658Z [INFO]  TestTxnEndpoint_UpdateCheck.server.serf.lan: serf: EventMemberJoin: Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.658Z [INFO]  TestTxnEndpoint_UpdateCheck.server: Handled event for server in area: event=member-join server=Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.658Z [INFO]  TestTxnEndpoint_UpdateCheck.server: Adding LAN server: server="Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184 (Addr: tcp/127.0.0.1:17020) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.663Z [INFO]  TestTxnEndpoint_UpdateCheck: Started DNS server: address=127.0.0.1:17015 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.663Z [INFO]  TestTxnEndpoint_UpdateCheck: Started DNS server: address=127.0.0.1:17015 network=udp
>     writer.go:29: 2020-02-23T02:46:32.664Z [INFO]  TestTxnEndpoint_UpdateCheck: Started HTTP server: address=127.0.0.1:17016 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.664Z [INFO]  TestTxnEndpoint_UpdateCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.723Z [WARN]  TestTxnEndpoint_UpdateCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.723Z [INFO]  TestTxnEndpoint_UpdateCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:17020 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.727Z [DEBUG] TestTxnEndpoint_UpdateCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.727Z [DEBUG] TestTxnEndpoint_UpdateCheck.server.raft: vote granted: from=35be27ee-3e97-8253-9a2d-e1c3c48ec184 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.727Z [INFO]  TestTxnEndpoint_UpdateCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.727Z [INFO]  TestTxnEndpoint_UpdateCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:17020 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.727Z [INFO]  TestTxnEndpoint_UpdateCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.727Z [INFO]  TestTxnEndpoint_UpdateCheck.server: New leader elected: payload=Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184
>     writer.go:29: 2020-02-23T02:46:32.734Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.743Z [INFO]  TestTxnEndpoint_UpdateCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.743Z [INFO]  TestTxnEndpoint_UpdateCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.743Z [DEBUG] TestTxnEndpoint_UpdateCheck.server: Skipping self join check for node since the cluster is too small: node=Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184
>     writer.go:29: 2020-02-23T02:46:32.743Z [INFO]  TestTxnEndpoint_UpdateCheck.server: member joined, marking health alive: member=Node-35be27ee-3e97-8253-9a2d-e1c3c48ec184
>     writer.go:29: 2020-02-23T02:46:32.770Z [DEBUG] TestTxnEndpoint_UpdateCheck: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:32.772Z [INFO]  TestTxnEndpoint_UpdateCheck: Synced node info
>     writer.go:29: 2020-02-23T02:46:32.772Z [DEBUG] TestTxnEndpoint_UpdateCheck: Node info in sync
>     writer.go:29: 2020-02-23T02:46:32.858Z [INFO]  TestTxnEndpoint_UpdateCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.858Z [INFO]  TestTxnEndpoint_UpdateCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.858Z [DEBUG] TestTxnEndpoint_UpdateCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.859Z [WARN]  TestTxnEndpoint_UpdateCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.859Z [DEBUG] TestTxnEndpoint_UpdateCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.860Z [WARN]  TestTxnEndpoint_UpdateCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: Stopping server: protocol=DNS address=127.0.0.1:17015 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: Stopping server: protocol=DNS address=127.0.0.1:17015 network=udp
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: Stopping server: protocol=HTTP address=127.0.0.1:17016 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.863Z [INFO]  TestTxnEndpoint_UpdateCheck: Endpoints down
> === CONT  TestTxnEndpoint_Bad_Size_Net
> === RUN   TestTxnEndpoint_Bad_Size_Net/toobig
> --- PASS: TestUiIndex (0.37s)
>     writer.go:29: 2020-02-23T02:46:32.606Z [WARN]  TestUiIndex: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.606Z [DEBUG] TestUiIndex.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.606Z [DEBUG] TestUiIndex.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.619Z [INFO]  TestUiIndex.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2f051e07-1d45-a79a-e5eb-d8163235851a Address:127.0.0.1:17014}]"
>     writer.go:29: 2020-02-23T02:46:32.619Z [INFO]  TestUiIndex.server.raft: entering follower state: follower="Node at 127.0.0.1:17014 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.620Z [INFO]  TestUiIndex.server.serf.wan: serf: EventMemberJoin: Node-2f051e07-1d45-a79a-e5eb-d8163235851a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.620Z [INFO]  TestUiIndex.server.serf.lan: serf: EventMemberJoin: Node-2f051e07-1d45-a79a-e5eb-d8163235851a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.620Z [INFO]  TestUiIndex.server: Handled event for server in area: event=member-join server=Node-2f051e07-1d45-a79a-e5eb-d8163235851a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.620Z [INFO]  TestUiIndex.server: Adding LAN server: server="Node-2f051e07-1d45-a79a-e5eb-d8163235851a (Addr: tcp/127.0.0.1:17014) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.621Z [INFO]  TestUiIndex: Started DNS server: address=127.0.0.1:17009 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.621Z [INFO]  TestUiIndex: Started DNS server: address=127.0.0.1:17009 network=udp
>     writer.go:29: 2020-02-23T02:46:32.621Z [INFO]  TestUiIndex: Started HTTP server: address=127.0.0.1:17010 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.621Z [INFO]  TestUiIndex: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.675Z [WARN]  TestUiIndex.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.675Z [INFO]  TestUiIndex.server.raft: entering candidate state: node="Node at 127.0.0.1:17014 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.683Z [DEBUG] TestUiIndex.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.683Z [DEBUG] TestUiIndex.server.raft: vote granted: from=2f051e07-1d45-a79a-e5eb-d8163235851a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.683Z [INFO]  TestUiIndex.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.683Z [INFO]  TestUiIndex.server.raft: entering leader state: leader="Node at 127.0.0.1:17014 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.683Z [INFO]  TestUiIndex.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.683Z [INFO]  TestUiIndex.server: New leader elected: payload=Node-2f051e07-1d45-a79a-e5eb-d8163235851a
>     writer.go:29: 2020-02-23T02:46:32.690Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.698Z [INFO]  TestUiIndex.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.698Z [INFO]  TestUiIndex.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.698Z [DEBUG] TestUiIndex.server: Skipping self join check for node since the cluster is too small: node=Node-2f051e07-1d45-a79a-e5eb-d8163235851a
>     writer.go:29: 2020-02-23T02:46:32.698Z [INFO]  TestUiIndex.server: member joined, marking health alive: member=Node-2f051e07-1d45-a79a-e5eb-d8163235851a
>     writer.go:29: 2020-02-23T02:46:32.735Z [DEBUG] TestUiIndex: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:32.737Z [INFO]  TestUiIndex: Synced node info
>     writer.go:29: 2020-02-23T02:46:32.964Z [INFO]  TestUiIndex: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:32.964Z [INFO]  TestUiIndex.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:32.964Z [DEBUG] TestUiIndex.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.964Z [WARN]  TestUiIndex.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.964Z [DEBUG] TestUiIndex.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.966Z [WARN]  TestUiIndex.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: consul server down
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: shutdown complete
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: Stopping server: protocol=DNS address=127.0.0.1:17009 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: Stopping server: protocol=DNS address=127.0.0.1:17009 network=udp
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: Stopping server: protocol=HTTP address=127.0.0.1:17010 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:32.968Z [INFO]  TestUiIndex: Endpoints down
> === CONT  TestTxnEndpoint_Bad_Size_Item
> === RUN   TestTxnEndpoint_Bad_Size_Item/toobig
> === RUN   TestTxnEndpoint_KV_Actions/#01
> --- PASS: TestTxnEndpoint_Bad_Size_Ops (0.28s)
>     writer.go:29: 2020-02-23T02:46:32.853Z [WARN]  TestTxnEndpoint_Bad_Size_Ops: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:32.854Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:32.854Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:32.864Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5865bce0-8a26-ec87-619b-a59c708fff27 Address:127.0.0.1:17032}]"
>     writer.go:29: 2020-02-23T02:46:32.864Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.serf.wan: serf: EventMemberJoin: Node-5865bce0-8a26-ec87-619b-a59c708fff27.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.865Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.serf.lan: serf: EventMemberJoin: Node-5865bce0-8a26-ec87-619b-a59c708fff27 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:32.865Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Started DNS server: address=127.0.0.1:17027 network=udp
>     writer.go:29: 2020-02-23T02:46:32.865Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.raft: entering follower state: follower="Node at 127.0.0.1:17032 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:32.865Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server: Adding LAN server: server="Node-5865bce0-8a26-ec87-619b-a59c708fff27 (Addr: tcp/127.0.0.1:17032) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:32.865Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server: Handled event for server in area: event=member-join server=Node-5865bce0-8a26-ec87-619b-a59c708fff27.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:32.865Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Started DNS server: address=127.0.0.1:17027 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.866Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Started HTTP server: address=127.0.0.1:17028 network=tcp
>     writer.go:29: 2020-02-23T02:46:32.866Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: started state syncer
>     writer.go:29: 2020-02-23T02:46:32.917Z [WARN]  TestTxnEndpoint_Bad_Size_Ops.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:32.917Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.raft: entering candidate state: node="Node at 127.0.0.1:17032 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:32.921Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:32.921Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.server.raft: vote granted: from=5865bce0-8a26-ec87-619b-a59c708fff27 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:32.921Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:32.921Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.raft: entering leader state: leader="Node at 127.0.0.1:17032 [Leader]"
>     writer.go:29: 2020-02-23T02:46:32.921Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:32.921Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server: New leader elected: payload=Node-5865bce0-8a26-ec87-619b-a59c708fff27
>     writer.go:29: 2020-02-23T02:46:32.930Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:32.944Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:32.944Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:32.944Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.server: Skipping self join check for node since the cluster is too small: node=Node-5865bce0-8a26-ec87-619b-a59c708fff27
>     writer.go:29: 2020-02-23T02:46:32.944Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server: member joined, marking health alive: member=Node-5865bce0-8a26-ec87-619b-a59c708fff27
>     writer.go:29: 2020-02-23T02:46:32.962Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:32.965Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Synced node info
>     writer.go:29: 2020-02-23T02:46:32.965Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops: Node info in sync
>     writer.go:29: 2020-02-23T02:46:33.126Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:33.126Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:33.126Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.126Z [WARN]  TestTxnEndpoint_Bad_Size_Ops.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.126Z [DEBUG] TestTxnEndpoint_Bad_Size_Ops.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.129Z [WARN]  TestTxnEndpoint_Bad_Size_Ops.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.130Z [INFO]  TestTxnEndpoint_Bad_Size_Ops.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.130Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: consul server down
>     writer.go:29: 2020-02-23T02:46:33.130Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: shutdown complete
>     writer.go:29: 2020-02-23T02:46:33.131Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Stopping server: protocol=DNS address=127.0.0.1:17027 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.131Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Stopping server: protocol=DNS address=127.0.0.1:17027 network=udp
>     writer.go:29: 2020-02-23T02:46:33.131Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Stopping server: protocol=HTTP address=127.0.0.1:17028 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.131Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:33.131Z [INFO]  TestTxnEndpoint_Bad_Size_Ops: Endpoints down
> === CONT  TestTxnEndpoint_Bad_JSON
> --- PASS: TestTxnEndpoint_KV_Actions (0.64s)
>     --- PASS: TestTxnEndpoint_KV_Actions/#00 (0.29s)
>         writer.go:29: 2020-02-23T02:46:32.685Z [WARN]  TestTxnEndpoint_KV_Actions/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:32.685Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:32.686Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:32.697Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:885fda0c-ef45-76a1-eb77-e4dfad5a9cba Address:127.0.0.1:17026}]"
>         writer.go:29: 2020-02-23T02:46:32.697Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:17026 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:32.698Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.serf.wan: serf: EventMemberJoin: Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.699Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.serf.lan: serf: EventMemberJoin: Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.699Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server: Handled event for server in area: event=member-join server=Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:32.699Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server: Adding LAN server: server="Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba (Addr: tcp/127.0.0.1:17026) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:32.699Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Started DNS server: address=127.0.0.1:17021 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.699Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Started DNS server: address=127.0.0.1:17021 network=udp
>         writer.go:29: 2020-02-23T02:46:32.700Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Started HTTP server: address=127.0.0.1:17022 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.700Z [INFO]  TestTxnEndpoint_KV_Actions/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:32.752Z [WARN]  TestTxnEndpoint_KV_Actions/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:32.752Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:17026 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:32.755Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:32.755Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.server.raft: vote granted: from=885fda0c-ef45-76a1-eb77-e4dfad5a9cba term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:32.755Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:32.755Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:17026 [Leader]"
>         writer.go:29: 2020-02-23T02:46:32.755Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:32.755Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server: New leader elected: payload=Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba
>         writer.go:29: 2020-02-23T02:46:32.763Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:32.777Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:32.777Z [INFO]  TestTxnEndpoint_KV_Actions/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:32.777Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.server: Skipping self join check for node since the cluster is too small: node=Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba
>         writer.go:29: 2020-02-23T02:46:32.778Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server: member joined, marking health alive: member=Node-885fda0c-ef45-76a1-eb77-e4dfad5a9cba
>         writer.go:29: 2020-02-23T02:46:32.850Z [DEBUG] TestTxnEndpoint_KV_Actions/#00: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:32.853Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Synced node info
>         writer.go:29: 2020-02-23T02:46:32.967Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:32.967Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:32.967Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:32.967Z [WARN]  TestTxnEndpoint_KV_Actions/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:32.967Z [DEBUG] TestTxnEndpoint_KV_Actions/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:32.969Z [WARN]  TestTxnEndpoint_KV_Actions/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Stopping server: protocol=DNS address=127.0.0.1:17021 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Stopping server: protocol=DNS address=127.0.0.1:17021 network=udp
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Stopping server: protocol=HTTP address=127.0.0.1:17022 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:32.971Z [INFO]  TestTxnEndpoint_KV_Actions/#00: Endpoints down
>     --- PASS: TestTxnEndpoint_KV_Actions/#01 (0.35s)
>         writer.go:29: 2020-02-23T02:46:32.980Z [WARN]  TestTxnEndpoint_KV_Actions/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:32.981Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:32.981Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:32.990Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ffad5cde-aff2-f4c4-52eb-711cc28a6750 Address:127.0.0.1:17050}]"
>         writer.go:29: 2020-02-23T02:46:32.990Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:17050 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:32.991Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.serf.wan: serf: EventMemberJoin: Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.991Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.serf.lan: serf: EventMemberJoin: Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.991Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Started DNS server: address=127.0.0.1:17045 network=udp
>         writer.go:29: 2020-02-23T02:46:32.991Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server: Adding LAN server: server="Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750 (Addr: tcp/127.0.0.1:17050) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:32.992Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server: Handled event for server in area: event=member-join server=Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:32.992Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Started DNS server: address=127.0.0.1:17045 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.992Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Started HTTP server: address=127.0.0.1:17046 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.992Z [INFO]  TestTxnEndpoint_KV_Actions/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:33.055Z [WARN]  TestTxnEndpoint_KV_Actions/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:33.055Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:17050 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:33.061Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:33.061Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.server.raft: vote granted: from=ffad5cde-aff2-f4c4-52eb-711cc28a6750 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:33.061Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:33.061Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:17050 [Leader]"
>         writer.go:29: 2020-02-23T02:46:33.061Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:33.061Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server: New leader elected: payload=Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750
>         writer.go:29: 2020-02-23T02:46:33.068Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:33.076Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:33.076Z [INFO]  TestTxnEndpoint_KV_Actions/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.076Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.server: Skipping self join check for node since the cluster is too small: node=Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750
>         writer.go:29: 2020-02-23T02:46:33.076Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server: member joined, marking health alive: member=Node-ffad5cde-aff2-f4c4-52eb-711cc28a6750
>         writer.go:29: 2020-02-23T02:46:33.298Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:33.298Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:33.298Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.298Z [WARN]  TestTxnEndpoint_KV_Actions/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:33.298Z [ERROR] TestTxnEndpoint_KV_Actions/#01.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:33.299Z [DEBUG] TestTxnEndpoint_KV_Actions/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.300Z [WARN]  TestTxnEndpoint_KV_Actions/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:33.320Z [INFO]  TestTxnEndpoint_KV_Actions/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Stopping server: protocol=DNS address=127.0.0.1:17045 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Stopping server: protocol=DNS address=127.0.0.1:17045 network=udp
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Stopping server: protocol=HTTP address=127.0.0.1:17046 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:33.321Z [INFO]  TestTxnEndpoint_KV_Actions/#01: Endpoints down
> === CONT  TestStatusPeersSecondary
> === RUN   TestTxnEndpoint_Bad_Size_Item/allowed
> --- PASS: TestTxnEndpoint_Bad_JSON (0.30s)
>     writer.go:29: 2020-02-23T02:46:33.139Z [WARN]  TestTxnEndpoint_Bad_JSON: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:33.139Z [DEBUG] TestTxnEndpoint_Bad_JSON.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:33.139Z [DEBUG] TestTxnEndpoint_Bad_JSON.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.152Z [INFO]  TestTxnEndpoint_Bad_JSON.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d Address:127.0.0.1:17056}]"
>     writer.go:29: 2020-02-23T02:46:33.152Z [INFO]  TestTxnEndpoint_Bad_JSON.server.raft: entering follower state: follower="Node at 127.0.0.1:17056 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:33.153Z [INFO]  TestTxnEndpoint_Bad_JSON.server.serf.wan: serf: EventMemberJoin: Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.154Z [INFO]  TestTxnEndpoint_Bad_JSON.server.serf.lan: serf: EventMemberJoin: Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.154Z [INFO]  TestTxnEndpoint_Bad_JSON.server: Handled event for server in area: event=member-join server=Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:33.154Z [INFO]  TestTxnEndpoint_Bad_JSON.server: Adding LAN server: server="Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d (Addr: tcp/127.0.0.1:17056) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:33.154Z [INFO]  TestTxnEndpoint_Bad_JSON: Started DNS server: address=127.0.0.1:17051 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.154Z [INFO]  TestTxnEndpoint_Bad_JSON: Started DNS server: address=127.0.0.1:17051 network=udp
>     writer.go:29: 2020-02-23T02:46:33.155Z [INFO]  TestTxnEndpoint_Bad_JSON: Started HTTP server: address=127.0.0.1:17052 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.155Z [INFO]  TestTxnEndpoint_Bad_JSON: started state syncer
>     writer.go:29: 2020-02-23T02:46:33.215Z [WARN]  TestTxnEndpoint_Bad_JSON.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:33.215Z [INFO]  TestTxnEndpoint_Bad_JSON.server.raft: entering candidate state: node="Node at 127.0.0.1:17056 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:33.218Z [DEBUG] TestTxnEndpoint_Bad_JSON.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:33.218Z [DEBUG] TestTxnEndpoint_Bad_JSON.server.raft: vote granted: from=46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:33.218Z [INFO]  TestTxnEndpoint_Bad_JSON.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:33.218Z [INFO]  TestTxnEndpoint_Bad_JSON.server.raft: entering leader state: leader="Node at 127.0.0.1:17056 [Leader]"
>     writer.go:29: 2020-02-23T02:46:33.218Z [INFO]  TestTxnEndpoint_Bad_JSON.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:33.218Z [INFO]  TestTxnEndpoint_Bad_JSON.server: New leader elected: payload=Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d
>     writer.go:29: 2020-02-23T02:46:33.226Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:33.262Z [INFO]  TestTxnEndpoint_Bad_JSON.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:33.262Z [INFO]  TestTxnEndpoint_Bad_JSON.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.262Z [DEBUG] TestTxnEndpoint_Bad_JSON.server: Skipping self join check for node since the cluster is too small: node=Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d
>     writer.go:29: 2020-02-23T02:46:33.262Z [INFO]  TestTxnEndpoint_Bad_JSON.server: member joined, marking health alive: member=Node-46ab2a3b-d7bc-cabf-fba5-5e33c1ba379d
>     writer.go:29: 2020-02-23T02:46:33.428Z [INFO]  TestTxnEndpoint_Bad_JSON: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:33.428Z [INFO]  TestTxnEndpoint_Bad_JSON.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:33.428Z [DEBUG] TestTxnEndpoint_Bad_JSON.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.428Z [WARN]  TestTxnEndpoint_Bad_JSON.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.428Z [ERROR] TestTxnEndpoint_Bad_JSON.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:33.428Z [DEBUG] TestTxnEndpoint_Bad_JSON.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.430Z [WARN]  TestTxnEndpoint_Bad_JSON.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: consul server down
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: shutdown complete
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: Stopping server: protocol=DNS address=127.0.0.1:17051 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: Stopping server: protocol=DNS address=127.0.0.1:17051 network=udp
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: Stopping server: protocol=HTTP address=127.0.0.1:17052 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:33.432Z [INFO]  TestTxnEndpoint_Bad_JSON: Endpoints down
> === CONT  TestStatusPeers
> === RUN   TestTxnEndpoint_Bad_Size_Net/allowed
> --- PASS: TestStatusPeersSecondary (0.41s)
>     writer.go:29: 2020-02-23T02:46:33.352Z [WARN]  TestStatusPeersSecondary: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:33.352Z [DEBUG] TestStatusPeersSecondary.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:33.352Z [DEBUG] TestStatusPeersSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.361Z [INFO]  TestStatusPeersSecondary.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d56a2f48-5461-7db1-bb29-447d5610611b Address:127.0.0.1:17062}]"
>     writer.go:29: 2020-02-23T02:46:33.361Z [INFO]  TestStatusPeersSecondary.server.raft: entering follower state: follower="Node at 127.0.0.1:17062 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:33.362Z [INFO]  TestStatusPeersSecondary.server.serf.wan: serf: EventMemberJoin: Node-d56a2f48-5461-7db1-bb29-447d5610611b.primary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.362Z [INFO]  TestStatusPeersSecondary.server.serf.lan: serf: EventMemberJoin: Node-d56a2f48-5461-7db1-bb29-447d5610611b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.362Z [INFO]  TestStatusPeersSecondary.server: Adding LAN server: server="Node-d56a2f48-5461-7db1-bb29-447d5610611b (Addr: tcp/127.0.0.1:17062) (DC: primary)"
>     writer.go:29: 2020-02-23T02:46:33.362Z [INFO]  TestStatusPeersSecondary: Started DNS server: address=127.0.0.1:17057 network=udp
>     writer.go:29: 2020-02-23T02:46:33.362Z [INFO]  TestStatusPeersSecondary.server: Handled event for server in area: event=member-join server=Node-d56a2f48-5461-7db1-bb29-447d5610611b.primary area=wan
>     writer.go:29: 2020-02-23T02:46:33.362Z [INFO]  TestStatusPeersSecondary: Started DNS server: address=127.0.0.1:17057 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.363Z [INFO]  TestStatusPeersSecondary: Started HTTP server: address=127.0.0.1:17058 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.363Z [INFO]  TestStatusPeersSecondary: started state syncer
>     writer.go:29: 2020-02-23T02:46:33.411Z [WARN]  TestStatusPeersSecondary.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:33.411Z [INFO]  TestStatusPeersSecondary.server.raft: entering candidate state: node="Node at 127.0.0.1:17062 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:33.421Z [DEBUG] TestStatusPeersSecondary.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:33.421Z [DEBUG] TestStatusPeersSecondary.server.raft: vote granted: from=d56a2f48-5461-7db1-bb29-447d5610611b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:33.421Z [INFO]  TestStatusPeersSecondary.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:33.421Z [INFO]  TestStatusPeersSecondary.server.raft: entering leader state: leader="Node at 127.0.0.1:17062 [Leader]"
>     writer.go:29: 2020-02-23T02:46:33.422Z [INFO]  TestStatusPeersSecondary.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:33.422Z [INFO]  TestStatusPeersSecondary.server: New leader elected: payload=Node-d56a2f48-5461-7db1-bb29-447d5610611b
>     writer.go:29: 2020-02-23T02:46:33.429Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:33.443Z [INFO]  TestStatusPeersSecondary.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:33.443Z [INFO]  TestStatusPeersSecondary.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.443Z [DEBUG] TestStatusPeersSecondary.server: Skipping self join check for node since the cluster is too small: node=Node-d56a2f48-5461-7db1-bb29-447d5610611b
>     writer.go:29: 2020-02-23T02:46:33.443Z [INFO]  TestStatusPeersSecondary.server: member joined, marking health alive: member=Node-d56a2f48-5461-7db1-bb29-447d5610611b
>     writer.go:29: 2020-02-23T02:46:33.550Z [DEBUG] TestStatusPeersSecondary: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:33.552Z [INFO]  TestStatusPeersSecondary: Synced node info
>     writer.go:29: 2020-02-23T02:46:33.588Z [WARN]  TestStatusPeersSecondary: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:33.588Z [DEBUG] TestStatusPeersSecondary.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:33.588Z [DEBUG] TestStatusPeersSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.603Z [INFO]  TestStatusPeersSecondary.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:39185783-0cc2-3247-4390-fc5f1db117cf Address:127.0.0.1:17080}]"
>     writer.go:29: 2020-02-23T02:46:33.603Z [INFO]  TestStatusPeersSecondary.server.serf.wan: serf: EventMemberJoin: Node-39185783-0cc2-3247-4390-fc5f1db117cf.secondary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.603Z [INFO]  TestStatusPeersSecondary.server.serf.lan: serf: EventMemberJoin: Node-39185783-0cc2-3247-4390-fc5f1db117cf 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary: Started DNS server: address=127.0.0.1:17075 network=udp
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary.server.raft: entering follower state: follower="Node at 127.0.0.1:17080 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary.server: Adding LAN server: server="Node-39185783-0cc2-3247-4390-fc5f1db117cf (Addr: tcp/127.0.0.1:17080) (DC: secondary)"
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary.server: Handled event for server in area: event=member-join server=Node-39185783-0cc2-3247-4390-fc5f1db117cf.secondary area=wan
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary: Started DNS server: address=127.0.0.1:17075 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary: Started HTTP server: address=127.0.0.1:17076 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.604Z [INFO]  TestStatusPeersSecondary: started state syncer
>     writer.go:29: 2020-02-23T02:46:33.662Z [WARN]  TestStatusPeersSecondary.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:33.662Z [INFO]  TestStatusPeersSecondary.server.raft: entering candidate state: node="Node at 127.0.0.1:17080 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:33.665Z [DEBUG] TestStatusPeersSecondary.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:33.665Z [DEBUG] TestStatusPeersSecondary.server.raft: vote granted: from=39185783-0cc2-3247-4390-fc5f1db117cf term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:33.665Z [INFO]  TestStatusPeersSecondary.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:33.665Z [INFO]  TestStatusPeersSecondary.server.raft: entering leader state: leader="Node at 127.0.0.1:17080 [Leader]"
>     writer.go:29: 2020-02-23T02:46:33.665Z [INFO]  TestStatusPeersSecondary.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:33.665Z [INFO]  TestStatusPeersSecondary.server: New leader elected: payload=Node-39185783-0cc2-3247-4390-fc5f1db117cf
>     writer.go:29: 2020-02-23T02:46:33.672Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:33.680Z [INFO]  TestStatusPeersSecondary.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:33.680Z [INFO]  TestStatusPeersSecondary.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.680Z [DEBUG] TestStatusPeersSecondary.server: Skipping self join check for node since the cluster is too small: node=Node-39185783-0cc2-3247-4390-fc5f1db117cf
>     writer.go:29: 2020-02-23T02:46:33.680Z [INFO]  TestStatusPeersSecondary.server: member joined, marking health alive: member=Node-39185783-0cc2-3247-4390-fc5f1db117cf
>     writer.go:29: 2020-02-23T02:46:33.704Z [INFO]  TestStatusPeersSecondary: (WAN) joining: wan_addresses=[127.0.0.1:17061]
>     writer.go:29: 2020-02-23T02:46:33.704Z [DEBUG] TestStatusPeersSecondary.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:40276
>     writer.go:29: 2020-02-23T02:46:33.705Z [DEBUG] TestStatusPeersSecondary.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:17061
>     writer.go:29: 2020-02-23T02:46:33.705Z [INFO]  TestStatusPeersSecondary.server.serf.wan: serf: EventMemberJoin: Node-39185783-0cc2-3247-4390-fc5f1db117cf.secondary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.705Z [INFO]  TestStatusPeersSecondary.server: Handled event for server in area: event=member-join server=Node-39185783-0cc2-3247-4390-fc5f1db117cf.secondary area=wan
>     writer.go:29: 2020-02-23T02:46:33.705Z [INFO]  TestStatusPeersSecondary.server.serf.wan: serf: EventMemberJoin: Node-d56a2f48-5461-7db1-bb29-447d5610611b.primary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.705Z [INFO]  TestStatusPeersSecondary: (WAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:33.706Z [DEBUG] TestStatusPeersSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.706Z [INFO]  TestStatusPeersSecondary.server: Handled event for server in area: event=member-join server=Node-d56a2f48-5461-7db1-bb29-447d5610611b.primary area=wan
>     writer.go:29: 2020-02-23T02:46:33.706Z [DEBUG] TestStatusPeersSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.707Z [INFO]  TestStatusPeersSecondary: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:33.707Z [INFO]  TestStatusPeersSecondary.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:33.707Z [DEBUG] TestStatusPeersSecondary.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.707Z [WARN]  TestStatusPeersSecondary.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.707Z [ERROR] TestStatusPeersSecondary.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:33.707Z [DEBUG] TestStatusPeersSecondary.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.709Z [WARN]  TestStatusPeersSecondary.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: consul server down
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: shutdown complete
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: Stopping server: protocol=DNS address=127.0.0.1:17075 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: Stopping server: protocol=DNS address=127.0.0.1:17075 network=udp
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: Stopping server: protocol=HTTP address=127.0.0.1:17076 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: Endpoints down
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestStatusPeersSecondary.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:33.711Z [DEBUG] TestStatusPeersSecondary.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.711Z [WARN]  TestStatusPeersSecondary.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.711Z [DEBUG] TestStatusPeersSecondary.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.713Z [WARN]  TestStatusPeersSecondary.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.715Z [INFO]  TestStatusPeersSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.715Z [INFO]  TestStatusPeersSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: consul server down
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: shutdown complete
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: Stopping server: protocol=DNS address=127.0.0.1:17057 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: Stopping server: protocol=DNS address=127.0.0.1:17057 network=udp
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: Stopping server: protocol=HTTP address=127.0.0.1:17058 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:33.727Z [INFO]  TestStatusPeersSecondary: Endpoints down
> === CONT  TestStatusLeaderSecondary
> --- PASS: TestStatusPeers (0.38s)
>     writer.go:29: 2020-02-23T02:46:33.439Z [WARN]  TestStatusPeers: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:33.439Z [DEBUG] TestStatusPeers.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:33.440Z [DEBUG] TestStatusPeers.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.450Z [INFO]  TestStatusPeers.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:83654903-eab2-0901-5bec-34c4ec142b02 Address:127.0.0.1:17074}]"
>     writer.go:29: 2020-02-23T02:46:33.450Z [INFO]  TestStatusPeers.server.serf.wan: serf: EventMemberJoin: Node-83654903-eab2-0901-5bec-34c4ec142b02.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.451Z [INFO]  TestStatusPeers.server.serf.lan: serf: EventMemberJoin: Node-83654903-eab2-0901-5bec-34c4ec142b02 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.451Z [INFO]  TestStatusPeers: Started DNS server: address=127.0.0.1:17069 network=udp
>     writer.go:29: 2020-02-23T02:46:33.451Z [INFO]  TestStatusPeers.server.raft: entering follower state: follower="Node at 127.0.0.1:17074 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:33.451Z [INFO]  TestStatusPeers.server: Adding LAN server: server="Node-83654903-eab2-0901-5bec-34c4ec142b02 (Addr: tcp/127.0.0.1:17074) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:33.451Z [INFO]  TestStatusPeers.server: Handled event for server in area: event=member-join server=Node-83654903-eab2-0901-5bec-34c4ec142b02.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:33.451Z [INFO]  TestStatusPeers: Started DNS server: address=127.0.0.1:17069 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.452Z [INFO]  TestStatusPeers: Started HTTP server: address=127.0.0.1:17070 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.452Z [INFO]  TestStatusPeers: started state syncer
>     writer.go:29: 2020-02-23T02:46:33.518Z [WARN]  TestStatusPeers.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:33.518Z [INFO]  TestStatusPeers.server.raft: entering candidate state: node="Node at 127.0.0.1:17074 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:33.522Z [DEBUG] TestStatusPeers.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:33.522Z [DEBUG] TestStatusPeers.server.raft: vote granted: from=83654903-eab2-0901-5bec-34c4ec142b02 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:33.522Z [INFO]  TestStatusPeers.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:33.522Z [INFO]  TestStatusPeers.server.raft: entering leader state: leader="Node at 127.0.0.1:17074 [Leader]"
>     writer.go:29: 2020-02-23T02:46:33.522Z [INFO]  TestStatusPeers.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:33.522Z [INFO]  TestStatusPeers.server: New leader elected: payload=Node-83654903-eab2-0901-5bec-34c4ec142b02
>     writer.go:29: 2020-02-23T02:46:33.533Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:33.544Z [INFO]  TestStatusPeers.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:33.544Z [INFO]  TestStatusPeers.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.544Z [DEBUG] TestStatusPeers.server: Skipping self join check for node since the cluster is too small: node=Node-83654903-eab2-0901-5bec-34c4ec142b02
>     writer.go:29: 2020-02-23T02:46:33.544Z [INFO]  TestStatusPeers.server: member joined, marking health alive: member=Node-83654903-eab2-0901-5bec-34c4ec142b02
>     writer.go:29: 2020-02-23T02:46:33.695Z [DEBUG] TestStatusPeers: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:33.697Z [INFO]  TestStatusPeers: Synced node info
>     writer.go:29: 2020-02-23T02:46:33.697Z [DEBUG] TestStatusPeers: Node info in sync
>     writer.go:29: 2020-02-23T02:46:33.795Z [INFO]  TestStatusPeers: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:33.800Z [INFO]  TestStatusPeers.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:33.800Z [DEBUG] TestStatusPeers.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.800Z [WARN]  TestStatusPeers.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.800Z [DEBUG] TestStatusPeers.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.810Z [WARN]  TestStatusPeers.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:33.811Z [INFO]  TestStatusPeers.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:33.811Z [INFO]  TestStatusPeers: consul server down
>     writer.go:29: 2020-02-23T02:46:33.811Z [INFO]  TestStatusPeers: shutdown complete
>     writer.go:29: 2020-02-23T02:46:33.811Z [INFO]  TestStatusPeers: Stopping server: protocol=DNS address=127.0.0.1:17069 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.811Z [INFO]  TestStatusPeers: Stopping server: protocol=DNS address=127.0.0.1:17069 network=udp
>     writer.go:29: 2020-02-23T02:46:33.811Z [INFO]  TestStatusPeers: Stopping server: protocol=HTTP address=127.0.0.1:17070 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.812Z [INFO]  TestStatusPeers: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:33.812Z [INFO]  TestStatusPeers: Endpoints down
> === CONT  TestSnapshot_Options
> === RUN   TestSnapshot_Options/GET
> --- PASS: TestTxnEndpoint_Bad_Size_Item (1.06s)
>     --- PASS: TestTxnEndpoint_Bad_Size_Item/toobig (0.42s)
>         writer.go:29: 2020-02-23T02:46:32.976Z [WARN]  TestTxnEndpoint_Bad_Size_Item/toobig: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:32.976Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:32.976Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:32.987Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b5064ef5-9876-38c5-e6ab-2daa3223770b Address:127.0.0.1:17044}]"
>         writer.go:29: 2020-02-23T02:46:32.987Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: entering follower state: follower="Node at 127.0.0.1:17044 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:32.988Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.serf.wan: serf: EventMemberJoin: Node-b5064ef5-9876-38c5-e6ab-2daa3223770b.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.988Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.serf.lan: serf: EventMemberJoin: Node-b5064ef5-9876-38c5-e6ab-2daa3223770b 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.988Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server: Adding LAN server: server="Node-b5064ef5-9876-38c5-e6ab-2daa3223770b (Addr: tcp/127.0.0.1:17044) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:32.988Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Started DNS server: address=127.0.0.1:17039 network=udp
>         writer.go:29: 2020-02-23T02:46:32.989Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server: Handled event for server in area: event=member-join server=Node-b5064ef5-9876-38c5-e6ab-2daa3223770b.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:32.989Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Started DNS server: address=127.0.0.1:17039 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.989Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Started HTTP server: address=127.0.0.1:17040 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.989Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: started state syncer
>         writer.go:29: 2020-02-23T02:46:33.040Z [WARN]  TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:33.040Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: entering candidate state: node="Node at 127.0.0.1:17044 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:33.044Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:33.044Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: vote granted: from=b5064ef5-9876-38c5-e6ab-2daa3223770b term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:33.044Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:33.044Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.raft: entering leader state: leader="Node at 127.0.0.1:17044 [Leader]"
>         writer.go:29: 2020-02-23T02:46:33.044Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:33.044Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server: New leader elected: payload=Node-b5064ef5-9876-38c5-e6ab-2daa3223770b
>         writer.go:29: 2020-02-23T02:46:33.051Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:33.058Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:33.058Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.058Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.server: Skipping self join check for node since the cluster is too small: node=Node-b5064ef5-9876-38c5-e6ab-2daa3223770b
>         writer.go:29: 2020-02-23T02:46:33.058Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server: member joined, marking health alive: member=Node-b5064ef5-9876-38c5-e6ab-2daa3223770b
>         writer.go:29: 2020-02-23T02:46:33.274Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:33.277Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Synced node info
>         writer.go:29: 2020-02-23T02:46:33.299Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:33.299Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig: Node info in sync
>         writer.go:29: 2020-02-23T02:46:33.299Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig: Node info in sync
>         writer.go:29: 2020-02-23T02:46:33.380Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:33.380Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:33.380Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.380Z [WARN]  TestTxnEndpoint_Bad_Size_Item/toobig.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:33.380Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/toobig.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.382Z [WARN]  TestTxnEndpoint_Bad_Size_Item/toobig.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: consul server down
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: shutdown complete
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Stopping server: protocol=DNS address=127.0.0.1:17039 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Stopping server: protocol=DNS address=127.0.0.1:17039 network=udp
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Stopping server: protocol=HTTP address=127.0.0.1:17040 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:33.384Z [INFO]  TestTxnEndpoint_Bad_Size_Item/toobig: Endpoints down
>     --- PASS: TestTxnEndpoint_Bad_Size_Item/allowed (0.65s)
>         writer.go:29: 2020-02-23T02:46:33.395Z [WARN]  TestTxnEndpoint_Bad_Size_Item/allowed: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:33.395Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:33.395Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:33.407Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f4c65e41-6d90-e880-bedb-3909f99da0f8 Address:127.0.0.1:17068}]"
>         writer.go:29: 2020-02-23T02:46:33.407Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: entering follower state: follower="Node at 127.0.0.1:17068 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:33.407Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.serf.wan: serf: EventMemberJoin: Node-f4c65e41-6d90-e880-bedb-3909f99da0f8.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.serf.lan: serf: EventMemberJoin: Node-f4c65e41-6d90-e880-bedb-3909f99da0f8 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server: Adding LAN server: server="Node-f4c65e41-6d90-e880-bedb-3909f99da0f8 (Addr: tcp/127.0.0.1:17068) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server: Handled event for server in area: event=member-join server=Node-f4c65e41-6d90-e880-bedb-3909f99da0f8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Started DNS server: address=127.0.0.1:17063 network=udp
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Started DNS server: address=127.0.0.1:17063 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Started HTTP server: address=127.0.0.1:17064 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.408Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: started state syncer
>         writer.go:29: 2020-02-23T02:46:33.454Z [WARN]  TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:33.454Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: entering candidate state: node="Node at 127.0.0.1:17068 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:33.457Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:33.457Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: vote granted: from=f4c65e41-6d90-e880-bedb-3909f99da0f8 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:33.457Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:33.457Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.raft: entering leader state: leader="Node at 127.0.0.1:17068 [Leader]"
>         writer.go:29: 2020-02-23T02:46:33.457Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:33.457Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server: New leader elected: payload=Node-f4c65e41-6d90-e880-bedb-3909f99da0f8
>         writer.go:29: 2020-02-23T02:46:33.464Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:33.472Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:33.472Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.472Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.server: Skipping self join check for node since the cluster is too small: node=Node-f4c65e41-6d90-e880-bedb-3909f99da0f8
>         writer.go:29: 2020-02-23T02:46:33.472Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server: member joined, marking health alive: member=Node-f4c65e41-6d90-e880-bedb-3909f99da0f8
>         writer.go:29: 2020-02-23T02:46:33.485Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:33.488Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Synced node info
>         writer.go:29: 2020-02-23T02:46:33.488Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed: Node info in sync
>         writer.go:29: 2020-02-23T02:46:34.009Z [WARN]  TestTxnEndpoint_Bad_Size_Item/allowed.server.rpc: Attempting to apply large raft entry: size_in_bytes=1573029
>         writer.go:29: 2020-02-23T02:46:34.026Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.026Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.026Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.026Z [WARN]  TestTxnEndpoint_Bad_Size_Item/allowed.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.026Z [DEBUG] TestTxnEndpoint_Bad_Size_Item/allowed.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.028Z [WARN]  TestTxnEndpoint_Bad_Size_Item/allowed.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: consul server down
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Stopping server: protocol=DNS address=127.0.0.1:17063 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Stopping server: protocol=DNS address=127.0.0.1:17063 network=udp
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Stopping server: protocol=HTTP address=127.0.0.1:17064 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestTxnEndpoint_Bad_Size_Item/allowed: Endpoints down
> === CONT  TestSessionDeleteDestroy
> === RUN   TestSnapshot_Options/GET#01
> --- PASS: TestSessionDeleteDestroy (0.10s)
>     writer.go:29: 2020-02-23T02:46:34.037Z [WARN]  TestSessionDeleteDestroy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:34.037Z [DEBUG] TestSessionDeleteDestroy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:34.038Z [DEBUG] TestSessionDeleteDestroy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.049Z [INFO]  TestSessionDeleteDestroy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:81eaf82f-29a1-72bf-cd18-581a3c0ecee9 Address:127.0.0.1:17110}]"
>     writer.go:29: 2020-02-23T02:46:34.050Z [INFO]  TestSessionDeleteDestroy.server.serf.wan: serf: EventMemberJoin: Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.050Z [INFO]  TestSessionDeleteDestroy.server.serf.lan: serf: EventMemberJoin: Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.050Z [INFO]  TestSessionDeleteDestroy: Started DNS server: address=127.0.0.1:17105 network=udp
>     writer.go:29: 2020-02-23T02:46:34.050Z [INFO]  TestSessionDeleteDestroy.server.raft: entering follower state: follower="Node at 127.0.0.1:17110 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:34.051Z [INFO]  TestSessionDeleteDestroy.server: Adding LAN server: server="Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9 (Addr: tcp/127.0.0.1:17110) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:34.051Z [INFO]  TestSessionDeleteDestroy.server: Handled event for server in area: event=member-join server=Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:34.051Z [INFO]  TestSessionDeleteDestroy: Started DNS server: address=127.0.0.1:17105 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.051Z [INFO]  TestSessionDeleteDestroy: Started HTTP server: address=127.0.0.1:17106 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.051Z [INFO]  TestSessionDeleteDestroy: started state syncer
>     writer.go:29: 2020-02-23T02:46:34.087Z [WARN]  TestSessionDeleteDestroy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:34.087Z [INFO]  TestSessionDeleteDestroy.server.raft: entering candidate state: node="Node at 127.0.0.1:17110 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:34.091Z [DEBUG] TestSessionDeleteDestroy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:34.091Z [DEBUG] TestSessionDeleteDestroy.server.raft: vote granted: from=81eaf82f-29a1-72bf-cd18-581a3c0ecee9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:34.091Z [INFO]  TestSessionDeleteDestroy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:34.091Z [INFO]  TestSessionDeleteDestroy.server.raft: entering leader state: leader="Node at 127.0.0.1:17110 [Leader]"
>     writer.go:29: 2020-02-23T02:46:34.091Z [INFO]  TestSessionDeleteDestroy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:34.091Z [INFO]  TestSessionDeleteDestroy.server: New leader elected: payload=Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9
>     writer.go:29: 2020-02-23T02:46:34.099Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:34.107Z [INFO]  TestSessionDeleteDestroy.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:34.107Z [INFO]  TestSessionDeleteDestroy.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.107Z [DEBUG] TestSessionDeleteDestroy.server: Skipping self join check for node since the cluster is too small: node=Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9
>     writer.go:29: 2020-02-23T02:46:34.107Z [INFO]  TestSessionDeleteDestroy.server: member joined, marking health alive: member=Node-81eaf82f-29a1-72bf-cd18-581a3c0ecee9
>     writer.go:29: 2020-02-23T02:46:34.126Z [INFO]  TestSessionDeleteDestroy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:34.126Z [INFO]  TestSessionDeleteDestroy.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:34.126Z [DEBUG] TestSessionDeleteDestroy.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.126Z [WARN]  TestSessionDeleteDestroy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.126Z [ERROR] TestSessionDeleteDestroy.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:34.126Z [DEBUG] TestSessionDeleteDestroy.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.128Z [WARN]  TestSessionDeleteDestroy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: consul server down
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: shutdown complete
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: Stopping server: protocol=DNS address=127.0.0.1:17105 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: Stopping server: protocol=DNS address=127.0.0.1:17105 network=udp
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: Stopping server: protocol=HTTP address=127.0.0.1:17106 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:34.130Z [INFO]  TestSessionDeleteDestroy: Endpoints down
> === CONT  TestSessionGet
> === RUN   TestSessionGet/#00
> --- PASS: TestStatusLeaderSecondary (0.63s)
>     writer.go:29: 2020-02-23T02:46:33.747Z [WARN]  TestStatusLeaderSecondary: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:33.747Z [DEBUG] TestStatusLeaderSecondary.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:33.747Z [DEBUG] TestStatusLeaderSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.759Z [INFO]  TestStatusLeaderSecondary.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1d178f77-9bee-35b2-2ba0-b9f711132bb0 Address:127.0.0.1:17092}]"
>     writer.go:29: 2020-02-23T02:46:33.759Z [INFO]  TestStatusLeaderSecondary.server.serf.wan: serf: EventMemberJoin: Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0.primary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.760Z [INFO]  TestStatusLeaderSecondary.server.serf.lan: serf: EventMemberJoin: Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.760Z [INFO]  TestStatusLeaderSecondary: Started DNS server: address=127.0.0.1:17087 network=udp
>     writer.go:29: 2020-02-23T02:46:33.760Z [INFO]  TestStatusLeaderSecondary.server.raft: entering follower state: follower="Node at 127.0.0.1:17092 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:33.760Z [INFO]  TestStatusLeaderSecondary.server: Adding LAN server: server="Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0 (Addr: tcp/127.0.0.1:17092) (DC: primary)"
>     writer.go:29: 2020-02-23T02:46:33.760Z [INFO]  TestStatusLeaderSecondary.server: Handled event for server in area: event=member-join server=Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0.primary area=wan
>     writer.go:29: 2020-02-23T02:46:33.760Z [INFO]  TestStatusLeaderSecondary: Started DNS server: address=127.0.0.1:17087 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.761Z [INFO]  TestStatusLeaderSecondary: Started HTTP server: address=127.0.0.1:17088 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.761Z [INFO]  TestStatusLeaderSecondary: started state syncer
>     writer.go:29: 2020-02-23T02:46:33.807Z [WARN]  TestStatusLeaderSecondary.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:33.808Z [INFO]  TestStatusLeaderSecondary.server.raft: entering candidate state: node="Node at 127.0.0.1:17092 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:33.813Z [DEBUG] TestStatusLeaderSecondary.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:33.813Z [DEBUG] TestStatusLeaderSecondary.server.raft: vote granted: from=1d178f77-9bee-35b2-2ba0-b9f711132bb0 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:33.813Z [INFO]  TestStatusLeaderSecondary.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:33.813Z [INFO]  TestStatusLeaderSecondary.server.raft: entering leader state: leader="Node at 127.0.0.1:17092 [Leader]"
>     writer.go:29: 2020-02-23T02:46:33.813Z [INFO]  TestStatusLeaderSecondary.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:33.813Z [INFO]  TestStatusLeaderSecondary.server: New leader elected: payload=Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0
>     writer.go:29: 2020-02-23T02:46:33.839Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:33.853Z [INFO]  TestStatusLeaderSecondary.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:33.853Z [INFO]  TestStatusLeaderSecondary.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:33.853Z [DEBUG] TestStatusLeaderSecondary.server: Skipping self join check for node since the cluster is too small: node=Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0
>     writer.go:29: 2020-02-23T02:46:33.853Z [INFO]  TestStatusLeaderSecondary.server: member joined, marking health alive: member=Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0
>     writer.go:29: 2020-02-23T02:46:33.917Z [WARN]  TestStatusLeaderSecondary: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:33.917Z [DEBUG] TestStatusLeaderSecondary.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:33.917Z [DEBUG] TestStatusLeaderSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:33.928Z [INFO]  TestStatusLeaderSecondary.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5140a62f-d1a4-db2f-c7bf-15316921fb68 Address:127.0.0.1:17104}]"
>     writer.go:29: 2020-02-23T02:46:33.928Z [INFO]  TestStatusLeaderSecondary.server.raft: entering follower state: follower="Node at 127.0.0.1:17104 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:33.929Z [INFO]  TestStatusLeaderSecondary.server.serf.wan: serf: EventMemberJoin: Node-5140a62f-d1a4-db2f-c7bf-15316921fb68.secondary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.948Z [INFO]  TestStatusLeaderSecondary.server.serf.lan: serf: EventMemberJoin: Node-5140a62f-d1a4-db2f-c7bf-15316921fb68 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:33.949Z [INFO]  TestStatusLeaderSecondary: Started DNS server: address=127.0.0.1:17099 network=udp
>     writer.go:29: 2020-02-23T02:46:33.950Z [INFO]  TestStatusLeaderSecondary.server: Adding LAN server: server="Node-5140a62f-d1a4-db2f-c7bf-15316921fb68 (Addr: tcp/127.0.0.1:17104) (DC: secondary)"
>     writer.go:29: 2020-02-23T02:46:33.950Z [INFO]  TestStatusLeaderSecondary.server: Handled event for server in area: event=member-join server=Node-5140a62f-d1a4-db2f-c7bf-15316921fb68.secondary area=wan
>     writer.go:29: 2020-02-23T02:46:33.951Z [INFO]  TestStatusLeaderSecondary: Started DNS server: address=127.0.0.1:17099 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.952Z [INFO]  TestStatusLeaderSecondary: Started HTTP server: address=127.0.0.1:17100 network=tcp
>     writer.go:29: 2020-02-23T02:46:33.952Z [INFO]  TestStatusLeaderSecondary: started state syncer
>     writer.go:29: 2020-02-23T02:46:33.985Z [WARN]  TestStatusLeaderSecondary.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:33.985Z [INFO]  TestStatusLeaderSecondary.server.raft: entering candidate state: node="Node at 127.0.0.1:17104 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:33.988Z [DEBUG] TestStatusLeaderSecondary.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:33.988Z [DEBUG] TestStatusLeaderSecondary.server.raft: vote granted: from=5140a62f-d1a4-db2f-c7bf-15316921fb68 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:33.988Z [INFO]  TestStatusLeaderSecondary.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:33.988Z [INFO]  TestStatusLeaderSecondary.server.raft: entering leader state: leader="Node at 127.0.0.1:17104 [Leader]"
>     writer.go:29: 2020-02-23T02:46:33.989Z [INFO]  TestStatusLeaderSecondary.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:33.996Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:34.000Z [DEBUG] TestStatusLeaderSecondary: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:34.003Z [INFO]  TestStatusLeaderSecondary: Synced node info
>     writer.go:29: 2020-02-23T02:46:34.003Z [DEBUG] TestStatusLeaderSecondary: Node info in sync
>     writer.go:29: 2020-02-23T02:46:34.010Z [INFO]  TestStatusLeaderSecondary.server: New leader elected: payload=Node-5140a62f-d1a4-db2f-c7bf-15316921fb68
>     writer.go:29: 2020-02-23T02:46:34.011Z [INFO]  TestStatusLeaderSecondary.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:34.011Z [INFO]  TestStatusLeaderSecondary.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.011Z [DEBUG] TestStatusLeaderSecondary.server: Skipping self join check for node since the cluster is too small: node=Node-5140a62f-d1a4-db2f-c7bf-15316921fb68
>     writer.go:29: 2020-02-23T02:46:34.011Z [INFO]  TestStatusLeaderSecondary.server: member joined, marking health alive: member=Node-5140a62f-d1a4-db2f-c7bf-15316921fb68
>     writer.go:29: 2020-02-23T02:46:34.017Z [INFO]  TestStatusLeaderSecondary: Synced node info
>     writer.go:29: 2020-02-23T02:46:34.323Z [INFO]  TestStatusLeaderSecondary: (WAN) joining: wan_addresses=[127.0.0.1:17091]
>     writer.go:29: 2020-02-23T02:46:34.323Z [DEBUG] TestStatusLeaderSecondary.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:38998
>     writer.go:29: 2020-02-23T02:46:34.323Z [DEBUG] TestStatusLeaderSecondary.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:17091
>     writer.go:29: 2020-02-23T02:46:34.324Z [INFO]  TestStatusLeaderSecondary.server.serf.wan: serf: EventMemberJoin: Node-5140a62f-d1a4-db2f-c7bf-15316921fb68.secondary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.324Z [INFO]  TestStatusLeaderSecondary.server: Handled event for server in area: event=member-join server=Node-5140a62f-d1a4-db2f-c7bf-15316921fb68.secondary area=wan
>     writer.go:29: 2020-02-23T02:46:34.324Z [INFO]  TestStatusLeaderSecondary.server.serf.wan: serf: EventMemberJoin: Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0.primary 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.324Z [INFO]  TestStatusLeaderSecondary: (WAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:34.324Z [DEBUG] TestStatusLeaderSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.324Z [INFO]  TestStatusLeaderSecondary.server: Handled event for server in area: event=member-join server=Node-1d178f77-9bee-35b2-2ba0-b9f711132bb0.primary area=wan
>     writer.go:29: 2020-02-23T02:46:34.325Z [DEBUG] TestStatusLeaderSecondary.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.325Z [INFO]  TestStatusLeaderSecondary: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:34.325Z [INFO]  TestStatusLeaderSecondary.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:34.325Z [DEBUG] TestStatusLeaderSecondary.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.325Z [WARN]  TestStatusLeaderSecondary.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.326Z [DEBUG] TestStatusLeaderSecondary.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.332Z [WARN]  TestStatusLeaderSecondary.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: consul server down
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: shutdown complete
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: Stopping server: protocol=DNS address=127.0.0.1:17099 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: Stopping server: protocol=DNS address=127.0.0.1:17099 network=udp
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: Stopping server: protocol=HTTP address=127.0.0.1:17100 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: Endpoints down
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:34.339Z [INFO]  TestStatusLeaderSecondary.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:34.339Z [DEBUG] TestStatusLeaderSecondary.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.339Z [WARN]  TestStatusLeaderSecondary.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.339Z [DEBUG] TestStatusLeaderSecondary.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.350Z [WARN]  TestStatusLeaderSecondary.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.354Z [INFO]  TestStatusLeaderSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.354Z [INFO]  TestStatusLeaderSecondary.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: consul server down
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: shutdown complete
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: Stopping server: protocol=DNS address=127.0.0.1:17087 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: Stopping server: protocol=DNS address=127.0.0.1:17087 network=udp
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: Stopping server: protocol=HTTP address=127.0.0.1:17088 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:34.355Z [INFO]  TestStatusLeaderSecondary: Endpoints down
> === CONT  TestSessionCustomTTL
> --- PASS: TestTxnEndpoint_Bad_Size_Net (1.50s)
>     --- PASS: TestTxnEndpoint_Bad_Size_Net/toobig (0.78s)
>         writer.go:29: 2020-02-23T02:46:32.883Z [WARN]  TestTxnEndpoint_Bad_Size_Net/toobig: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:32.883Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:32.883Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:32.896Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a5a816a5-9e59-5219-5e4d-4e192dff2644 Address:127.0.0.1:17038}]"
>         writer.go:29: 2020-02-23T02:46:32.896Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: entering follower state: follower="Node at 127.0.0.1:17038 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:32.897Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.serf.wan: serf: EventMemberJoin: Node-a5a816a5-9e59-5219-5e4d-4e192dff2644.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.897Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.serf.lan: serf: EventMemberJoin: Node-a5a816a5-9e59-5219-5e4d-4e192dff2644 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:32.897Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server: Adding LAN server: server="Node-a5a816a5-9e59-5219-5e4d-4e192dff2644 (Addr: tcp/127.0.0.1:17038) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:32.897Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server: Handled event for server in area: event=member-join server=Node-a5a816a5-9e59-5219-5e4d-4e192dff2644.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:32.898Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Started DNS server: address=127.0.0.1:17033 network=udp
>         writer.go:29: 2020-02-23T02:46:32.898Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Started DNS server: address=127.0.0.1:17033 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.898Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Started HTTP server: address=127.0.0.1:17034 network=tcp
>         writer.go:29: 2020-02-23T02:46:32.898Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: started state syncer
>         writer.go:29: 2020-02-23T02:46:32.955Z [WARN]  TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:32.955Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: entering candidate state: node="Node at 127.0.0.1:17038 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:32.958Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:32.958Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: vote granted: from=a5a816a5-9e59-5219-5e4d-4e192dff2644 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:32.958Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:32.958Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.raft: entering leader state: leader="Node at 127.0.0.1:17038 [Leader]"
>         writer.go:29: 2020-02-23T02:46:32.958Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:32.958Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server: New leader elected: payload=Node-a5a816a5-9e59-5219-5e4d-4e192dff2644
>         writer.go:29: 2020-02-23T02:46:32.967Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:32.979Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:32.979Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:32.979Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.server: Skipping self join check for node since the cluster is too small: node=Node-a5a816a5-9e59-5219-5e4d-4e192dff2644
>         writer.go:29: 2020-02-23T02:46:32.979Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server: member joined, marking health alive: member=Node-a5a816a5-9e59-5219-5e4d-4e192dff2644
>         writer.go:29: 2020-02-23T02:46:33.341Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:33.351Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Synced node info
>         writer.go:29: 2020-02-23T02:46:33.640Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:33.640Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:33.640Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.640Z [WARN]  TestTxnEndpoint_Bad_Size_Net/toobig.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:33.640Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/toobig.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.642Z [WARN]  TestTxnEndpoint_Bad_Size_Net/toobig.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: consul server down
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: shutdown complete
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Stopping server: protocol=DNS address=127.0.0.1:17033 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Stopping server: protocol=DNS address=127.0.0.1:17033 network=udp
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Stopping server: protocol=HTTP address=127.0.0.1:17034 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:33.644Z [INFO]  TestTxnEndpoint_Bad_Size_Net/toobig: Endpoints down
>     --- PASS: TestTxnEndpoint_Bad_Size_Net/allowed (0.72s)
>         writer.go:29: 2020-02-23T02:46:33.651Z [WARN]  TestTxnEndpoint_Bad_Size_Net/allowed: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:33.651Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:33.652Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:33.661Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a249b607-d606-647b-7563-96d251b72a5a Address:127.0.0.1:17086}]"
>         writer.go:29: 2020-02-23T02:46:33.661Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: entering follower state: follower="Node at 127.0.0.1:17086 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:33.662Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.serf.wan: serf: EventMemberJoin: Node-a249b607-d606-647b-7563-96d251b72a5a.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:33.662Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.serf.lan: serf: EventMemberJoin: Node-a249b607-d606-647b-7563-96d251b72a5a 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:33.663Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server: Handled event for server in area: event=member-join server=Node-a249b607-d606-647b-7563-96d251b72a5a.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:33.663Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server: Adding LAN server: server="Node-a249b607-d606-647b-7563-96d251b72a5a (Addr: tcp/127.0.0.1:17086) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:33.663Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Started DNS server: address=127.0.0.1:17081 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.663Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Started DNS server: address=127.0.0.1:17081 network=udp
>         writer.go:29: 2020-02-23T02:46:33.664Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Started HTTP server: address=127.0.0.1:17082 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.664Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: started state syncer
>         writer.go:29: 2020-02-23T02:46:33.707Z [WARN]  TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:33.707Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: entering candidate state: node="Node at 127.0.0.1:17086 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:33.711Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:33.711Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: vote granted: from=a249b607-d606-647b-7563-96d251b72a5a term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:33.711Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.raft: entering leader state: leader="Node at 127.0.0.1:17086 [Leader]"
>         writer.go:29: 2020-02-23T02:46:33.712Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:33.712Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server: New leader elected: payload=Node-a249b607-d606-647b-7563-96d251b72a5a
>         writer.go:29: 2020-02-23T02:46:33.720Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:33.741Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Synced node info
>         writer.go:29: 2020-02-23T02:46:33.741Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed: Node info in sync
>         writer.go:29: 2020-02-23T02:46:33.742Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:33.742Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.742Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.server: Skipping self join check for node since the cluster is too small: node=Node-a249b607-d606-647b-7563-96d251b72a5a
>         writer.go:29: 2020-02-23T02:46:33.742Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server: member joined, marking health alive: member=Node-a249b607-d606-647b-7563-96d251b72a5a
>         writer.go:29: 2020-02-23T02:46:34.327Z [WARN]  TestTxnEndpoint_Bad_Size_Net/allowed.server.rpc: Attempting to apply large raft entry: size_in_bytes=4719032
>         writer.go:29: 2020-02-23T02:46:34.354Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.354Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.354Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.354Z [WARN]  TestTxnEndpoint_Bad_Size_Net/allowed.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.354Z [DEBUG] TestTxnEndpoint_Bad_Size_Net/allowed.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.359Z [WARN]  TestTxnEndpoint_Bad_Size_Net/allowed.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: consul server down
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Stopping server: protocol=DNS address=127.0.0.1:17081 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Stopping server: protocol=DNS address=127.0.0.1:17081 network=udp
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Stopping server: protocol=HTTP address=127.0.0.1:17082 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.361Z [INFO]  TestTxnEndpoint_Bad_Size_Net/allowed: Endpoints down
> === CONT  TestSessionDestroy
> === RUN   TestSnapshot_Options/GET#02
> === RUN   TestSessionGet/#01
> === RUN   TestSnapshot_Options/PUT
> --- PASS: TestSessionGet (0.47s)
>     --- PASS: TestSessionGet/#00 (0.34s)
>         writer.go:29: 2020-02-23T02:46:34.139Z [WARN]  TestSessionGet/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:34.139Z [DEBUG] TestSessionGet/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:34.139Z [DEBUG] TestSessionGet/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:34.152Z [INFO]  TestSessionGet/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a103a03a-e315-4781-baea-afe286627b7a Address:127.0.0.1:17122}]"
>         writer.go:29: 2020-02-23T02:46:34.152Z [INFO]  TestSessionGet/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:17122 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:34.153Z [INFO]  TestSessionGet/#00.server.serf.wan: serf: EventMemberJoin: Node-a103a03a-e315-4781-baea-afe286627b7a.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.154Z [INFO]  TestSessionGet/#00.server.serf.lan: serf: EventMemberJoin: Node-a103a03a-e315-4781-baea-afe286627b7a 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.154Z [INFO]  TestSessionGet/#00.server: Adding LAN server: server="Node-a103a03a-e315-4781-baea-afe286627b7a (Addr: tcp/127.0.0.1:17122) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:34.154Z [INFO]  TestSessionGet/#00.server: Handled event for server in area: event=member-join server=Node-a103a03a-e315-4781-baea-afe286627b7a.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.155Z [INFO]  TestSessionGet/#00: Started DNS server: address=127.0.0.1:17117 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.155Z [INFO]  TestSessionGet/#00: Started DNS server: address=127.0.0.1:17117 network=udp
>         writer.go:29: 2020-02-23T02:46:34.156Z [INFO]  TestSessionGet/#00: Started HTTP server: address=127.0.0.1:17118 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.156Z [INFO]  TestSessionGet/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:34.201Z [WARN]  TestSessionGet/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:34.201Z [INFO]  TestSessionGet/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:17122 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:34.219Z [DEBUG] TestSessionGet/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:34.219Z [DEBUG] TestSessionGet/#00.server.raft: vote granted: from=a103a03a-e315-4781-baea-afe286627b7a term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:34.219Z [INFO]  TestSessionGet/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:34.219Z [INFO]  TestSessionGet/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:17122 [Leader]"
>         writer.go:29: 2020-02-23T02:46:34.219Z [INFO]  TestSessionGet/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:34.219Z [INFO]  TestSessionGet/#00.server: New leader elected: payload=Node-a103a03a-e315-4781-baea-afe286627b7a
>         writer.go:29: 2020-02-23T02:46:34.227Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:34.235Z [INFO]  TestSessionGet/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:34.235Z [INFO]  TestSessionGet/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.235Z [DEBUG] TestSessionGet/#00.server: Skipping self join check for node since the cluster is too small: node=Node-a103a03a-e315-4781-baea-afe286627b7a
>         writer.go:29: 2020-02-23T02:46:34.235Z [INFO]  TestSessionGet/#00.server: member joined, marking health alive: member=Node-a103a03a-e315-4781-baea-afe286627b7a
>         writer.go:29: 2020-02-23T02:46:34.239Z [DEBUG] TestSessionGet/#00: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:34.241Z [INFO]  TestSessionGet/#00: Synced node info
>         writer.go:29: 2020-02-23T02:46:34.241Z [DEBUG] TestSessionGet/#00: Node info in sync
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSessionGet/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSessionGet/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.469Z [DEBUG] TestSessionGet/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.469Z [WARN]  TestSessionGet/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.469Z [DEBUG] TestSessionGet/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.471Z [WARN]  TestSessionGet/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.472Z [INFO]  TestSessionGet/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: Stopping server: protocol=DNS address=127.0.0.1:17117 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: Stopping server: protocol=DNS address=127.0.0.1:17117 network=udp
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: Stopping server: protocol=HTTP address=127.0.0.1:17118 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.473Z [INFO]  TestSessionGet/#00: Endpoints down
>     --- PASS: TestSessionGet/#01 (0.13s)
>         writer.go:29: 2020-02-23T02:46:34.483Z [WARN]  TestSessionGet/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:34.483Z [DEBUG] TestSessionGet/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:34.484Z [DEBUG] TestSessionGet/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:34.494Z [INFO]  TestSessionGet/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:eefb135a-6008-d7a8-13cb-96d8a009e47d Address:127.0.0.1:17146}]"
>         writer.go:29: 2020-02-23T02:46:34.495Z [INFO]  TestSessionGet/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:17146 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:34.495Z [INFO]  TestSessionGet/#01.server.serf.wan: serf: EventMemberJoin: Node-eefb135a-6008-d7a8-13cb-96d8a009e47d.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.496Z [INFO]  TestSessionGet/#01.server.serf.lan: serf: EventMemberJoin: Node-eefb135a-6008-d7a8-13cb-96d8a009e47d 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.496Z [INFO]  TestSessionGet/#01.server: Handled event for server in area: event=member-join server=Node-eefb135a-6008-d7a8-13cb-96d8a009e47d.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.496Z [INFO]  TestSessionGet/#01.server: Adding LAN server: server="Node-eefb135a-6008-d7a8-13cb-96d8a009e47d (Addr: tcp/127.0.0.1:17146) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:34.496Z [INFO]  TestSessionGet/#01: Started DNS server: address=127.0.0.1:17141 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.496Z [INFO]  TestSessionGet/#01: Started DNS server: address=127.0.0.1:17141 network=udp
>         writer.go:29: 2020-02-23T02:46:34.497Z [INFO]  TestSessionGet/#01: Started HTTP server: address=127.0.0.1:17142 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.497Z [INFO]  TestSessionGet/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:34.553Z [WARN]  TestSessionGet/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSessionGet/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:17146 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:34.556Z [DEBUG] TestSessionGet/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:34.556Z [DEBUG] TestSessionGet/#01.server.raft: vote granted: from=eefb135a-6008-d7a8-13cb-96d8a009e47d term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:34.556Z [INFO]  TestSessionGet/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:34.556Z [INFO]  TestSessionGet/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:17146 [Leader]"
>         writer.go:29: 2020-02-23T02:46:34.556Z [INFO]  TestSessionGet/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:34.556Z [INFO]  TestSessionGet/#01.server: New leader elected: payload=Node-eefb135a-6008-d7a8-13cb-96d8a009e47d
>         writer.go:29: 2020-02-23T02:46:34.564Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:34.578Z [INFO]  TestSessionGet/#01: Synced node info
>         writer.go:29: 2020-02-23T02:46:34.578Z [DEBUG] TestSessionGet/#01: Node info in sync
>         writer.go:29: 2020-02-23T02:46:34.579Z [INFO]  TestSessionGet/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:34.579Z [INFO]  TestSessionGet/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.579Z [DEBUG] TestSessionGet/#01.server: Skipping self join check for node since the cluster is too small: node=Node-eefb135a-6008-d7a8-13cb-96d8a009e47d
>         writer.go:29: 2020-02-23T02:46:34.579Z [INFO]  TestSessionGet/#01.server: member joined, marking health alive: member=Node-eefb135a-6008-d7a8-13cb-96d8a009e47d
>         writer.go:29: 2020-02-23T02:46:34.597Z [INFO]  TestSessionGet/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.597Z [INFO]  TestSessionGet/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.597Z [DEBUG] TestSessionGet/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.597Z [WARN]  TestSessionGet/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.597Z [DEBUG] TestSessionGet/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.599Z [WARN]  TestSessionGet/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: Stopping server: protocol=DNS address=127.0.0.1:17141 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: Stopping server: protocol=DNS address=127.0.0.1:17141 network=udp
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: Stopping server: protocol=HTTP address=127.0.0.1:17142 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.601Z [INFO]  TestSessionGet/#01: Endpoints down
> === CONT  TestSessionCreate_NoCheck
> --- PASS: TestSessionDestroy (0.34s)
>     writer.go:29: 2020-02-23T02:46:34.373Z [WARN]  TestSessionDestroy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:34.373Z [DEBUG] TestSessionDestroy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:34.373Z [DEBUG] TestSessionDestroy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.396Z [INFO]  TestSessionDestroy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9df3d725-f9a4-ae9e-070a-b9c57c758cd9 Address:127.0.0.1:17134}]"
>     writer.go:29: 2020-02-23T02:46:34.396Z [INFO]  TestSessionDestroy.server.raft: entering follower state: follower="Node at 127.0.0.1:17134 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:34.397Z [INFO]  TestSessionDestroy.server.serf.wan: serf: EventMemberJoin: Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.397Z [INFO]  TestSessionDestroy.server.serf.lan: serf: EventMemberJoin: Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.397Z [INFO]  TestSessionDestroy.server: Adding LAN server: server="Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9 (Addr: tcp/127.0.0.1:17134) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:34.397Z [INFO]  TestSessionDestroy.server: Handled event for server in area: event=member-join server=Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:34.398Z [INFO]  TestSessionDestroy: Started DNS server: address=127.0.0.1:17129 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.398Z [INFO]  TestSessionDestroy: Started DNS server: address=127.0.0.1:17129 network=udp
>     writer.go:29: 2020-02-23T02:46:34.398Z [INFO]  TestSessionDestroy: Started HTTP server: address=127.0.0.1:17130 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.398Z [INFO]  TestSessionDestroy: started state syncer
>     writer.go:29: 2020-02-23T02:46:34.451Z [WARN]  TestSessionDestroy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:34.451Z [INFO]  TestSessionDestroy.server.raft: entering candidate state: node="Node at 127.0.0.1:17134 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:34.455Z [DEBUG] TestSessionDestroy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:34.455Z [DEBUG] TestSessionDestroy.server.raft: vote granted: from=9df3d725-f9a4-ae9e-070a-b9c57c758cd9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:34.455Z [INFO]  TestSessionDestroy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:34.455Z [INFO]  TestSessionDestroy.server.raft: entering leader state: leader="Node at 127.0.0.1:17134 [Leader]"
>     writer.go:29: 2020-02-23T02:46:34.455Z [INFO]  TestSessionDestroy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:34.455Z [INFO]  TestSessionDestroy.server: New leader elected: payload=Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9
>     writer.go:29: 2020-02-23T02:46:34.464Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:34.479Z [INFO]  TestSessionDestroy.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:34.479Z [INFO]  TestSessionDestroy.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.480Z [DEBUG] TestSessionDestroy.server: Skipping self join check for node since the cluster is too small: node=Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9
>     writer.go:29: 2020-02-23T02:46:34.480Z [INFO]  TestSessionDestroy.server: member joined, marking health alive: member=Node-9df3d725-f9a4-ae9e-070a-b9c57c758cd9
>     writer.go:29: 2020-02-23T02:46:34.495Z [DEBUG] TestSessionDestroy: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:34.498Z [INFO]  TestSessionDestroy: Synced node info
>     writer.go:29: 2020-02-23T02:46:34.698Z [INFO]  TestSessionDestroy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:34.698Z [INFO]  TestSessionDestroy.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:34.698Z [DEBUG] TestSessionDestroy.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.698Z [WARN]  TestSessionDestroy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.698Z [DEBUG] TestSessionDestroy.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.700Z [WARN]  TestSessionDestroy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: consul server down
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: shutdown complete
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: Stopping server: protocol=DNS address=127.0.0.1:17129 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: Stopping server: protocol=DNS address=127.0.0.1:17129 network=udp
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: Stopping server: protocol=HTTP address=127.0.0.1:17130 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:34.701Z [INFO]  TestSessionDestroy: Endpoints down
> === CONT  TestSessionCreate_DefaultCheck
> --- PASS: TestSessionCreate_DefaultCheck (0.28s)
>     writer.go:29: 2020-02-23T02:46:34.709Z [WARN]  TestSessionCreate_DefaultCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:34.709Z [DEBUG] TestSessionCreate_DefaultCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:34.709Z [DEBUG] TestSessionCreate_DefaultCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.720Z [INFO]  TestSessionCreate_DefaultCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8e66ef84-9b87-2f0e-b174-05edcf9d9e1d Address:127.0.0.1:16144}]"
>     writer.go:29: 2020-02-23T02:46:34.720Z [INFO]  TestSessionCreate_DefaultCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16144 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:34.721Z [INFO]  TestSessionCreate_DefaultCheck.server.serf.wan: serf: EventMemberJoin: Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.721Z [INFO]  TestSessionCreate_DefaultCheck.server.serf.lan: serf: EventMemberJoin: Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.721Z [INFO]  TestSessionCreate_DefaultCheck.server: Adding LAN server: server="Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d (Addr: tcp/127.0.0.1:16144) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:34.721Z [INFO]  TestSessionCreate_DefaultCheck: Started DNS server: address=127.0.0.1:16139 network=udp
>     writer.go:29: 2020-02-23T02:46:34.721Z [INFO]  TestSessionCreate_DefaultCheck.server: Handled event for server in area: event=member-join server=Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:34.722Z [INFO]  TestSessionCreate_DefaultCheck: Started DNS server: address=127.0.0.1:16139 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.722Z [INFO]  TestSessionCreate_DefaultCheck: Started HTTP server: address=127.0.0.1:16140 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.722Z [INFO]  TestSessionCreate_DefaultCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:34.783Z [WARN]  TestSessionCreate_DefaultCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:34.783Z [INFO]  TestSessionCreate_DefaultCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16144 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:34.787Z [DEBUG] TestSessionCreate_DefaultCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:34.787Z [DEBUG] TestSessionCreate_DefaultCheck.server.raft: vote granted: from=8e66ef84-9b87-2f0e-b174-05edcf9d9e1d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:34.787Z [INFO]  TestSessionCreate_DefaultCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:34.787Z [INFO]  TestSessionCreate_DefaultCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16144 [Leader]"
>     writer.go:29: 2020-02-23T02:46:34.787Z [INFO]  TestSessionCreate_DefaultCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:34.787Z [INFO]  TestSessionCreate_DefaultCheck.server: New leader elected: payload=Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d
>     writer.go:29: 2020-02-23T02:46:34.795Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:34.802Z [INFO]  TestSessionCreate_DefaultCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:34.802Z [INFO]  TestSessionCreate_DefaultCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.803Z [DEBUG] TestSessionCreate_DefaultCheck.server: Skipping self join check for node since the cluster is too small: node=Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d
>     writer.go:29: 2020-02-23T02:46:34.803Z [INFO]  TestSessionCreate_DefaultCheck.server: member joined, marking health alive: member=Node-8e66ef84-9b87-2f0e-b174-05edcf9d9e1d
>     writer.go:29: 2020-02-23T02:46:34.975Z [INFO]  TestSessionCreate_DefaultCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:34.975Z [INFO]  TestSessionCreate_DefaultCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:34.975Z [DEBUG] TestSessionCreate_DefaultCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.975Z [WARN]  TestSessionCreate_DefaultCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.976Z [ERROR] TestSessionCreate_DefaultCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:34.976Z [DEBUG] TestSessionCreate_DefaultCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.977Z [WARN]  TestSessionCreate_DefaultCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: Stopping server: protocol=DNS address=127.0.0.1:16139 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: Stopping server: protocol=DNS address=127.0.0.1:16139 network=udp
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: Stopping server: protocol=HTTP address=127.0.0.1:16140 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:34.979Z [INFO]  TestSessionCreate_DefaultCheck: Endpoints down
> === CONT  TestSessionCreate_Delete
> === RUN   TestSnapshot_Options/PUT#01
> === RUN   TestSessionCreate_NoCheck/no_check_fields_should_yield_default_serfHealth
> === RUN   TestSessionCreate_NoCheck/overwrite_nodechecks_to_associate_with_no_checks
> === RUN   TestSessionCreate_NoCheck/overwrite_checks_to_associate_with_no_checks
> --- PASS: TestSessionCreate_NoCheck (0.43s)
>     writer.go:29: 2020-02-23T02:46:34.608Z [WARN]  TestSessionCreate_NoCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:34.608Z [DEBUG] TestSessionCreate_NoCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:34.608Z [DEBUG] TestSessionCreate_NoCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.618Z [INFO]  TestSessionCreate_NoCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cd60a053-fffd-895c-9348-e6c282f4a943 Address:127.0.0.1:16132}]"
>     writer.go:29: 2020-02-23T02:46:34.618Z [INFO]  TestSessionCreate_NoCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16132 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:34.618Z [INFO]  TestSessionCreate_NoCheck.server.serf.wan: serf: EventMemberJoin: Node-cd60a053-fffd-895c-9348-e6c282f4a943.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.619Z [INFO]  TestSessionCreate_NoCheck.server.serf.lan: serf: EventMemberJoin: Node-cd60a053-fffd-895c-9348-e6c282f4a943 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.619Z [INFO]  TestSessionCreate_NoCheck.server: Adding LAN server: server="Node-cd60a053-fffd-895c-9348-e6c282f4a943 (Addr: tcp/127.0.0.1:16132) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:34.619Z [INFO]  TestSessionCreate_NoCheck.server: Handled event for server in area: event=member-join server=Node-cd60a053-fffd-895c-9348-e6c282f4a943.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:34.619Z [INFO]  TestSessionCreate_NoCheck: Started DNS server: address=127.0.0.1:16127 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.619Z [INFO]  TestSessionCreate_NoCheck: Started DNS server: address=127.0.0.1:16127 network=udp
>     writer.go:29: 2020-02-23T02:46:34.620Z [INFO]  TestSessionCreate_NoCheck: Started HTTP server: address=127.0.0.1:16128 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.620Z [INFO]  TestSessionCreate_NoCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:34.661Z [WARN]  TestSessionCreate_NoCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:34.661Z [INFO]  TestSessionCreate_NoCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16132 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:34.665Z [DEBUG] TestSessionCreate_NoCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:34.665Z [DEBUG] TestSessionCreate_NoCheck.server.raft: vote granted: from=cd60a053-fffd-895c-9348-e6c282f4a943 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:34.665Z [INFO]  TestSessionCreate_NoCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:34.665Z [INFO]  TestSessionCreate_NoCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16132 [Leader]"
>     writer.go:29: 2020-02-23T02:46:34.665Z [INFO]  TestSessionCreate_NoCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:34.665Z [INFO]  TestSessionCreate_NoCheck.server: New leader elected: payload=Node-cd60a053-fffd-895c-9348-e6c282f4a943
>     writer.go:29: 2020-02-23T02:46:34.673Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:34.680Z [INFO]  TestSessionCreate_NoCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:34.680Z [INFO]  TestSessionCreate_NoCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.680Z [DEBUG] TestSessionCreate_NoCheck.server: Skipping self join check for node since the cluster is too small: node=Node-cd60a053-fffd-895c-9348-e6c282f4a943
>     writer.go:29: 2020-02-23T02:46:34.680Z [INFO]  TestSessionCreate_NoCheck.server: member joined, marking health alive: member=Node-cd60a053-fffd-895c-9348-e6c282f4a943
>     --- PASS: TestSessionCreate_NoCheck/no_check_fields_should_yield_default_serfHealth (0.00s)
>     --- PASS: TestSessionCreate_NoCheck/overwrite_nodechecks_to_associate_with_no_checks (0.00s)
>     --- PASS: TestSessionCreate_NoCheck/overwrite_checks_to_associate_with_no_checks (0.00s)
>     writer.go:29: 2020-02-23T02:46:35.031Z [INFO]  TestSessionCreate_NoCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.031Z [INFO]  TestSessionCreate_NoCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.031Z [DEBUG] TestSessionCreate_NoCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.031Z [WARN]  TestSessionCreate_NoCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.031Z [ERROR] TestSessionCreate_NoCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:35.031Z [DEBUG] TestSessionCreate_NoCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.033Z [WARN]  TestSessionCreate_NoCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: Stopping server: protocol=DNS address=127.0.0.1:16127 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: Stopping server: protocol=DNS address=127.0.0.1:16127 network=udp
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: Stopping server: protocol=HTTP address=127.0.0.1:16128 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.035Z [INFO]  TestSessionCreate_NoCheck: Endpoints down
> === CONT  TestSessionCreate_NodeChecks
> === RUN   TestSnapshot_Options/PUT#02
> --- PASS: TestSessionCustomTTL (0.95s)
>     writer.go:29: 2020-02-23T02:46:34.367Z [WARN]  TestSessionCustomTTL: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:34.367Z [DEBUG] TestSessionCustomTTL.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:34.367Z [DEBUG] TestSessionCustomTTL.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:34.381Z [INFO]  TestSessionCustomTTL.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2dd8a1ba-9083-9220-6a4e-84d174b9f079 Address:127.0.0.1:17128}]"
>     writer.go:29: 2020-02-23T02:46:34.381Z [INFO]  TestSessionCustomTTL.server.raft: entering follower state: follower="Node at 127.0.0.1:17128 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:34.382Z [INFO]  TestSessionCustomTTL.server.serf.wan: serf: EventMemberJoin: Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.382Z [INFO]  TestSessionCustomTTL.server.serf.lan: serf: EventMemberJoin: Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:34.382Z [INFO]  TestSessionCustomTTL: Started DNS server: address=127.0.0.1:17123 network=udp
>     writer.go:29: 2020-02-23T02:46:34.382Z [INFO]  TestSessionCustomTTL.server: Adding LAN server: server="Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079 (Addr: tcp/127.0.0.1:17128) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:34.382Z [INFO]  TestSessionCustomTTL.server: Handled event for server in area: event=member-join server=Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:34.383Z [INFO]  TestSessionCustomTTL: Started DNS server: address=127.0.0.1:17123 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.383Z [INFO]  TestSessionCustomTTL: Started HTTP server: address=127.0.0.1:17124 network=tcp
>     writer.go:29: 2020-02-23T02:46:34.383Z [INFO]  TestSessionCustomTTL: started state syncer
>     writer.go:29: 2020-02-23T02:46:34.442Z [WARN]  TestSessionCustomTTL.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:34.442Z [INFO]  TestSessionCustomTTL.server.raft: entering candidate state: node="Node at 127.0.0.1:17128 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:34.445Z [DEBUG] TestSessionCustomTTL.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:34.445Z [DEBUG] TestSessionCustomTTL.server.raft: vote granted: from=2dd8a1ba-9083-9220-6a4e-84d174b9f079 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:34.445Z [INFO]  TestSessionCustomTTL.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:34.445Z [INFO]  TestSessionCustomTTL.server.raft: entering leader state: leader="Node at 127.0.0.1:17128 [Leader]"
>     writer.go:29: 2020-02-23T02:46:34.445Z [INFO]  TestSessionCustomTTL.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:34.445Z [INFO]  TestSessionCustomTTL.server: New leader elected: payload=Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079
>     writer.go:29: 2020-02-23T02:46:34.452Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:34.462Z [INFO]  TestSessionCustomTTL.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:34.462Z [INFO]  TestSessionCustomTTL.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:34.462Z [DEBUG] TestSessionCustomTTL.server: Skipping self join check for node since the cluster is too small: node=Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079
>     writer.go:29: 2020-02-23T02:46:34.462Z [INFO]  TestSessionCustomTTL.server: member joined, marking health alive: member=Node-2dd8a1ba-9083-9220-6a4e-84d174b9f079
>     writer.go:29: 2020-02-23T02:46:34.504Z [DEBUG] TestSessionCustomTTL: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:34.507Z [INFO]  TestSessionCustomTTL: Synced node info
>     writer.go:29: 2020-02-23T02:46:35.062Z [DEBUG] TestSessionCustomTTL.server: Session TTL expired: session=b24a0e84-6a01-84d1-15e0-e21706bc9066
>     writer.go:29: 2020-02-23T02:46:35.301Z [INFO]  TestSessionCustomTTL: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.301Z [INFO]  TestSessionCustomTTL.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.301Z [DEBUG] TestSessionCustomTTL.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.301Z [WARN]  TestSessionCustomTTL.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.301Z [DEBUG] TestSessionCustomTTL.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.303Z [WARN]  TestSessionCustomTTL.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: consul server down
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: Stopping server: protocol=DNS address=127.0.0.1:17123 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: Stopping server: protocol=DNS address=127.0.0.1:17123 network=udp
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: Stopping server: protocol=HTTP address=127.0.0.1:17124 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.305Z [INFO]  TestSessionCustomTTL: Endpoints down
> === CONT  TestSessionCreate
> --- PASS: TestSessionCreate_NodeChecks (0.37s)
>     writer.go:29: 2020-02-23T02:46:35.060Z [WARN]  TestSessionCreate_NodeChecks: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.060Z [DEBUG] TestSessionCreate_NodeChecks.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.061Z [DEBUG] TestSessionCreate_NodeChecks.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.085Z [INFO]  TestSessionCreate_NodeChecks.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:249adf91-96b4-5d82-8b83-bea24aca028c Address:127.0.0.1:16162}]"
>     writer.go:29: 2020-02-23T02:46:35.086Z [INFO]  TestSessionCreate_NodeChecks.server.serf.wan: serf: EventMemberJoin: Node-249adf91-96b4-5d82-8b83-bea24aca028c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.086Z [INFO]  TestSessionCreate_NodeChecks.server.serf.lan: serf: EventMemberJoin: Node-249adf91-96b4-5d82-8b83-bea24aca028c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.086Z [INFO]  TestSessionCreate_NodeChecks: Started DNS server: address=127.0.0.1:16157 network=udp
>     writer.go:29: 2020-02-23T02:46:35.086Z [INFO]  TestSessionCreate_NodeChecks.server.raft: entering follower state: follower="Node at 127.0.0.1:16162 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.087Z [INFO]  TestSessionCreate_NodeChecks.server: Handled event for server in area: event=member-join server=Node-249adf91-96b4-5d82-8b83-bea24aca028c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.087Z [INFO]  TestSessionCreate_NodeChecks.server: Adding LAN server: server="Node-249adf91-96b4-5d82-8b83-bea24aca028c (Addr: tcp/127.0.0.1:16162) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.087Z [INFO]  TestSessionCreate_NodeChecks: Started DNS server: address=127.0.0.1:16157 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.087Z [INFO]  TestSessionCreate_NodeChecks: Started HTTP server: address=127.0.0.1:16158 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.087Z [INFO]  TestSessionCreate_NodeChecks: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.153Z [WARN]  TestSessionCreate_NodeChecks.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.153Z [INFO]  TestSessionCreate_NodeChecks.server.raft: entering candidate state: node="Node at 127.0.0.1:16162 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.156Z [DEBUG] TestSessionCreate_NodeChecks.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.156Z [DEBUG] TestSessionCreate_NodeChecks.server.raft: vote granted: from=249adf91-96b4-5d82-8b83-bea24aca028c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.157Z [INFO]  TestSessionCreate_NodeChecks.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.157Z [INFO]  TestSessionCreate_NodeChecks.server.raft: entering leader state: leader="Node at 127.0.0.1:16162 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.157Z [INFO]  TestSessionCreate_NodeChecks.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.157Z [INFO]  TestSessionCreate_NodeChecks.server: New leader elected: payload=Node-249adf91-96b4-5d82-8b83-bea24aca028c
>     writer.go:29: 2020-02-23T02:46:35.164Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.223Z [INFO]  TestSessionCreate_NodeChecks.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.223Z [INFO]  TestSessionCreate_NodeChecks.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.223Z [DEBUG] TestSessionCreate_NodeChecks.server: Skipping self join check for node since the cluster is too small: node=Node-249adf91-96b4-5d82-8b83-bea24aca028c
>     writer.go:29: 2020-02-23T02:46:35.223Z [INFO]  TestSessionCreate_NodeChecks.server: member joined, marking health alive: member=Node-249adf91-96b4-5d82-8b83-bea24aca028c
>     writer.go:29: 2020-02-23T02:46:35.333Z [DEBUG] TestSessionCreate_NodeChecks: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:35.380Z [INFO]  TestSessionCreate_NodeChecks: Synced node info
>     writer.go:29: 2020-02-23T02:46:35.380Z [DEBUG] TestSessionCreate_NodeChecks: Node info in sync
>     writer.go:29: 2020-02-23T02:46:35.398Z [INFO]  TestSessionCreate_NodeChecks: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.398Z [INFO]  TestSessionCreate_NodeChecks.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.398Z [DEBUG] TestSessionCreate_NodeChecks.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.398Z [WARN]  TestSessionCreate_NodeChecks.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.398Z [DEBUG] TestSessionCreate_NodeChecks.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.401Z [WARN]  TestSessionCreate_NodeChecks.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: consul server down
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: Stopping server: protocol=DNS address=127.0.0.1:16157 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: Stopping server: protocol=DNS address=127.0.0.1:16157 network=udp
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: Stopping server: protocol=HTTP address=127.0.0.1:16158 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.403Z [INFO]  TestSessionCreate_NodeChecks: Endpoints down
> === CONT  TestServiceManager_PersistService_ConfigFiles
> --- PASS: TestSessionCreate_Delete (0.43s)
>     writer.go:29: 2020-02-23T02:46:34.991Z [WARN]  TestSessionCreate_Delete: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:34.991Z [DEBUG] TestSessionCreate_Delete.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:34.991Z [DEBUG] TestSessionCreate_Delete.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.014Z [INFO]  TestSessionCreate_Delete.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fd5436c4-6725-de39-914e-1514d2e49b66 Address:127.0.0.1:16150}]"
>     writer.go:29: 2020-02-23T02:46:35.014Z [INFO]  TestSessionCreate_Delete.server.raft: entering follower state: follower="Node at 127.0.0.1:16150 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.015Z [INFO]  TestSessionCreate_Delete.server.serf.wan: serf: EventMemberJoin: Node-fd5436c4-6725-de39-914e-1514d2e49b66.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.015Z [INFO]  TestSessionCreate_Delete.server.serf.lan: serf: EventMemberJoin: Node-fd5436c4-6725-de39-914e-1514d2e49b66 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.015Z [INFO]  TestSessionCreate_Delete.server: Adding LAN server: server="Node-fd5436c4-6725-de39-914e-1514d2e49b66 (Addr: tcp/127.0.0.1:16150) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.015Z [INFO]  TestSessionCreate_Delete: Started DNS server: address=127.0.0.1:16145 network=udp
>     writer.go:29: 2020-02-23T02:46:35.015Z [INFO]  TestSessionCreate_Delete.server: Handled event for server in area: event=member-join server=Node-fd5436c4-6725-de39-914e-1514d2e49b66.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.015Z [INFO]  TestSessionCreate_Delete: Started DNS server: address=127.0.0.1:16145 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.016Z [INFO]  TestSessionCreate_Delete: Started HTTP server: address=127.0.0.1:16146 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.016Z [INFO]  TestSessionCreate_Delete: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.070Z [WARN]  TestSessionCreate_Delete.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.070Z [INFO]  TestSessionCreate_Delete.server.raft: entering candidate state: node="Node at 127.0.0.1:16150 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.073Z [DEBUG] TestSessionCreate_Delete.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.073Z [DEBUG] TestSessionCreate_Delete.server.raft: vote granted: from=fd5436c4-6725-de39-914e-1514d2e49b66 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.073Z [INFO]  TestSessionCreate_Delete.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.073Z [INFO]  TestSessionCreate_Delete.server.raft: entering leader state: leader="Node at 127.0.0.1:16150 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.073Z [INFO]  TestSessionCreate_Delete.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.073Z [INFO]  TestSessionCreate_Delete.server: New leader elected: payload=Node-fd5436c4-6725-de39-914e-1514d2e49b66
>     writer.go:29: 2020-02-23T02:46:35.084Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.092Z [INFO]  TestSessionCreate_Delete.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.092Z [INFO]  TestSessionCreate_Delete.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.092Z [DEBUG] TestSessionCreate_Delete.server: Skipping self join check for node since the cluster is too small: node=Node-fd5436c4-6725-de39-914e-1514d2e49b66
>     writer.go:29: 2020-02-23T02:46:35.092Z [INFO]  TestSessionCreate_Delete.server: member joined, marking health alive: member=Node-fd5436c4-6725-de39-914e-1514d2e49b66
>     writer.go:29: 2020-02-23T02:46:35.400Z [INFO]  TestSessionCreate_Delete: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.400Z [INFO]  TestSessionCreate_Delete.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.400Z [DEBUG] TestSessionCreate_Delete.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.400Z [WARN]  TestSessionCreate_Delete.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.400Z [ERROR] TestSessionCreate_Delete.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:35.400Z [DEBUG] TestSessionCreate_Delete.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.402Z [WARN]  TestSessionCreate_Delete.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: consul server down
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: Stopping server: protocol=DNS address=127.0.0.1:16145 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: Stopping server: protocol=DNS address=127.0.0.1:16145 network=udp
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: Stopping server: protocol=HTTP address=127.0.0.1:16146 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSessionCreate_Delete: Endpoints down
> === CONT  TestServiceManager_PersistService_API
> --- PASS: TestSnapshot_Options (1.63s)
>     --- PASS: TestSnapshot_Options/GET (0.22s)
>         writer.go:29: 2020-02-23T02:46:33.827Z [WARN]  TestSnapshot_Options/GET: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:33.827Z [WARN]  TestSnapshot_Options/GET: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:33.827Z [DEBUG] TestSnapshot_Options/GET.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:33.827Z [DEBUG] TestSnapshot_Options/GET.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:33.844Z [INFO]  TestSnapshot_Options/GET.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2038f903-19c0-207c-9893-283ed83bd3f2 Address:127.0.0.1:17098}]"
>         writer.go:29: 2020-02-23T02:46:33.844Z [INFO]  TestSnapshot_Options/GET.server.serf.wan: serf: EventMemberJoin: Node-2038f903-19c0-207c-9893-283ed83bd3f2.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:33.845Z [INFO]  TestSnapshot_Options/GET.server.serf.lan: serf: EventMemberJoin: Node-2038f903-19c0-207c-9893-283ed83bd3f2 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:33.845Z [INFO]  TestSnapshot_Options/GET: Started DNS server: address=127.0.0.1:17093 network=udp
>         writer.go:29: 2020-02-23T02:46:33.845Z [INFO]  TestSnapshot_Options/GET.server.raft: entering follower state: follower="Node at 127.0.0.1:17098 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:33.845Z [INFO]  TestSnapshot_Options/GET.server: Adding LAN server: server="Node-2038f903-19c0-207c-9893-283ed83bd3f2 (Addr: tcp/127.0.0.1:17098) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:33.845Z [INFO]  TestSnapshot_Options/GET.server: Handled event for server in area: event=member-join server=Node-2038f903-19c0-207c-9893-283ed83bd3f2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:33.845Z [INFO]  TestSnapshot_Options/GET: Started DNS server: address=127.0.0.1:17093 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.846Z [INFO]  TestSnapshot_Options/GET: Started HTTP server: address=127.0.0.1:17094 network=tcp
>         writer.go:29: 2020-02-23T02:46:33.846Z [INFO]  TestSnapshot_Options/GET: started state syncer
>         writer.go:29: 2020-02-23T02:46:33.889Z [WARN]  TestSnapshot_Options/GET.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:33.889Z [INFO]  TestSnapshot_Options/GET.server.raft: entering candidate state: node="Node at 127.0.0.1:17098 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:33.892Z [DEBUG] TestSnapshot_Options/GET.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:33.892Z [DEBUG] TestSnapshot_Options/GET.server.raft: vote granted: from=2038f903-19c0-207c-9893-283ed83bd3f2 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:33.893Z [INFO]  TestSnapshot_Options/GET.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:33.893Z [INFO]  TestSnapshot_Options/GET.server.raft: entering leader state: leader="Node at 127.0.0.1:17098 [Leader]"
>         writer.go:29: 2020-02-23T02:46:33.893Z [INFO]  TestSnapshot_Options/GET.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:33.893Z [INFO]  TestSnapshot_Options/GET.server: New leader elected: payload=Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:33.895Z [INFO]  TestSnapshot_Options/GET.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:33.895Z [INFO]  TestSnapshot_Options/GET.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:33.898Z [INFO]  TestSnapshot_Options/GET.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:33.898Z [WARN]  TestSnapshot_Options/GET.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:33.898Z [INFO]  TestSnapshot_Options/GET.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:33.898Z [WARN]  TestSnapshot_Options/GET.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:33.901Z [INFO]  TestSnapshot_Options/GET.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:33.904Z [DEBUG] TestSnapshot_Options/GET.server: transitioning out of legacy ACL mode
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server.serf.lan: serf: EventMemberUpdate: Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server.serf.wan: serf: EventMemberUpdate: Node-2038f903-19c0-207c-9893-283ed83bd3f2.dc1
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server: Handled event for server in area: event=member-update server=Node-2038f903-19c0-207c-9893-283ed83bd3f2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server.serf.lan: serf: EventMemberUpdate: Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:33.904Z [INFO]  TestSnapshot_Options/GET.server.serf.wan: serf: EventMemberUpdate: Node-2038f903-19c0-207c-9893-283ed83bd3f2.dc1
>         writer.go:29: 2020-02-23T02:46:33.919Z [INFO]  TestSnapshot_Options/GET.server: Handled event for server in area: event=member-update server=Node-2038f903-19c0-207c-9893-283ed83bd3f2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:33.921Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:33.929Z [INFO]  TestSnapshot_Options/GET.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:33.929Z [INFO]  TestSnapshot_Options/GET.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:33.929Z [DEBUG] TestSnapshot_Options/GET.server: Skipping self join check for node since the cluster is too small: node=Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:33.929Z [INFO]  TestSnapshot_Options/GET.server: member joined, marking health alive: member=Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:33.931Z [DEBUG] TestSnapshot_Options/GET.server: Skipping self join check for node since the cluster is too small: node=Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:33.931Z [DEBUG] TestSnapshot_Options/GET.server: Skipping self join check for node since the cluster is too small: node=Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.acl: dropping node from result due to ACLs: node=Node-2038f903-19c0-207c-9893-283ed83bd3f2
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestSnapshot_Options/GET: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.029Z [INFO]  TestSnapshot_Options/GET.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.029Z [WARN]  TestSnapshot_Options/GET.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.029Z [ERROR] TestSnapshot_Options/GET.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.029Z [DEBUG] TestSnapshot_Options/GET.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.031Z [WARN]  TestSnapshot_Options/GET.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: consul server down
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: Stopping server: protocol=DNS address=127.0.0.1:17093 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: Stopping server: protocol=DNS address=127.0.0.1:17093 network=udp
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: Stopping server: protocol=HTTP address=127.0.0.1:17094 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.032Z [INFO]  TestSnapshot_Options/GET: Endpoints down
>     --- PASS: TestSnapshot_Options/GET#01 (0.35s)
>         writer.go:29: 2020-02-23T02:46:34.040Z [WARN]  TestSnapshot_Options/GET#01: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:34.040Z [WARN]  TestSnapshot_Options/GET#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:34.041Z [DEBUG] TestSnapshot_Options/GET#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:34.041Z [DEBUG] TestSnapshot_Options/GET#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:34.053Z [INFO]  TestSnapshot_Options/GET#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8 Address:127.0.0.1:17116}]"
>         writer.go:29: 2020-02-23T02:46:34.054Z [INFO]  TestSnapshot_Options/GET#01.server.serf.wan: serf: EventMemberJoin: Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.054Z [INFO]  TestSnapshot_Options/GET#01.server.serf.lan: serf: EventMemberJoin: Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.054Z [INFO]  TestSnapshot_Options/GET#01: Started DNS server: address=127.0.0.1:17111 network=udp
>         writer.go:29: 2020-02-23T02:46:34.054Z [INFO]  TestSnapshot_Options/GET#01.server.raft: entering follower state: follower="Node at 127.0.0.1:17116 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:34.055Z [INFO]  TestSnapshot_Options/GET#01.server: Adding LAN server: server="Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8 (Addr: tcp/127.0.0.1:17116) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:34.055Z [INFO]  TestSnapshot_Options/GET#01.server: Handled event for server in area: event=member-join server=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.055Z [INFO]  TestSnapshot_Options/GET#01: Started DNS server: address=127.0.0.1:17111 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.055Z [INFO]  TestSnapshot_Options/GET#01: Started HTTP server: address=127.0.0.1:17112 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.055Z [INFO]  TestSnapshot_Options/GET#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:34.121Z [WARN]  TestSnapshot_Options/GET#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:34.121Z [INFO]  TestSnapshot_Options/GET#01.server.raft: entering candidate state: node="Node at 127.0.0.1:17116 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:34.126Z [DEBUG] TestSnapshot_Options/GET#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:34.126Z [DEBUG] TestSnapshot_Options/GET#01.server.raft: vote granted: from=9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:34.126Z [INFO]  TestSnapshot_Options/GET#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:34.126Z [INFO]  TestSnapshot_Options/GET#01.server.raft: entering leader state: leader="Node at 127.0.0.1:17116 [Leader]"
>         writer.go:29: 2020-02-23T02:46:34.126Z [INFO]  TestSnapshot_Options/GET#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:34.126Z [INFO]  TestSnapshot_Options/GET#01.server: New leader elected: payload=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8
>         writer.go:29: 2020-02-23T02:46:34.135Z [INFO]  TestSnapshot_Options/GET#01.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:34.136Z [ERROR] TestSnapshot_Options/GET#01.anti_entropy: failed to sync remote state: error="ACL not found"
>         writer.go:29: 2020-02-23T02:46:34.136Z [INFO]  TestSnapshot_Options/GET#01.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:34.136Z [WARN]  TestSnapshot_Options/GET#01.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:34.139Z [INFO]  TestSnapshot_Options/GET#01.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:34.143Z [INFO]  TestSnapshot_Options/GET#01.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:34.143Z [INFO]  TestSnapshot_Options/GET#01.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.143Z [INFO]  TestSnapshot_Options/GET#01.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.143Z [INFO]  TestSnapshot_Options/GET#01.server.serf.lan: serf: EventMemberUpdate: Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8
>         writer.go:29: 2020-02-23T02:46:34.143Z [INFO]  TestSnapshot_Options/GET#01.server.serf.wan: serf: EventMemberUpdate: Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8.dc1
>         writer.go:29: 2020-02-23T02:46:34.143Z [INFO]  TestSnapshot_Options/GET#01.server: Handled event for server in area: event=member-update server=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.148Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:34.155Z [INFO]  TestSnapshot_Options/GET#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:34.155Z [INFO]  TestSnapshot_Options/GET#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.155Z [DEBUG] TestSnapshot_Options/GET#01.server: Skipping self join check for node since the cluster is too small: node=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8
>         writer.go:29: 2020-02-23T02:46:34.155Z [INFO]  TestSnapshot_Options/GET#01.server: member joined, marking health alive: member=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8
>         writer.go:29: 2020-02-23T02:46:34.158Z [DEBUG] TestSnapshot_Options/GET#01.server: Skipping self join check for node since the cluster is too small: node=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.acl: dropping node from result due to ACLs: node=Node-9029c2cd-88a5-ad24-5c1c-e4e3d1ba8ab8
>         writer.go:29: 2020-02-23T02:46:34.379Z [INFO]  TestSnapshot_Options/GET#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.379Z [INFO]  TestSnapshot_Options/GET#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.379Z [WARN]  TestSnapshot_Options/GET#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.379Z [DEBUG] TestSnapshot_Options/GET#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.381Z [WARN]  TestSnapshot_Options/GET#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: consul server down
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: Stopping server: protocol=DNS address=127.0.0.1:17111 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: Stopping server: protocol=DNS address=127.0.0.1:17111 network=udp
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: Stopping server: protocol=HTTP address=127.0.0.1:17112 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.384Z [INFO]  TestSnapshot_Options/GET#01: Endpoints down
>     --- PASS: TestSnapshot_Options/GET#02 (0.17s)
>         writer.go:29: 2020-02-23T02:46:34.392Z [WARN]  TestSnapshot_Options/GET#02: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:34.392Z [WARN]  TestSnapshot_Options/GET#02: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:34.392Z [DEBUG] TestSnapshot_Options/GET#02.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:34.392Z [DEBUG] TestSnapshot_Options/GET#02.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:34.402Z [INFO]  TestSnapshot_Options/GET#02.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:92b95c0e-efde-4552-7047-be3f4a3b2896 Address:127.0.0.1:17140}]"
>         writer.go:29: 2020-02-23T02:46:34.402Z [INFO]  TestSnapshot_Options/GET#02.server.raft: entering follower state: follower="Node at 127.0.0.1:17140 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:34.403Z [INFO]  TestSnapshot_Options/GET#02.server.serf.wan: serf: EventMemberJoin: Node-92b95c0e-efde-4552-7047-be3f4a3b2896.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.403Z [INFO]  TestSnapshot_Options/GET#02.server.serf.lan: serf: EventMemberJoin: Node-92b95c0e-efde-4552-7047-be3f4a3b2896 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.403Z [INFO]  TestSnapshot_Options/GET#02.server: Adding LAN server: server="Node-92b95c0e-efde-4552-7047-be3f4a3b2896 (Addr: tcp/127.0.0.1:17140) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:34.403Z [INFO]  TestSnapshot_Options/GET#02.server: Handled event for server in area: event=member-join server=Node-92b95c0e-efde-4552-7047-be3f4a3b2896.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.404Z [INFO]  TestSnapshot_Options/GET#02: Started DNS server: address=127.0.0.1:17135 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.404Z [INFO]  TestSnapshot_Options/GET#02: Started DNS server: address=127.0.0.1:17135 network=udp
>         writer.go:29: 2020-02-23T02:46:34.404Z [INFO]  TestSnapshot_Options/GET#02: Started HTTP server: address=127.0.0.1:17136 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.404Z [INFO]  TestSnapshot_Options/GET#02: started state syncer
>         writer.go:29: 2020-02-23T02:46:34.453Z [WARN]  TestSnapshot_Options/GET#02.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:34.453Z [INFO]  TestSnapshot_Options/GET#02.server.raft: entering candidate state: node="Node at 127.0.0.1:17140 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:34.457Z [DEBUG] TestSnapshot_Options/GET#02.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:34.457Z [DEBUG] TestSnapshot_Options/GET#02.server.raft: vote granted: from=92b95c0e-efde-4552-7047-be3f4a3b2896 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:34.457Z [INFO]  TestSnapshot_Options/GET#02.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:34.457Z [INFO]  TestSnapshot_Options/GET#02.server.raft: entering leader state: leader="Node at 127.0.0.1:17140 [Leader]"
>         writer.go:29: 2020-02-23T02:46:34.458Z [INFO]  TestSnapshot_Options/GET#02.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:34.458Z [INFO]  TestSnapshot_Options/GET#02.server: New leader elected: payload=Node-92b95c0e-efde-4552-7047-be3f4a3b2896
>         writer.go:29: 2020-02-23T02:46:34.460Z [INFO]  TestSnapshot_Options/GET#02.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:34.461Z [INFO]  TestSnapshot_Options/GET#02.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:34.461Z [WARN]  TestSnapshot_Options/GET#02.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:34.464Z [INFO]  TestSnapshot_Options/GET#02.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:34.468Z [INFO]  TestSnapshot_Options/GET#02.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSnapshot_Options/GET#02.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSnapshot_Options/GET#02.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSnapshot_Options/GET#02.server.serf.lan: serf: EventMemberUpdate: Node-92b95c0e-efde-4552-7047-be3f4a3b2896
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSnapshot_Options/GET#02.server.serf.wan: serf: EventMemberUpdate: Node-92b95c0e-efde-4552-7047-be3f4a3b2896.dc1
>         writer.go:29: 2020-02-23T02:46:34.469Z [INFO]  TestSnapshot_Options/GET#02.server: Handled event for server in area: event=member-update server=Node-92b95c0e-efde-4552-7047-be3f4a3b2896.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.478Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:34.485Z [INFO]  TestSnapshot_Options/GET#02.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:34.485Z [INFO]  TestSnapshot_Options/GET#02.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.485Z [DEBUG] TestSnapshot_Options/GET#02.server: Skipping self join check for node since the cluster is too small: node=Node-92b95c0e-efde-4552-7047-be3f4a3b2896
>         writer.go:29: 2020-02-23T02:46:34.485Z [INFO]  TestSnapshot_Options/GET#02.server: member joined, marking health alive: member=Node-92b95c0e-efde-4552-7047-be3f4a3b2896
>         writer.go:29: 2020-02-23T02:46:34.489Z [DEBUG] TestSnapshot_Options/GET#02.server: Skipping self join check for node since the cluster is too small: node=Node-92b95c0e-efde-4552-7047-be3f4a3b2896
>         writer.go:29: 2020-02-23T02:46:34.538Z [DEBUG] TestSnapshot_Options/GET#02.acl: dropping node from result due to ACLs: node=Node-92b95c0e-efde-4552-7047-be3f4a3b2896
>         writer.go:29: 2020-02-23T02:46:34.539Z [INFO]  TestSnapshot_Options/GET#02.server.fsm: snapshot created: duration=39.736µs
>         writer.go:29: 2020-02-23T02:46:34.539Z [INFO]  TestSnapshot_Options/GET#02.server.raft: starting snapshot up to: index=13
>         writer.go:29: 2020-02-23T02:46:34.539Z [INFO]  snapshot: creating new snapshot: path=/tmp/TestSnapshot_Options_GET#02-agent422070428/raft/snapshots/2-13-1582425994539.tmp
>         writer.go:29: 2020-02-23T02:46:34.546Z [INFO]  TestSnapshot_Options/GET#02.server.raft: snapshot complete up to: index=13
>         writer.go:29: 2020-02-23T02:46:34.549Z [INFO]  TestSnapshot_Options/GET#02: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.549Z [INFO]  TestSnapshot_Options/GET#02.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.549Z [DEBUG] TestSnapshot_Options/GET#02.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.549Z [DEBUG] TestSnapshot_Options/GET#02.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.549Z [DEBUG] TestSnapshot_Options/GET#02.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.549Z [WARN]  TestSnapshot_Options/GET#02.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.549Z [ERROR] TestSnapshot_Options/GET#02.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:34.549Z [DEBUG] TestSnapshot_Options/GET#02.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.549Z [DEBUG] TestSnapshot_Options/GET#02.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.549Z [DEBUG] TestSnapshot_Options/GET#02.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.551Z [WARN]  TestSnapshot_Options/GET#02.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.552Z [INFO]  TestSnapshot_Options/GET#02.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: consul server down
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: Stopping server: protocol=DNS address=127.0.0.1:17135 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: Stopping server: protocol=DNS address=127.0.0.1:17135 network=udp
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: Stopping server: protocol=HTTP address=127.0.0.1:17136 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.553Z [INFO]  TestSnapshot_Options/GET#02: Endpoints down
>     --- PASS: TestSnapshot_Options/PUT (0.44s)
>         writer.go:29: 2020-02-23T02:46:34.561Z [WARN]  TestSnapshot_Options/PUT: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:34.561Z [WARN]  TestSnapshot_Options/PUT: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:34.561Z [DEBUG] TestSnapshot_Options/PUT.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:34.561Z [DEBUG] TestSnapshot_Options/PUT.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:34.575Z [INFO]  TestSnapshot_Options/PUT.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:53629e49-7d45-1fbf-21b3-6101ad68cfb0 Address:127.0.0.1:16138}]"
>         writer.go:29: 2020-02-23T02:46:34.576Z [INFO]  TestSnapshot_Options/PUT.server.serf.wan: serf: EventMemberJoin: Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.576Z [INFO]  TestSnapshot_Options/PUT.server.serf.lan: serf: EventMemberJoin: Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:34.576Z [INFO]  TestSnapshot_Options/PUT: Started DNS server: address=127.0.0.1:16133 network=udp
>         writer.go:29: 2020-02-23T02:46:34.576Z [INFO]  TestSnapshot_Options/PUT.server.raft: entering follower state: follower="Node at 127.0.0.1:16138 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:34.577Z [INFO]  TestSnapshot_Options/PUT.server: Adding LAN server: server="Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0 (Addr: tcp/127.0.0.1:16138) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:34.577Z [INFO]  TestSnapshot_Options/PUT.server: Handled event for server in area: event=member-join server=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.577Z [INFO]  TestSnapshot_Options/PUT: Started DNS server: address=127.0.0.1:16133 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.577Z [INFO]  TestSnapshot_Options/PUT: Started HTTP server: address=127.0.0.1:16134 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.577Z [INFO]  TestSnapshot_Options/PUT: started state syncer
>         writer.go:29: 2020-02-23T02:46:34.643Z [WARN]  TestSnapshot_Options/PUT.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:34.643Z [INFO]  TestSnapshot_Options/PUT.server.raft: entering candidate state: node="Node at 127.0.0.1:16138 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:34.647Z [DEBUG] TestSnapshot_Options/PUT.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:34.647Z [DEBUG] TestSnapshot_Options/PUT.server.raft: vote granted: from=53629e49-7d45-1fbf-21b3-6101ad68cfb0 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:34.647Z [INFO]  TestSnapshot_Options/PUT.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:34.647Z [INFO]  TestSnapshot_Options/PUT.server.raft: entering leader state: leader="Node at 127.0.0.1:16138 [Leader]"
>         writer.go:29: 2020-02-23T02:46:34.647Z [INFO]  TestSnapshot_Options/PUT.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:34.647Z [INFO]  TestSnapshot_Options/PUT.server: New leader elected: payload=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0
>         writer.go:29: 2020-02-23T02:46:34.649Z [INFO]  TestSnapshot_Options/PUT.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:34.650Z [INFO]  TestSnapshot_Options/PUT.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:34.650Z [WARN]  TestSnapshot_Options/PUT.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:34.653Z [INFO]  TestSnapshot_Options/PUT.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:34.656Z [INFO]  TestSnapshot_Options/PUT.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:34.656Z [INFO]  TestSnapshot_Options/PUT.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.656Z [INFO]  TestSnapshot_Options/PUT.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.656Z [INFO]  TestSnapshot_Options/PUT.server.serf.lan: serf: EventMemberUpdate: Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0
>         writer.go:29: 2020-02-23T02:46:34.656Z [INFO]  TestSnapshot_Options/PUT.server.serf.wan: serf: EventMemberUpdate: Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0.dc1
>         writer.go:29: 2020-02-23T02:46:34.657Z [INFO]  TestSnapshot_Options/PUT.server: Handled event for server in area: event=member-update server=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:34.660Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:34.667Z [INFO]  TestSnapshot_Options/PUT.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:34.667Z [INFO]  TestSnapshot_Options/PUT.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.667Z [DEBUG] TestSnapshot_Options/PUT.server: Skipping self join check for node since the cluster is too small: node=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0
>         writer.go:29: 2020-02-23T02:46:34.667Z [INFO]  TestSnapshot_Options/PUT.server: member joined, marking health alive: member=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0
>         writer.go:29: 2020-02-23T02:46:34.670Z [DEBUG] TestSnapshot_Options/PUT.server: Skipping self join check for node since the cluster is too small: node=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0
>         writer.go:29: 2020-02-23T02:46:34.896Z [DEBUG] TestSnapshot_Options/PUT: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:34.899Z [INFO]  TestSnapshot_Options/PUT: Synced node info
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.acl: dropping node from result due to ACLs: node=Node-53629e49-7d45-1fbf-21b3-6101ad68cfb0
>         writer.go:29: 2020-02-23T02:46:34.987Z [INFO]  TestSnapshot_Options/PUT: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:34.987Z [INFO]  TestSnapshot_Options/PUT.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.987Z [WARN]  TestSnapshot_Options/PUT.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:34.987Z [DEBUG] TestSnapshot_Options/PUT.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:34.989Z [WARN]  TestSnapshot_Options/PUT.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:34.990Z [INFO]  TestSnapshot_Options/PUT.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: consul server down
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: shutdown complete
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: Stopping server: protocol=DNS address=127.0.0.1:16133 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: Stopping server: protocol=DNS address=127.0.0.1:16133 network=udp
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: Stopping server: protocol=HTTP address=127.0.0.1:16134 network=tcp
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:34.991Z [INFO]  TestSnapshot_Options/PUT: Endpoints down
>     --- PASS: TestSnapshot_Options/PUT#01 (0.30s)
>         writer.go:29: 2020-02-23T02:46:34.998Z [WARN]  TestSnapshot_Options/PUT#01: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:34.998Z [WARN]  TestSnapshot_Options/PUT#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:34.998Z [DEBUG] TestSnapshot_Options/PUT#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:34.998Z [DEBUG] TestSnapshot_Options/PUT#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:35.011Z [INFO]  TestSnapshot_Options/PUT#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1f062e4e-7bff-2d6a-227b-b87126c2fc38 Address:127.0.0.1:16156}]"
>         writer.go:29: 2020-02-23T02:46:35.012Z [INFO]  TestSnapshot_Options/PUT#01.server.serf.wan: serf: EventMemberJoin: Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:35.012Z [INFO]  TestSnapshot_Options/PUT#01.server.serf.lan: serf: EventMemberJoin: Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:35.012Z [INFO]  TestSnapshot_Options/PUT#01: Started DNS server: address=127.0.0.1:16151 network=udp
>         writer.go:29: 2020-02-23T02:46:35.012Z [INFO]  TestSnapshot_Options/PUT#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16156 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:35.013Z [INFO]  TestSnapshot_Options/PUT#01.server: Adding LAN server: server="Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38 (Addr: tcp/127.0.0.1:16156) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:35.013Z [INFO]  TestSnapshot_Options/PUT#01.server: Handled event for server in area: event=member-join server=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:35.013Z [INFO]  TestSnapshot_Options/PUT#01: Started DNS server: address=127.0.0.1:16151 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.013Z [INFO]  TestSnapshot_Options/PUT#01: Started HTTP server: address=127.0.0.1:16152 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.013Z [INFO]  TestSnapshot_Options/PUT#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:35.087Z [WARN]  TestSnapshot_Options/PUT#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:35.087Z [INFO]  TestSnapshot_Options/PUT#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16156 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:35.091Z [DEBUG] TestSnapshot_Options/PUT#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:35.091Z [DEBUG] TestSnapshot_Options/PUT#01.server.raft: vote granted: from=1f062e4e-7bff-2d6a-227b-b87126c2fc38 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:35.091Z [INFO]  TestSnapshot_Options/PUT#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:35.091Z [INFO]  TestSnapshot_Options/PUT#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16156 [Leader]"
>         writer.go:29: 2020-02-23T02:46:35.091Z [INFO]  TestSnapshot_Options/PUT#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:35.091Z [INFO]  TestSnapshot_Options/PUT#01.server: New leader elected: payload=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38
>         writer.go:29: 2020-02-23T02:46:35.095Z [INFO]  TestSnapshot_Options/PUT#01.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:35.096Z [INFO]  TestSnapshot_Options/PUT#01.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:35.096Z [WARN]  TestSnapshot_Options/PUT#01.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:35.098Z [INFO]  TestSnapshot_Options/PUT#01.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:35.102Z [INFO]  TestSnapshot_Options/PUT#01.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:35.102Z [INFO]  TestSnapshot_Options/PUT#01.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:35.102Z [INFO]  TestSnapshot_Options/PUT#01.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:35.102Z [INFO]  TestSnapshot_Options/PUT#01.server.serf.lan: serf: EventMemberUpdate: Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38
>         writer.go:29: 2020-02-23T02:46:35.102Z [INFO]  TestSnapshot_Options/PUT#01.server.serf.wan: serf: EventMemberUpdate: Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38.dc1
>         writer.go:29: 2020-02-23T02:46:35.102Z [INFO]  TestSnapshot_Options/PUT#01.server: Handled event for server in area: event=member-update server=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:35.106Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:35.113Z [INFO]  TestSnapshot_Options/PUT#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:35.113Z [INFO]  TestSnapshot_Options/PUT#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:35.113Z [DEBUG] TestSnapshot_Options/PUT#01.server: Skipping self join check for node since the cluster is too small: node=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38
>         writer.go:29: 2020-02-23T02:46:35.113Z [INFO]  TestSnapshot_Options/PUT#01.server: member joined, marking health alive: member=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38
>         writer.go:29: 2020-02-23T02:46:35.115Z [DEBUG] TestSnapshot_Options/PUT#01.server: Skipping self join check for node since the cluster is too small: node=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38
>         writer.go:29: 2020-02-23T02:46:35.192Z [DEBUG] TestSnapshot_Options/PUT#01: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:35.243Z [INFO]  TestSnapshot_Options/PUT#01: Synced node info
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.acl: dropping node from result due to ACLs: node=Node-1f062e4e-7bff-2d6a-227b-b87126c2fc38
>         writer.go:29: 2020-02-23T02:46:35.282Z [INFO]  TestSnapshot_Options/PUT#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:35.282Z [INFO]  TestSnapshot_Options/PUT#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:35.282Z [WARN]  TestSnapshot_Options/PUT#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:35.282Z [DEBUG] TestSnapshot_Options/PUT#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:35.284Z [WARN]  TestSnapshot_Options/PUT#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: consul server down
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: Stopping server: protocol=DNS address=127.0.0.1:16151 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: Stopping server: protocol=DNS address=127.0.0.1:16151 network=udp
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: Stopping server: protocol=HTTP address=127.0.0.1:16152 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:35.286Z [INFO]  TestSnapshot_Options/PUT#01: Endpoints down
>     --- PASS: TestSnapshot_Options/PUT#02 (0.16s)
>         writer.go:29: 2020-02-23T02:46:35.294Z [WARN]  TestSnapshot_Options/PUT#02: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:35.294Z [WARN]  TestSnapshot_Options/PUT#02: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:35.295Z [DEBUG] TestSnapshot_Options/PUT#02.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:35.295Z [DEBUG] TestSnapshot_Options/PUT#02.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:35.327Z [INFO]  TestSnapshot_Options/PUT#02.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:76738a1d-cad1-be57-7245-638008cbd018 Address:127.0.0.1:16168}]"
>         writer.go:29: 2020-02-23T02:46:35.327Z [INFO]  TestSnapshot_Options/PUT#02.server.raft: entering follower state: follower="Node at 127.0.0.1:16168 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:35.340Z [INFO]  TestSnapshot_Options/PUT#02.server.serf.wan: serf: EventMemberJoin: Node-76738a1d-cad1-be57-7245-638008cbd018.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:35.341Z [INFO]  TestSnapshot_Options/PUT#02.server.serf.lan: serf: EventMemberJoin: Node-76738a1d-cad1-be57-7245-638008cbd018 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:35.341Z [INFO]  TestSnapshot_Options/PUT#02.server: Adding LAN server: server="Node-76738a1d-cad1-be57-7245-638008cbd018 (Addr: tcp/127.0.0.1:16168) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:35.341Z [INFO]  TestSnapshot_Options/PUT#02.server: Handled event for server in area: event=member-join server=Node-76738a1d-cad1-be57-7245-638008cbd018.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:35.341Z [INFO]  TestSnapshot_Options/PUT#02: Started DNS server: address=127.0.0.1:16163 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.341Z [INFO]  TestSnapshot_Options/PUT#02: Started DNS server: address=127.0.0.1:16163 network=udp
>         writer.go:29: 2020-02-23T02:46:35.342Z [INFO]  TestSnapshot_Options/PUT#02: Started HTTP server: address=127.0.0.1:16164 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.342Z [INFO]  TestSnapshot_Options/PUT#02: started state syncer
>         writer.go:29: 2020-02-23T02:46:35.380Z [WARN]  TestSnapshot_Options/PUT#02.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:35.380Z [INFO]  TestSnapshot_Options/PUT#02.server.raft: entering candidate state: node="Node at 127.0.0.1:16168 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:35.399Z [DEBUG] TestSnapshot_Options/PUT#02.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:35.399Z [DEBUG] TestSnapshot_Options/PUT#02.server.raft: vote granted: from=76738a1d-cad1-be57-7245-638008cbd018 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:35.399Z [INFO]  TestSnapshot_Options/PUT#02.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:35.399Z [INFO]  TestSnapshot_Options/PUT#02.server.raft: entering leader state: leader="Node at 127.0.0.1:16168 [Leader]"
>         writer.go:29: 2020-02-23T02:46:35.399Z [INFO]  TestSnapshot_Options/PUT#02.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:35.399Z [INFO]  TestSnapshot_Options/PUT#02.server: New leader elected: payload=Node-76738a1d-cad1-be57-7245-638008cbd018
>         writer.go:29: 2020-02-23T02:46:35.402Z [INFO]  TestSnapshot_Options/PUT#02.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:35.410Z [INFO]  TestSnapshot_Options/PUT#02.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:35.410Z [WARN]  TestSnapshot_Options/PUT#02.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:35.413Z [INFO]  TestSnapshot_Options/PUT#02.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:35.416Z [INFO]  TestSnapshot_Options/PUT#02.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:35.416Z [INFO]  TestSnapshot_Options/PUT#02.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:35.416Z [INFO]  TestSnapshot_Options/PUT#02.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:35.416Z [INFO]  TestSnapshot_Options/PUT#02.server.serf.lan: serf: EventMemberUpdate: Node-76738a1d-cad1-be57-7245-638008cbd018
>         writer.go:29: 2020-02-23T02:46:35.416Z [INFO]  TestSnapshot_Options/PUT#02.server.serf.wan: serf: EventMemberUpdate: Node-76738a1d-cad1-be57-7245-638008cbd018.dc1
>         writer.go:29: 2020-02-23T02:46:35.417Z [INFO]  TestSnapshot_Options/PUT#02.server: Handled event for server in area: event=member-update server=Node-76738a1d-cad1-be57-7245-638008cbd018.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:35.421Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:35.435Z [INFO]  TestSnapshot_Options/PUT#02: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:35.435Z [INFO]  TestSnapshot_Options/PUT#02.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:35.435Z [DEBUG] TestSnapshot_Options/PUT#02.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:35.435Z [DEBUG] TestSnapshot_Options/PUT#02.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:35.435Z [WARN]  TestSnapshot_Options/PUT#02.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:35.435Z [ERROR] TestSnapshot_Options/PUT#02.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:35.435Z [DEBUG] TestSnapshot_Options/PUT#02.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:35.435Z [DEBUG] TestSnapshot_Options/PUT#02.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:35.438Z [WARN]  TestSnapshot_Options/PUT#02.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:35.440Z [INFO]  TestSnapshot_Options/PUT#02.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:35.440Z [INFO]  TestSnapshot_Options/PUT#02.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:35.440Z [DEBUG] TestSnapshot_Options/PUT#02.server: Skipping self join check for node since the cluster is too small: node=Node-76738a1d-cad1-be57-7245-638008cbd018
>         writer.go:29: 2020-02-23T02:46:35.440Z [INFO]  TestSnapshot_Options/PUT#02.server: member joined, marking health alive: member=Node-76738a1d-cad1-be57-7245-638008cbd018
>         writer.go:29: 2020-02-23T02:46:35.441Z [INFO]  TestSnapshot_Options/PUT#02.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: consul server down
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: shutdown complete
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: Stopping server: protocol=DNS address=127.0.0.1:16163 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: Stopping server: protocol=DNS address=127.0.0.1:16163 network=udp
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: Stopping server: protocol=HTTP address=127.0.0.1:16164 network=tcp
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:35.444Z [INFO]  TestSnapshot_Options/PUT#02: Endpoints down
> === CONT  TestAgent_ServiceHTTPChecksNotification
> --- PASS: TestSessionCreate (0.21s)
>     writer.go:29: 2020-02-23T02:46:35.312Z [WARN]  TestSessionCreate: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.312Z [DEBUG] TestSessionCreate.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.313Z [DEBUG] TestSessionCreate.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.419Z [INFO]  TestSessionCreate.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:555f33ae-eb9b-b807-eca9-46e21493b0cf Address:127.0.0.1:16174}]"
>     writer.go:29: 2020-02-23T02:46:35.419Z [INFO]  TestSessionCreate.server.raft: entering follower state: follower="Node at 127.0.0.1:16174 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.419Z [INFO]  TestSessionCreate.server.serf.wan: serf: EventMemberJoin: Node-555f33ae-eb9b-b807-eca9-46e21493b0cf.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.419Z [INFO]  TestSessionCreate.server.serf.lan: serf: EventMemberJoin: Node-555f33ae-eb9b-b807-eca9-46e21493b0cf 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.419Z [INFO]  TestSessionCreate.server: Handled event for server in area: event=member-join server=Node-555f33ae-eb9b-b807-eca9-46e21493b0cf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.420Z [INFO]  TestSessionCreate.server: Adding LAN server: server="Node-555f33ae-eb9b-b807-eca9-46e21493b0cf (Addr: tcp/127.0.0.1:16174) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.420Z [INFO]  TestSessionCreate: Started DNS server: address=127.0.0.1:16169 network=udp
>     writer.go:29: 2020-02-23T02:46:35.420Z [INFO]  TestSessionCreate: Started DNS server: address=127.0.0.1:16169 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.420Z [INFO]  TestSessionCreate: Started HTTP server: address=127.0.0.1:16170 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.420Z [INFO]  TestSessionCreate: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.457Z [WARN]  TestSessionCreate.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestSessionCreate.server.raft: entering candidate state: node="Node at 127.0.0.1:16174 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.461Z [DEBUG] TestSessionCreate.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.461Z [DEBUG] TestSessionCreate.server.raft: vote granted: from=555f33ae-eb9b-b807-eca9-46e21493b0cf term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.461Z [INFO]  TestSessionCreate.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.461Z [INFO]  TestSessionCreate.server.raft: entering leader state: leader="Node at 127.0.0.1:16174 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.461Z [INFO]  TestSessionCreate.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.461Z [INFO]  TestSessionCreate.server: New leader elected: payload=Node-555f33ae-eb9b-b807-eca9-46e21493b0cf
>     writer.go:29: 2020-02-23T02:46:35.469Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.476Z [INFO]  TestSessionCreate.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.476Z [INFO]  TestSessionCreate.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.476Z [DEBUG] TestSessionCreate.server: Skipping self join check for node since the cluster is too small: node=Node-555f33ae-eb9b-b807-eca9-46e21493b0cf
>     writer.go:29: 2020-02-23T02:46:35.476Z [INFO]  TestSessionCreate.server: member joined, marking health alive: member=Node-555f33ae-eb9b-b807-eca9-46e21493b0cf
>     writer.go:29: 2020-02-23T02:46:35.507Z [INFO]  TestSessionCreate: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.507Z [INFO]  TestSessionCreate.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.507Z [DEBUG] TestSessionCreate.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.507Z [WARN]  TestSessionCreate.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.507Z [ERROR] TestSessionCreate.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:35.507Z [DEBUG] TestSessionCreate.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.510Z [WARN]  TestSessionCreate.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: consul server down
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: Stopping server: protocol=DNS address=127.0.0.1:16169 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: Stopping server: protocol=DNS address=127.0.0.1:16169 network=udp
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: Stopping server: protocol=HTTP address=127.0.0.1:16170 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.512Z [INFO]  TestSessionCreate: Endpoints down
> === CONT  TestHandleRemoteExecFailed
> --- PASS: TestServiceManager_PersistService_API (0.29s)
>     writer.go:29: 2020-02-23T02:46:35.418Z [WARN]  TestServiceManager_PersistService_API: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.418Z [DEBUG] TestServiceManager_PersistService_API.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.418Z [DEBUG] TestServiceManager_PersistService_API.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.456Z [INFO]  TestServiceManager_PersistService_API.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b56a837d-9ee2-4cb2-3166-29665ed9aafb Address:127.0.0.1:16180}]"
>     writer.go:29: 2020-02-23T02:46:35.456Z [INFO]  TestServiceManager_PersistService_API.server.raft: entering follower state: follower="Node at 127.0.0.1:16180 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestServiceManager_PersistService_API.server.serf.wan: serf: EventMemberJoin: Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestServiceManager_PersistService_API.server.serf.lan: serf: EventMemberJoin: Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestServiceManager_PersistService_API.server: Handled event for server in area: event=member-join server=Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestServiceManager_PersistService_API.server: Adding LAN server: server="Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb (Addr: tcp/127.0.0.1:16180) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestServiceManager_PersistService_API: Started DNS server: address=127.0.0.1:16175 network=udp
>     writer.go:29: 2020-02-23T02:46:35.457Z [INFO]  TestServiceManager_PersistService_API: Started DNS server: address=127.0.0.1:16175 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.458Z [INFO]  TestServiceManager_PersistService_API: Started HTTP server: address=127.0.0.1:16176 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.458Z [INFO]  TestServiceManager_PersistService_API: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.521Z [WARN]  TestServiceManager_PersistService_API.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.521Z [INFO]  TestServiceManager_PersistService_API.server.raft: entering candidate state: node="Node at 127.0.0.1:16180 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.526Z [DEBUG] TestServiceManager_PersistService_API.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.526Z [DEBUG] TestServiceManager_PersistService_API.server.raft: vote granted: from=b56a837d-9ee2-4cb2-3166-29665ed9aafb term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.526Z [INFO]  TestServiceManager_PersistService_API.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.526Z [INFO]  TestServiceManager_PersistService_API.server.raft: entering leader state: leader="Node at 127.0.0.1:16180 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.526Z [INFO]  TestServiceManager_PersistService_API.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.526Z [INFO]  TestServiceManager_PersistService_API.server: New leader elected: payload=Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb
>     writer.go:29: 2020-02-23T02:46:35.535Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.543Z [INFO]  TestServiceManager_PersistService_API.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.543Z [INFO]  TestServiceManager_PersistService_API.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.543Z [DEBUG] TestServiceManager_PersistService_API.server: Skipping self join check for node since the cluster is too small: node=Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb
>     writer.go:29: 2020-02-23T02:46:35.544Z [INFO]  TestServiceManager_PersistService_API.server: member joined, marking health alive: member=Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb
>     writer.go:29: 2020-02-23T02:46:35.650Z [DEBUG] TestServiceManager_PersistService_API: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:35.651Z [INFO]  TestServiceManager_PersistService_API: Synced node info
>     writer.go:29: 2020-02-23T02:46:35.651Z [DEBUG] TestServiceManager_PersistService_API: Node info in sync
>     writer.go:29: 2020-02-23T02:46:35.653Z [DEBUG] TestServiceManager_PersistService_API.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.654Z [INFO]  TestServiceManager_PersistService_API.client.serf.lan: serf: EventMemberJoin: Node-a724bdd7-6350-0fb5-679e-ebcd757d50a2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.654Z [INFO]  TestServiceManager_PersistService_API: Started DNS server: address=127.0.0.1:16205 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.654Z [INFO]  TestServiceManager_PersistService_API: Started DNS server: address=127.0.0.1:16205 network=udp
>     writer.go:29: 2020-02-23T02:46:35.655Z [INFO]  TestServiceManager_PersistService_API: Started HTTP server: address=127.0.0.1:16206 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.655Z [INFO]  TestServiceManager_PersistService_API: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.655Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.655Z [ERROR] TestServiceManager_PersistService_API.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.655Z [INFO]  TestServiceManager_PersistService_API: (LAN) joining: lan_addresses=[127.0.0.1:16178]
>     writer.go:29: 2020-02-23T02:46:35.655Z [DEBUG] TestServiceManager_PersistService_API.server.memberlist.lan: memberlist: Stream connection from=127.0.0.1:42366
>     writer.go:29: 2020-02-23T02:46:35.655Z [DEBUG] TestServiceManager_PersistService_API.client.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:16178
>     writer.go:29: 2020-02-23T02:46:35.656Z [INFO]  TestServiceManager_PersistService_API.server.serf.lan: serf: EventMemberJoin: Node-a724bdd7-6350-0fb5-679e-ebcd757d50a2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.656Z [INFO]  TestServiceManager_PersistService_API.server: member joined, marking health alive: member=Node-a724bdd7-6350-0fb5-679e-ebcd757d50a2
>     writer.go:29: 2020-02-23T02:46:35.656Z [INFO]  TestServiceManager_PersistService_API.client.serf.lan: serf: EventMemberJoin: Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.656Z [INFO]  TestServiceManager_PersistService_API.client: adding server: server="Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb (Addr: tcp/127.0.0.1:16180) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.657Z [INFO]  TestServiceManager_PersistService_API: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:35.657Z [DEBUG] TestServiceManager_PersistService_API: systemd notify failed: error="No socket"
>     writer.go:29: 2020-02-23T02:46:35.657Z [DEBUG] TestServiceManager_PersistService_API.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.658Z [DEBUG] TestServiceManager_PersistService_API.client.serf.lan: serf: messageUserEventType: consul:new-leader
>     writer.go:29: 2020-02-23T02:46:35.661Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:35.661Z [DEBUG] TestServiceManager_PersistService_API: added local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:35.663Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:35.667Z [DEBUG] TestServiceManager_PersistService_API: updated local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:35.668Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:35.671Z [DEBUG] TestServiceManager_PersistService_API: updated local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:35.671Z [INFO]  TestServiceManager_PersistService_API: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.671Z [INFO]  TestServiceManager_PersistService_API.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:35.671Z [WARN]  TestServiceManager_PersistService_API.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.672Z [INFO]  TestServiceManager_PersistService_API.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: consul client down
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=DNS address=127.0.0.1:16205 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=DNS address=127.0.0.1:16205 network=udp
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=HTTP address=127.0.0.1:16206 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: Endpoints down
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.673Z [INFO]  TestServiceManager_PersistService_API.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.673Z [DEBUG] TestServiceManager_PersistService_API.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.673Z [WARN]  TestServiceManager_PersistService_API.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=Health.ServiceNodes server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=ConfigEntry.ResolveServiceConfig server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=DiscoveryChain.Get server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=ConnectCA.Sign server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=Health.ServiceNodes server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=Intention.Match server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=Intention.Match server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=ConfigEntry.ResolveServiceConfig server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [ERROR] TestServiceManager_PersistService_API.client: RPC failed to server: method=DiscoveryChain.Get server=127.0.0.1:16180 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:35.674Z [DEBUG] TestServiceManager_PersistService_API.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.675Z [WARN]  TestServiceManager_PersistService_API.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: consul server down
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=DNS address=127.0.0.1:16175 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=DNS address=127.0.0.1:16175 network=udp
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=HTTP address=127.0.0.1:16176 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.677Z [INFO]  TestServiceManager_PersistService_API: Endpoints down
>     writer.go:29: 2020-02-23T02:46:35.684Z [DEBUG] TestServiceManager_PersistService_API.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.684Z [INFO]  TestServiceManager_PersistService_API.client.serf.lan: serf: EventMemberJoin: Node-e067853f-0587-920c-614e-c024946f3c56 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.685Z [INFO]  TestServiceManager_PersistService_API.client.serf.lan: serf: Attempting re-join to previously known node: Node-b56a837d-9ee2-4cb2-3166-29665ed9aafb: 127.0.0.1:16178
>     writer.go:29: 2020-02-23T02:46:35.685Z [DEBUG] TestServiceManager_PersistService_API.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16178: connect: connection refused
>     writer.go:29: 2020-02-23T02:46:35.685Z [INFO]  TestServiceManager_PersistService_API.client.serf.lan: serf: Attempting re-join to previously known node: Node-a724bdd7-6350-0fb5-679e-ebcd757d50a2: 127.0.0.1:16208
>     writer.go:29: 2020-02-23T02:46:35.685Z [DEBUG] TestServiceManager_PersistService_API.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16208: connect: connection refused
>     writer.go:29: 2020-02-23T02:46:35.685Z [WARN]  TestServiceManager_PersistService_API.client.serf.lan: serf: Failed to re-join any previously known node
>     writer.go:29: 2020-02-23T02:46:35.685Z [DEBUG] TestServiceManager_PersistService_API: restored service definition from file: service=web-sidecar-proxy file=/tmp/consul-test/TestServiceManager_PersistService_API-agent273258625/services/252ecd1f329632e74a20f35a05bc80fd
>     writer.go:29: 2020-02-23T02:46:35.685Z [DEBUG] TestServiceManager_PersistService_API: added local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:35.685Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.685Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.685Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.685Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [ERROR] TestServiceManager_PersistService_API: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.686Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [ERROR] TestServiceManager_PersistService_API: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.687Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.687Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:35.687Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.687Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.688Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.689Z [ERROR] TestServiceManager_PersistService_API.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.689Z [INFO]  TestServiceManager_PersistService_API: Started DNS server: address=127.0.0.1:16199 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.689Z [INFO]  TestServiceManager_PersistService_API: Started DNS server: address=127.0.0.1:16199 network=udp
>     writer.go:29: 2020-02-23T02:46:35.689Z [INFO]  TestServiceManager_PersistService_API: Started HTTP server: address=127.0.0.1:16200 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.689Z [INFO]  TestServiceManager_PersistService_API: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.689Z [WARN]  TestServiceManager_PersistService_API.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.689Z [ERROR] TestServiceManager_PersistService_API.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.690Z [DEBUG] TestServiceManager_PersistService_API: removed service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:35.690Z [INFO]  TestServiceManager_PersistService_API: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.690Z [INFO]  TestServiceManager_PersistService_API.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:35.690Z [WARN]  TestServiceManager_PersistService_API.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.690Z [INFO]  TestServiceManager_PersistService_API.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: consul client down
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=DNS address=127.0.0.1:16199 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=DNS address=127.0.0.1:16199 network=udp
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: Stopping server: protocol=HTTP address=127.0.0.1:16200 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.697Z [INFO]  TestServiceManager_PersistService_API: Endpoints down
> === CONT  TestHandleRemoteExec
> --- PASS: TestAgent_ServiceHTTPChecksNotification (0.42s)
>     writer.go:29: 2020-02-23T02:46:35.451Z [WARN]  TestAgent_ServiceHTTPChecksNotification: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.452Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.452Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.465Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c6e8e388-df25-bf46-7934-0d29809458fa Address:127.0.0.1:16192}]"
>     writer.go:29: 2020-02-23T02:46:35.465Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.raft: entering follower state: follower="Node at 127.0.0.1:16192 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.465Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.serf.wan: serf: EventMemberJoin: Node-c6e8e388-df25-bf46-7934-0d29809458fa.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.466Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.serf.lan: serf: EventMemberJoin: Node-c6e8e388-df25-bf46-7934-0d29809458fa 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.466Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server: Adding LAN server: server="Node-c6e8e388-df25-bf46-7934-0d29809458fa (Addr: tcp/127.0.0.1:16192) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.466Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server: Handled event for server in area: event=member-join server=Node-c6e8e388-df25-bf46-7934-0d29809458fa.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.466Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Started DNS server: address=127.0.0.1:16187 network=udp
>     writer.go:29: 2020-02-23T02:46:35.466Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Started DNS server: address=127.0.0.1:16187 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.467Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Started HTTP server: address=127.0.0.1:16188 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.467Z [INFO]  TestAgent_ServiceHTTPChecksNotification: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.523Z [WARN]  TestAgent_ServiceHTTPChecksNotification.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.523Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.raft: entering candidate state: node="Node at 127.0.0.1:16192 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.527Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.528Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.server.raft: vote granted: from=c6e8e388-df25-bf46-7934-0d29809458fa term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.528Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.528Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.raft: entering leader state: leader="Node at 127.0.0.1:16192 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.528Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.528Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server: New leader elected: payload=Node-c6e8e388-df25-bf46-7934-0d29809458fa
>     writer.go:29: 2020-02-23T02:46:35.537Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.546Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.546Z [INFO]  TestAgent_ServiceHTTPChecksNotification.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.546Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.server: Skipping self join check for node since the cluster is too small: node=Node-c6e8e388-df25-bf46-7934-0d29809458fa
>     writer.go:29: 2020-02-23T02:46:35.546Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server: member joined, marking health alive: member=Node-c6e8e388-df25-bf46-7934-0d29809458fa
>     writer.go:29: 2020-02-23T02:46:35.806Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.tlsutil: OutgoingTLSConfigForCheck: version=1
>     writer.go:29: 2020-02-23T02:46:35.806Z [DEBUG] TestAgent_ServiceHTTPChecksNotification: removed check: check=grpc-check
>     writer.go:29: 2020-02-23T02:46:35.807Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.807Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.807Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.807Z [WARN]  TestAgent_ServiceHTTPChecksNotification.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.807Z [ERROR] TestAgent_ServiceHTTPChecksNotification.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:35.807Z [DEBUG] TestAgent_ServiceHTTPChecksNotification.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.838Z [WARN]  TestAgent_ServiceHTTPChecksNotification.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: consul server down
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Stopping server: protocol=DNS address=127.0.0.1:16187 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Stopping server: protocol=DNS address=127.0.0.1:16187 network=udp
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Stopping server: protocol=HTTP address=127.0.0.1:16188 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.860Z [INFO]  TestAgent_ServiceHTTPChecksNotification: Endpoints down
> === CONT  TestRemoteExecWrites_ACLDeny
> --- PASS: TestHandleRemoteExecFailed (0.42s)
>     writer.go:29: 2020-02-23T02:46:35.520Z [WARN]  TestHandleRemoteExecFailed: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.521Z [DEBUG] TestHandleRemoteExecFailed.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.521Z [DEBUG] TestHandleRemoteExecFailed.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.533Z [INFO]  TestHandleRemoteExecFailed.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0b38af15-e788-5467-f3bb-f10f507820d3 Address:127.0.0.1:16198}]"
>     writer.go:29: 2020-02-23T02:46:35.533Z [INFO]  TestHandleRemoteExecFailed.server.serf.wan: serf: EventMemberJoin: Node-0b38af15-e788-5467-f3bb-f10f507820d3.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.534Z [INFO]  TestHandleRemoteExecFailed.server.serf.lan: serf: EventMemberJoin: Node-0b38af15-e788-5467-f3bb-f10f507820d3 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.534Z [INFO]  TestHandleRemoteExecFailed: Started DNS server: address=127.0.0.1:16193 network=udp
>     writer.go:29: 2020-02-23T02:46:35.534Z [INFO]  TestHandleRemoteExecFailed.server.raft: entering follower state: follower="Node at 127.0.0.1:16198 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.534Z [INFO]  TestHandleRemoteExecFailed.server: Adding LAN server: server="Node-0b38af15-e788-5467-f3bb-f10f507820d3 (Addr: tcp/127.0.0.1:16198) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.534Z [INFO]  TestHandleRemoteExecFailed.server: Handled event for server in area: event=member-join server=Node-0b38af15-e788-5467-f3bb-f10f507820d3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.534Z [INFO]  TestHandleRemoteExecFailed: Started DNS server: address=127.0.0.1:16193 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.535Z [INFO]  TestHandleRemoteExecFailed: Started HTTP server: address=127.0.0.1:16194 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.535Z [INFO]  TestHandleRemoteExecFailed: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.581Z [WARN]  TestHandleRemoteExecFailed.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.581Z [INFO]  TestHandleRemoteExecFailed.server.raft: entering candidate state: node="Node at 127.0.0.1:16198 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.584Z [DEBUG] TestHandleRemoteExecFailed.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.584Z [DEBUG] TestHandleRemoteExecFailed.server.raft: vote granted: from=0b38af15-e788-5467-f3bb-f10f507820d3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.584Z [INFO]  TestHandleRemoteExecFailed.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.584Z [INFO]  TestHandleRemoteExecFailed.server.raft: entering leader state: leader="Node at 127.0.0.1:16198 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.585Z [INFO]  TestHandleRemoteExecFailed.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.585Z [INFO]  TestHandleRemoteExecFailed.server: New leader elected: payload=Node-0b38af15-e788-5467-f3bb-f10f507820d3
>     writer.go:29: 2020-02-23T02:46:35.592Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.600Z [INFO]  TestHandleRemoteExecFailed.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.600Z [INFO]  TestHandleRemoteExecFailed.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.600Z [DEBUG] TestHandleRemoteExecFailed.server: Skipping self join check for node since the cluster is too small: node=Node-0b38af15-e788-5467-f3bb-f10f507820d3
>     writer.go:29: 2020-02-23T02:46:35.600Z [INFO]  TestHandleRemoteExecFailed.server: member joined, marking health alive: member=Node-0b38af15-e788-5467-f3bb-f10f507820d3
>     writer.go:29: 2020-02-23T02:46:35.621Z [DEBUG] TestHandleRemoteExecFailed: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:35.623Z [INFO]  TestHandleRemoteExecFailed: Synced node info
>     writer.go:29: 2020-02-23T02:46:35.876Z [DEBUG] TestHandleRemoteExecFailed: received remote exec event: id=a6d8cf55-23dd-51aa-621c-2fa9b4902370
>     writer.go:29: 2020-02-23T02:46:35.901Z [INFO]  TestHandleRemoteExecFailed: remote exec script: script="echo failing;exit 2"
>     writer.go:29: 2020-02-23T02:46:35.924Z [INFO]  TestHandleRemoteExecFailed: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.924Z [INFO]  TestHandleRemoteExecFailed.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.924Z [DEBUG] TestHandleRemoteExecFailed.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.924Z [WARN]  TestHandleRemoteExecFailed.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.924Z [DEBUG] TestHandleRemoteExecFailed.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.927Z [WARN]  TestHandleRemoteExecFailed.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: consul server down
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: shutdown complete
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: Stopping server: protocol=DNS address=127.0.0.1:16193 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: Stopping server: protocol=DNS address=127.0.0.1:16193 network=udp
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: Stopping server: protocol=HTTP address=127.0.0.1:16194 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:35.929Z [INFO]  TestHandleRemoteExecFailed: Endpoints down
> === CONT  TestRemoteExecWrites_ACLAgentToken
> --- PASS: TestHandleRemoteExec (0.30s)
>     writer.go:29: 2020-02-23T02:46:35.736Z [WARN]  TestHandleRemoteExec: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.736Z [DEBUG] TestHandleRemoteExec.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.736Z [DEBUG] TestHandleRemoteExec.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.756Z [INFO]  TestHandleRemoteExec.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0404ca01-12dc-7a07-9b6b-860f6b177502 Address:127.0.0.1:16222}]"
>     writer.go:29: 2020-02-23T02:46:35.756Z [INFO]  TestHandleRemoteExec.server.raft: entering follower state: follower="Node at 127.0.0.1:16222 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.756Z [INFO]  TestHandleRemoteExec.server.serf.wan: serf: EventMemberJoin: Node-0404ca01-12dc-7a07-9b6b-860f6b177502.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec.server.serf.lan: serf: EventMemberJoin: Node-0404ca01-12dc-7a07-9b6b-860f6b177502 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec: Started DNS server: address=127.0.0.1:16217 network=udp
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec.server: Adding LAN server: server="Node-0404ca01-12dc-7a07-9b6b-860f6b177502 (Addr: tcp/127.0.0.1:16222) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec.server: Handled event for server in area: event=member-join server=Node-0404ca01-12dc-7a07-9b6b-860f6b177502.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec: Started DNS server: address=127.0.0.1:16217 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec: Started HTTP server: address=127.0.0.1:16218 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.757Z [INFO]  TestHandleRemoteExec: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.808Z [WARN]  TestHandleRemoteExec.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.808Z [INFO]  TestHandleRemoteExec.server.raft: entering candidate state: node="Node at 127.0.0.1:16222 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.888Z [DEBUG] TestHandleRemoteExec.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.888Z [DEBUG] TestHandleRemoteExec.server.raft: vote granted: from=0404ca01-12dc-7a07-9b6b-860f6b177502 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.888Z [INFO]  TestHandleRemoteExec.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.888Z [INFO]  TestHandleRemoteExec.server.raft: entering leader state: leader="Node at 127.0.0.1:16222 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.888Z [INFO]  TestHandleRemoteExec.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.888Z [INFO]  TestHandleRemoteExec.server: New leader elected: payload=Node-0404ca01-12dc-7a07-9b6b-860f6b177502
>     writer.go:29: 2020-02-23T02:46:35.927Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.941Z [INFO]  TestHandleRemoteExec.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.941Z [INFO]  TestHandleRemoteExec.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.941Z [DEBUG] TestHandleRemoteExec.server: Skipping self join check for node since the cluster is too small: node=Node-0404ca01-12dc-7a07-9b6b-860f6b177502
>     writer.go:29: 2020-02-23T02:46:35.941Z [INFO]  TestHandleRemoteExec.server: member joined, marking health alive: member=Node-0404ca01-12dc-7a07-9b6b-860f6b177502
>     writer.go:29: 2020-02-23T02:46:35.960Z [DEBUG] TestHandleRemoteExec: received remote exec event: id=f4c5255c-8961-5d3d-a2f0-59bf446eb5d2
>     writer.go:29: 2020-02-23T02:46:35.962Z [INFO]  TestHandleRemoteExec: remote exec script: script=uptime
>     writer.go:29: 2020-02-23T02:46:35.996Z [INFO]  TestHandleRemoteExec: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:35.997Z [INFO]  TestHandleRemoteExec.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:35.997Z [DEBUG] TestHandleRemoteExec.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.997Z [WARN]  TestHandleRemoteExec.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:35.997Z [ERROR] TestHandleRemoteExec.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:35.997Z [DEBUG] TestHandleRemoteExec.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.999Z [WARN]  TestHandleRemoteExec.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: consul server down
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: Stopping server: protocol=DNS address=127.0.0.1:16217 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: Stopping server: protocol=DNS address=127.0.0.1:16217 network=udp
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: Stopping server: protocol=HTTP address=127.0.0.1:16218 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestHandleRemoteExec: Endpoints down
> === CONT  TestRemoteExecWrites_ACLToken
> --- PASS: TestRemoteExecWrites_ACLAgentToken (0.16s)
>     writer.go:29: 2020-02-23T02:46:35.939Z [WARN]  TestRemoteExecWrites_ACLAgentToken: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:35.939Z [WARN]  TestRemoteExecWrites_ACLAgentToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.939Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.940Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.950Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fcc8c773-efc7-62b7-6a75-c8fe05edb5d0 Address:127.0.0.1:16240}]"
>     writer.go:29: 2020-02-23T02:46:35.951Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.serf.wan: serf: EventMemberJoin: Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.951Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.serf.lan: serf: EventMemberJoin: Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.951Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Started DNS server: address=127.0.0.1:16235 network=udp
>     writer.go:29: 2020-02-23T02:46:35.951Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16240 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.952Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Adding LAN server: server="Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0 (Addr: tcp/127.0.0.1:16240) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.952Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Handled event for server in area: event=member-join server=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.952Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Started DNS server: address=127.0.0.1:16235 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.952Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Started HTTP server: address=127.0.0.1:16236 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.952Z [INFO]  TestRemoteExecWrites_ACLAgentToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.992Z [WARN]  TestRemoteExecWrites_ACLAgentToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.992Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16240 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.998Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.998Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.server.raft: vote granted: from=fcc8c773-efc7-62b7-6a75-c8fe05edb5d0 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.998Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.998Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16240 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.998Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.998Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: New leader elected: payload=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.002Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.011Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.011Z [WARN]  TestRemoteExecWrites_ACLAgentToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.011Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.011Z [WARN]  TestRemoteExecWrites_ACLAgentToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.021Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.021Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.024Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.024Z [INFO]  TestRemoteExecWrites_ACLAgentToken.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.024Z [INFO]  TestRemoteExecWrites_ACLAgentToken.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.024Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.serf.lan: serf: EventMemberUpdate: Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.024Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.serf.wan: serf: EventMemberUpdate: Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0.dc1
>     writer.go:29: 2020-02-23T02:46:36.025Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.025Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:46:36.025Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.serf.lan: serf: EventMemberUpdate: Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.025Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.serf.wan: serf: EventMemberUpdate: Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0.dc1
>     writer.go:29: 2020-02-23T02:46:36.025Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Handled event for server in area: event=member-update server=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.025Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: Handled event for server in area: event=member-update server=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.031Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.038Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.038Z [INFO]  TestRemoteExecWrites_ACLAgentToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.038Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.server: Skipping self join check for node since the cluster is too small: node=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.038Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: member joined, marking health alive: member=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.041Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.server: Skipping self join check for node since the cluster is too small: node=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.041Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.server: Skipping self join check for node since the cluster is too small: node=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.070Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.acl: dropping node from result due to ACLs: node=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.070Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.acl: dropping node from result due to ACLs: node=Node-fcc8c773-efc7-62b7-6a75-c8fe05edb5d0
>     writer.go:29: 2020-02-23T02:46:36.082Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.082Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.082Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.082Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.082Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.082Z [WARN]  TestRemoteExecWrites_ACLAgentToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.082Z [ERROR] TestRemoteExecWrites_ACLAgentToken.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:36.082Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.082Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.082Z [DEBUG] TestRemoteExecWrites_ACLAgentToken.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.084Z [WARN]  TestRemoteExecWrites_ACLAgentToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: consul server down
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Stopping server: protocol=DNS address=127.0.0.1:16235 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Stopping server: protocol=DNS address=127.0.0.1:16235 network=udp
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Stopping server: protocol=HTTP address=127.0.0.1:16236 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.086Z [INFO]  TestRemoteExecWrites_ACLAgentToken: Endpoints down
> === CONT  TestRemoteExecWrites
> --- PASS: TestRemoteExecWrites_ACLToken (0.29s)
>     writer.go:29: 2020-02-23T02:46:36.011Z [WARN]  TestRemoteExecWrites_ACLToken: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:36.011Z [WARN]  TestRemoteExecWrites_ACLToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.011Z [DEBUG] TestRemoteExecWrites_ACLToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.012Z [DEBUG] TestRemoteExecWrites_ACLToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.031Z [INFO]  TestRemoteExecWrites_ACLToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:add083d0-fd66-716b-a977-75ee06d737be Address:127.0.0.1:16234}]"
>     writer.go:29: 2020-02-23T02:46:36.031Z [INFO]  TestRemoteExecWrites_ACLToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16234 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.032Z [INFO]  TestRemoteExecWrites_ACLToken.server.serf.wan: serf: EventMemberJoin: Node-add083d0-fd66-716b-a977-75ee06d737be.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.033Z [INFO]  TestRemoteExecWrites_ACLToken.server.serf.lan: serf: EventMemberJoin: Node-add083d0-fd66-716b-a977-75ee06d737be 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.033Z [INFO]  TestRemoteExecWrites_ACLToken.server: Handled event for server in area: event=member-join server=Node-add083d0-fd66-716b-a977-75ee06d737be.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.033Z [INFO]  TestRemoteExecWrites_ACLToken.server: Adding LAN server: server="Node-add083d0-fd66-716b-a977-75ee06d737be (Addr: tcp/127.0.0.1:16234) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.033Z [INFO]  TestRemoteExecWrites_ACLToken: Started DNS server: address=127.0.0.1:16229 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.033Z [INFO]  TestRemoteExecWrites_ACLToken: Started DNS server: address=127.0.0.1:16229 network=udp
>     writer.go:29: 2020-02-23T02:46:36.034Z [INFO]  TestRemoteExecWrites_ACLToken: Started HTTP server: address=127.0.0.1:16230 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.034Z [INFO]  TestRemoteExecWrites_ACLToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.099Z [WARN]  TestRemoteExecWrites_ACLToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.099Z [INFO]  TestRemoteExecWrites_ACLToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16234 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.102Z [DEBUG] TestRemoteExecWrites_ACLToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.102Z [DEBUG] TestRemoteExecWrites_ACLToken.server.raft: vote granted: from=add083d0-fd66-716b-a977-75ee06d737be term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.102Z [INFO]  TestRemoteExecWrites_ACLToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.102Z [INFO]  TestRemoteExecWrites_ACLToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16234 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.102Z [INFO]  TestRemoteExecWrites_ACLToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.103Z [INFO]  TestRemoteExecWrites_ACLToken.server: New leader elected: payload=Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.105Z [INFO]  TestRemoteExecWrites_ACLToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.106Z [INFO]  TestRemoteExecWrites_ACLToken.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.106Z [WARN]  TestRemoteExecWrites_ACLToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.109Z [INFO]  TestRemoteExecWrites_ACLToken.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.113Z [INFO]  TestRemoteExecWrites_ACLToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.113Z [INFO]  TestRemoteExecWrites_ACLToken.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.113Z [INFO]  TestRemoteExecWrites_ACLToken.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.113Z [INFO]  TestRemoteExecWrites_ACLToken.server.serf.lan: serf: EventMemberUpdate: Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.113Z [INFO]  TestRemoteExecWrites_ACLToken.server.serf.wan: serf: EventMemberUpdate: Node-add083d0-fd66-716b-a977-75ee06d737be.dc1
>     writer.go:29: 2020-02-23T02:46:36.113Z [INFO]  TestRemoteExecWrites_ACLToken.server: Handled event for server in area: event=member-update server=Node-add083d0-fd66-716b-a977-75ee06d737be.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.117Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.124Z [INFO]  TestRemoteExecWrites_ACLToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.124Z [INFO]  TestRemoteExecWrites_ACLToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.124Z [DEBUG] TestRemoteExecWrites_ACLToken.server: Skipping self join check for node since the cluster is too small: node=Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.124Z [INFO]  TestRemoteExecWrites_ACLToken.server: member joined, marking health alive: member=Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.127Z [DEBUG] TestRemoteExecWrites_ACLToken.server: Skipping self join check for node since the cluster is too small: node=Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.251Z [DEBUG] TestRemoteExecWrites_ACLToken.acl: dropping node from result due to ACLs: node=Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.251Z [DEBUG] TestRemoteExecWrites_ACLToken.acl: dropping node from result due to ACLs: node=Node-add083d0-fd66-716b-a977-75ee06d737be
>     writer.go:29: 2020-02-23T02:46:36.292Z [INFO]  TestRemoteExecWrites_ACLToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.292Z [INFO]  TestRemoteExecWrites_ACLToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.292Z [DEBUG] TestRemoteExecWrites_ACLToken.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.292Z [DEBUG] TestRemoteExecWrites_ACLToken.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.292Z [DEBUG] TestRemoteExecWrites_ACLToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.292Z [WARN]  TestRemoteExecWrites_ACLToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.292Z [DEBUG] TestRemoteExecWrites_ACLToken.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.292Z [ERROR] TestRemoteExecWrites_ACLToken.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:36.292Z [DEBUG] TestRemoteExecWrites_ACLToken.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.292Z [DEBUG] TestRemoteExecWrites_ACLToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.294Z [WARN]  TestRemoteExecWrites_ACLToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.295Z [INFO]  TestRemoteExecWrites_ACLToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.295Z [INFO]  TestRemoteExecWrites_ACLToken: consul server down
>     writer.go:29: 2020-02-23T02:46:36.295Z [INFO]  TestRemoteExecWrites_ACLToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.295Z [INFO]  TestRemoteExecWrites_ACLToken: Stopping server: protocol=DNS address=127.0.0.1:16229 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.296Z [INFO]  TestRemoteExecWrites_ACLToken: Stopping server: protocol=DNS address=127.0.0.1:16229 network=udp
>     writer.go:29: 2020-02-23T02:46:36.296Z [INFO]  TestRemoteExecWrites_ACLToken: Stopping server: protocol=HTTP address=127.0.0.1:16230 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.296Z [INFO]  TestRemoteExecWrites_ACLToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.296Z [INFO]  TestRemoteExecWrites_ACLToken: Endpoints down
> === CONT  TestRemoteExecGetSpec_ACLDeny
> --- PASS: TestRemoteExecWrites_ACLDeny (0.48s)
>     writer.go:29: 2020-02-23T02:46:35.868Z [WARN]  TestRemoteExecWrites_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:35.868Z [WARN]  TestRemoteExecWrites_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.868Z [DEBUG] TestRemoteExecWrites_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.869Z [DEBUG] TestRemoteExecWrites_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.925Z [INFO]  TestRemoteExecWrites_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a5d95d56-091c-3d19-8af4-cdb3059fd152 Address:127.0.0.1:16216}]"
>     writer.go:29: 2020-02-23T02:46:35.926Z [INFO]  TestRemoteExecWrites_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-a5d95d56-091c-3d19-8af4-cdb3059fd152.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.926Z [INFO]  TestRemoteExecWrites_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-a5d95d56-091c-3d19-8af4-cdb3059fd152 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.926Z [INFO]  TestRemoteExecWrites_ACLDeny: Started DNS server: address=127.0.0.1:16211 network=udp
>     writer.go:29: 2020-02-23T02:46:35.926Z [INFO]  TestRemoteExecWrites_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16216 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.927Z [INFO]  TestRemoteExecWrites_ACLDeny.server: Adding LAN server: server="Node-a5d95d56-091c-3d19-8af4-cdb3059fd152 (Addr: tcp/127.0.0.1:16216) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.927Z [INFO]  TestRemoteExecWrites_ACLDeny.server: Handled event for server in area: event=member-join server=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.927Z [INFO]  TestRemoteExecWrites_ACLDeny: Started DNS server: address=127.0.0.1:16211 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.927Z [INFO]  TestRemoteExecWrites_ACLDeny: Started HTTP server: address=127.0.0.1:16212 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.927Z [INFO]  TestRemoteExecWrites_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.989Z [WARN]  TestRemoteExecWrites_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.989Z [INFO]  TestRemoteExecWrites_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16216 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.995Z [DEBUG] TestRemoteExecWrites_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.995Z [DEBUG] TestRemoteExecWrites_ACLDeny.server.raft: vote granted: from=a5d95d56-091c-3d19-8af4-cdb3059fd152 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.995Z [INFO]  TestRemoteExecWrites_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.995Z [INFO]  TestRemoteExecWrites_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16216 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.995Z [INFO]  TestRemoteExecWrites_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.995Z [INFO]  TestRemoteExecWrites_ACLDeny.server: New leader elected: payload=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:35.999Z [INFO]  TestRemoteExecWrites_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.001Z [INFO]  TestRemoteExecWrites_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.001Z [WARN]  TestRemoteExecWrites_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.007Z [INFO]  TestRemoteExecWrites_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.013Z [INFO]  TestRemoteExecWrites_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.013Z [INFO]  TestRemoteExecWrites_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.013Z [INFO]  TestRemoteExecWrites_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.013Z [INFO]  TestRemoteExecWrites_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:36.013Z [INFO]  TestRemoteExecWrites_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-a5d95d56-091c-3d19-8af4-cdb3059fd152.dc1
>     writer.go:29: 2020-02-23T02:46:36.014Z [INFO]  TestRemoteExecWrites_ACLDeny.server: Handled event for server in area: event=member-update server=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.022Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.030Z [INFO]  TestRemoteExecWrites_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.030Z [INFO]  TestRemoteExecWrites_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.030Z [DEBUG] TestRemoteExecWrites_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:36.030Z [INFO]  TestRemoteExecWrites_ACLDeny.server: member joined, marking health alive: member=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:36.034Z [DEBUG] TestRemoteExecWrites_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:36.333Z [DEBUG] TestRemoteExecWrites_ACLDeny.acl: dropping node from result due to ACLs: node=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:36.333Z [DEBUG] TestRemoteExecWrites_ACLDeny.acl: dropping node from result due to ACLs: node=Node-a5d95d56-091c-3d19-8af4-cdb3059fd152
>     writer.go:29: 2020-02-23T02:46:36.337Z [ERROR] TestRemoteExecWrites_ACLDeny: failed to ack remote exec job: error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:36.337Z [ERROR] TestRemoteExecWrites_ACLDeny: failed to write output for remote exec job: error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:36.337Z [ERROR] TestRemoteExecWrites_ACLDeny: failed to write output for remote exec job: error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:36.340Z [INFO]  TestRemoteExecWrites_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.340Z [INFO]  TestRemoteExecWrites_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.340Z [DEBUG] TestRemoteExecWrites_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.340Z [DEBUG] TestRemoteExecWrites_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.340Z [DEBUG] TestRemoteExecWrites_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.340Z [WARN]  TestRemoteExecWrites_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.340Z [ERROR] TestRemoteExecWrites_ACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:36.340Z [DEBUG] TestRemoteExecWrites_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.340Z [DEBUG] TestRemoteExecWrites_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.340Z [DEBUG] TestRemoteExecWrites_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.342Z [WARN]  TestRemoteExecWrites_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.343Z [INFO]  TestRemoteExecWrites_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.343Z [INFO]  TestRemoteExecWrites_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:36.343Z [INFO]  TestRemoteExecWrites_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.343Z [INFO]  TestRemoteExecWrites_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16211 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.343Z [INFO]  TestRemoteExecWrites_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16211 network=udp
>     writer.go:29: 2020-02-23T02:46:36.343Z [INFO]  TestRemoteExecWrites_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16212 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.344Z [INFO]  TestRemoteExecWrites_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.344Z [INFO]  TestRemoteExecWrites_ACLDeny: Endpoints down
> === CONT  TestRemoteExecGetSpec_ACLAgentToken
> --- PASS: TestRemoteExecWrites (0.45s)
>     writer.go:29: 2020-02-23T02:46:36.093Z [WARN]  TestRemoteExecWrites: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.093Z [DEBUG] TestRemoteExecWrites.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.094Z [DEBUG] TestRemoteExecWrites.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.103Z [INFO]  TestRemoteExecWrites.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f4dc6b6f-8762-6049-4e0e-4d47da13c150 Address:127.0.0.1:16252}]"
>     writer.go:29: 2020-02-23T02:46:36.103Z [INFO]  TestRemoteExecWrites.server.raft: entering follower state: follower="Node at 127.0.0.1:16252 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.104Z [INFO]  TestRemoteExecWrites.server.serf.wan: serf: EventMemberJoin: Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.105Z [INFO]  TestRemoteExecWrites.server.serf.lan: serf: EventMemberJoin: Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.105Z [INFO]  TestRemoteExecWrites.server: Handled event for server in area: event=member-join server=Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.105Z [INFO]  TestRemoteExecWrites.server: Adding LAN server: server="Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150 (Addr: tcp/127.0.0.1:16252) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.105Z [INFO]  TestRemoteExecWrites: Started DNS server: address=127.0.0.1:16247 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.105Z [INFO]  TestRemoteExecWrites: Started DNS server: address=127.0.0.1:16247 network=udp
>     writer.go:29: 2020-02-23T02:46:36.106Z [INFO]  TestRemoteExecWrites: Started HTTP server: address=127.0.0.1:16248 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.106Z [INFO]  TestRemoteExecWrites: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.160Z [WARN]  TestRemoteExecWrites.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.160Z [INFO]  TestRemoteExecWrites.server.raft: entering candidate state: node="Node at 127.0.0.1:16252 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.164Z [DEBUG] TestRemoteExecWrites.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.164Z [DEBUG] TestRemoteExecWrites.server.raft: vote granted: from=f4dc6b6f-8762-6049-4e0e-4d47da13c150 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.164Z [INFO]  TestRemoteExecWrites.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.165Z [INFO]  TestRemoteExecWrites.server.raft: entering leader state: leader="Node at 127.0.0.1:16252 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.165Z [INFO]  TestRemoteExecWrites.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.165Z [INFO]  TestRemoteExecWrites.server: New leader elected: payload=Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150
>     writer.go:29: 2020-02-23T02:46:36.172Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.180Z [INFO]  TestRemoteExecWrites.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.180Z [INFO]  TestRemoteExecWrites.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.180Z [DEBUG] TestRemoteExecWrites.server: Skipping self join check for node since the cluster is too small: node=Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150
>     writer.go:29: 2020-02-23T02:46:36.180Z [INFO]  TestRemoteExecWrites.server: member joined, marking health alive: member=Node-f4dc6b6f-8762-6049-4e0e-4d47da13c150
>     writer.go:29: 2020-02-23T02:46:36.529Z [INFO]  TestRemoteExecWrites: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.529Z [INFO]  TestRemoteExecWrites.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.529Z [DEBUG] TestRemoteExecWrites.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.529Z [WARN]  TestRemoteExecWrites.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.529Z [ERROR] TestRemoteExecWrites.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:36.529Z [DEBUG] TestRemoteExecWrites.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.530Z [WARN]  TestRemoteExecWrites.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: consul server down
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: Stopping server: protocol=DNS address=127.0.0.1:16247 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: Stopping server: protocol=DNS address=127.0.0.1:16247 network=udp
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: Stopping server: protocol=HTTP address=127.0.0.1:16248 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.532Z [INFO]  TestRemoteExecWrites: Endpoints down
> === CONT  TestRemoteExecGetSpec_ACLToken
> --- PASS: TestRemoteExecGetSpec_ACLDeny (0.43s)
>     writer.go:29: 2020-02-23T02:46:36.303Z [WARN]  TestRemoteExecGetSpec_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:36.303Z [WARN]  TestRemoteExecGetSpec_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.303Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.303Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.316Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:50b157af-1735-e655-aef8-bbe6fd90ef54 Address:127.0.0.1:16246}]"
>     writer.go:29: 2020-02-23T02:46:36.316Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16246 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.317Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-50b157af-1735-e655-aef8-bbe6fd90ef54.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.317Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-50b157af-1735-e655-aef8-bbe6fd90ef54 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.317Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Started DNS server: address=127.0.0.1:16241 network=udp
>     writer.go:29: 2020-02-23T02:46:36.317Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Adding LAN server: server="Node-50b157af-1735-e655-aef8-bbe6fd90ef54 (Addr: tcp/127.0.0.1:16246) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.317Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Handled event for server in area: event=member-join server=Node-50b157af-1735-e655-aef8-bbe6fd90ef54.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.317Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Started DNS server: address=127.0.0.1:16241 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.318Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Started HTTP server: address=127.0.0.1:16242 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.318Z [INFO]  TestRemoteExecGetSpec_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.362Z [WARN]  TestRemoteExecGetSpec_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.362Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16246 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.365Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.365Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.server.raft: vote granted: from=50b157af-1735-e655-aef8-bbe6fd90ef54 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.365Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.365Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16246 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.366Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.367Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: New leader elected: payload=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.367Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.369Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.369Z [WARN]  TestRemoteExecGetSpec_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.371Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.372Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.372Z [WARN]  TestRemoteExecGetSpec_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.376Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.376Z [INFO]  TestRemoteExecGetSpec_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.376Z [INFO]  TestRemoteExecGetSpec_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.377Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.377Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-50b157af-1735-e655-aef8-bbe6fd90ef54.dc1
>     writer.go:29: 2020-02-23T02:46:36.381Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.381Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.381Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:46:36.382Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.382Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-50b157af-1735-e655-aef8-bbe6fd90ef54.dc1
>     writer.go:29: 2020-02-23T02:46:36.382Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Handled event for server in area: event=member-update server=Node-50b157af-1735-e655-aef8-bbe6fd90ef54.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.382Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: Handled event for server in area: event=member-update server=Node-50b157af-1735-e655-aef8-bbe6fd90ef54.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.393Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.393Z [INFO]  TestRemoteExecGetSpec_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.394Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.394Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: member joined, marking health alive: member=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.395Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.395Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.474Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.acl: dropping check from result due to ACLs: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:36.474Z [WARN]  TestRemoteExecGetSpec_ACLDeny: Node info update blocked by ACLs: node=50b157af-1735-e655-aef8-bbe6fd90ef54 accessorID=
>     writer.go:29: 2020-02-23T02:46:36.474Z [DEBUG] TestRemoteExecGetSpec_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:36.714Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.acl: dropping node from result due to ACLs: node=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.715Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.acl: dropping node from result due to ACLs: node=Node-50b157af-1735-e655-aef8-bbe6fd90ef54
>     writer.go:29: 2020-02-23T02:46:36.720Z [ERROR] TestRemoteExecGetSpec_ACLDeny: failed to get remote exec job: error="Permission denied"
>     writer.go:29: 2020-02-23T02:46:36.722Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.722Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.722Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.722Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.722Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.722Z [WARN]  TestRemoteExecGetSpec_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.722Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.722Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.722Z [DEBUG] TestRemoteExecGetSpec_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.726Z [WARN]  TestRemoteExecGetSpec_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16241 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16241 network=udp
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16242 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.729Z [INFO]  TestRemoteExecGetSpec_ACLDeny: Endpoints down
> === CONT  TestRemoteExecGetSpec
> --- PASS: TestRemoteExecGetSpec_ACLAgentToken (0.42s)
>     writer.go:29: 2020-02-23T02:46:36.350Z [WARN]  TestRemoteExecGetSpec_ACLAgentToken: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:36.350Z [WARN]  TestRemoteExecGetSpec_ACLAgentToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.350Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.350Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.360Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:da6d1762-c884-62fe-ca04-b57747396848 Address:127.0.0.1:16258}]"
>     writer.go:29: 2020-02-23T02:46:36.361Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16258 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.371Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.serf.wan: serf: EventMemberJoin: Node-da6d1762-c884-62fe-ca04-b57747396848.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.372Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.serf.lan: serf: EventMemberJoin: Node-da6d1762-c884-62fe-ca04-b57747396848 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.373Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Started DNS server: address=127.0.0.1:16253 network=udp
>     writer.go:29: 2020-02-23T02:46:36.373Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: Adding LAN server: server="Node-da6d1762-c884-62fe-ca04-b57747396848 (Addr: tcp/127.0.0.1:16258) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.373Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: Handled event for server in area: event=member-join server=Node-da6d1762-c884-62fe-ca04-b57747396848.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.373Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Started DNS server: address=127.0.0.1:16253 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.382Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Started HTTP server: address=127.0.0.1:16254 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.382Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.406Z [WARN]  TestRemoteExecGetSpec_ACLAgentToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.406Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16258 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.410Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.410Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.server.raft: vote granted: from=da6d1762-c884-62fe-ca04-b57747396848 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.410Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.410Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16258 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.410Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.410Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: New leader elected: payload=Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.413Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.414Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.414Z [WARN]  TestRemoteExecGetSpec_ACLAgentToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.417Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.420Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.420Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.420Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.427Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.serf.lan: serf: EventMemberUpdate: Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.427Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.serf.wan: serf: EventMemberUpdate: Node-da6d1762-c884-62fe-ca04-b57747396848.dc1
>     writer.go:29: 2020-02-23T02:46:36.427Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: Handled event for server in area: event=member-update server=Node-da6d1762-c884-62fe-ca04-b57747396848.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.433Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.441Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.441Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.441Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.server: Skipping self join check for node since the cluster is too small: node=Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.441Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: member joined, marking health alive: member=Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.444Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.server: Skipping self join check for node since the cluster is too small: node=Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.465Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:36.501Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Synced node info
>     writer.go:29: 2020-02-23T02:46:36.751Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.acl: dropping node from result due to ACLs: node=Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.751Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.acl: dropping node from result due to ACLs: node=Node-da6d1762-c884-62fe-ca04-b57747396848
>     writer.go:29: 2020-02-23T02:46:36.758Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.758Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.758Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.758Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.758Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.758Z [WARN]  TestRemoteExecGetSpec_ACLAgentToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.758Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.758Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.758Z [DEBUG] TestRemoteExecGetSpec_ACLAgentToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.760Z [WARN]  TestRemoteExecGetSpec_ACLAgentToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: consul server down
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Stopping server: protocol=DNS address=127.0.0.1:16253 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Stopping server: protocol=DNS address=127.0.0.1:16253 network=udp
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Stopping server: protocol=HTTP address=127.0.0.1:16254 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.762Z [INFO]  TestRemoteExecGetSpec_ACLAgentToken: Endpoints down
> === CONT  TestPreparedQuery_parseLimit
> --- PASS: TestPreparedQuery_parseLimit (0.00s)
> === CONT  TestPreparedQuery_Delete
> --- PASS: TestRemoteExecGetSpec_ACLToken (0.45s)
>     writer.go:29: 2020-02-23T02:46:36.539Z [WARN]  TestRemoteExecGetSpec_ACLToken: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:36.540Z [WARN]  TestRemoteExecGetSpec_ACLToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.540Z [DEBUG] TestRemoteExecGetSpec_ACLToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.540Z [DEBUG] TestRemoteExecGetSpec_ACLToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.549Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6f7639d4-25f6-87eb-b970-678f8ef072cf Address:127.0.0.1:16264}]"
>     writer.go:29: 2020-02-23T02:46:36.549Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16264 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.550Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.serf.wan: serf: EventMemberJoin: Node-6f7639d4-25f6-87eb-b970-678f8ef072cf.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.550Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.serf.lan: serf: EventMemberJoin: Node-6f7639d4-25f6-87eb-b970-678f8ef072cf 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.550Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: Adding LAN server: server="Node-6f7639d4-25f6-87eb-b970-678f8ef072cf (Addr: tcp/127.0.0.1:16264) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.550Z [INFO]  TestRemoteExecGetSpec_ACLToken: Started DNS server: address=127.0.0.1:16259 network=udp
>     writer.go:29: 2020-02-23T02:46:36.550Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: Handled event for server in area: event=member-join server=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.550Z [INFO]  TestRemoteExecGetSpec_ACLToken: Started DNS server: address=127.0.0.1:16259 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.551Z [INFO]  TestRemoteExecGetSpec_ACLToken: Started HTTP server: address=127.0.0.1:16260 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.551Z [INFO]  TestRemoteExecGetSpec_ACLToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.594Z [WARN]  TestRemoteExecGetSpec_ACLToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.594Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16264 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.674Z [DEBUG] TestRemoteExecGetSpec_ACLToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.674Z [DEBUG] TestRemoteExecGetSpec_ACLToken.server.raft: vote granted: from=6f7639d4-25f6-87eb-b970-678f8ef072cf term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.674Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.674Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16264 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.674Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.674Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: New leader elected: payload=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.677Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:36.678Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:36.678Z [WARN]  TestRemoteExecGetSpec_ACLToken.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:36.681Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:36.690Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:36.690Z [INFO]  TestRemoteExecGetSpec_ACLToken.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.690Z [INFO]  TestRemoteExecGetSpec_ACLToken.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.690Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.serf.lan: serf: EventMemberUpdate: Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.690Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.serf.wan: serf: EventMemberUpdate: Node-6f7639d4-25f6-87eb-b970-678f8ef072cf.dc1
>     writer.go:29: 2020-02-23T02:46:36.691Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: Handled event for server in area: event=member-update server=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.694Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.701Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.701Z [INFO]  TestRemoteExecGetSpec_ACLToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.701Z [DEBUG] TestRemoteExecGetSpec_ACLToken.server: Skipping self join check for node since the cluster is too small: node=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.701Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: member joined, marking health alive: member=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.705Z [DEBUG] TestRemoteExecGetSpec_ACLToken.server: Skipping self join check for node since the cluster is too small: node=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.842Z [DEBUG] TestRemoteExecGetSpec_ACLToken: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:36.880Z [INFO]  TestRemoteExecGetSpec_ACLToken: Synced node info
>     writer.go:29: 2020-02-23T02:46:36.880Z [DEBUG] TestRemoteExecGetSpec_ACLToken: Node info in sync
>     writer.go:29: 2020-02-23T02:46:36.890Z [DEBUG] TestRemoteExecGetSpec_ACLToken.acl: dropping node from result due to ACLs: node=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.890Z [DEBUG] TestRemoteExecGetSpec_ACLToken.acl: dropping node from result due to ACLs: node=Node-6f7639d4-25f6-87eb-b970-678f8ef072cf
>     writer.go:29: 2020-02-23T02:46:36.981Z [INFO]  TestRemoteExecGetSpec_ACLToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:36.981Z [INFO]  TestRemoteExecGetSpec_ACLToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:36.981Z [DEBUG] TestRemoteExecGetSpec_ACLToken.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.981Z [DEBUG] TestRemoteExecGetSpec_ACLToken.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.981Z [DEBUG] TestRemoteExecGetSpec_ACLToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.981Z [WARN]  TestRemoteExecGetSpec_ACLToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.981Z [DEBUG] TestRemoteExecGetSpec_ACLToken.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:36.981Z [DEBUG] TestRemoteExecGetSpec_ACLToken.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:36.981Z [DEBUG] TestRemoteExecGetSpec_ACLToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.982Z [WARN]  TestRemoteExecGetSpec_ACLToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: consul server down
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: Stopping server: protocol=DNS address=127.0.0.1:16259 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: Stopping server: protocol=DNS address=127.0.0.1:16259 network=udp
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: Stopping server: protocol=HTTP address=127.0.0.1:16260 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:36.984Z [INFO]  TestRemoteExecGetSpec_ACLToken: Endpoints down
> === CONT  TestPreparedQuery_Update
> --- PASS: TestPreparedQuery_Delete (0.30s)
>     writer.go:29: 2020-02-23T02:46:36.769Z [WARN]  TestPreparedQuery_Delete: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.769Z [DEBUG] TestPreparedQuery_Delete.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.770Z [DEBUG] TestPreparedQuery_Delete.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.780Z [INFO]  TestPreparedQuery_Delete.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5e7fcd98-0ea2-d30d-af11-250343449f26 Address:127.0.0.1:16276}]"
>     writer.go:29: 2020-02-23T02:46:36.780Z [INFO]  TestPreparedQuery_Delete.server.raft: entering follower state: follower="Node at 127.0.0.1:16276 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.781Z [INFO]  TestPreparedQuery_Delete.server.serf.wan: serf: EventMemberJoin: Node-5e7fcd98-0ea2-d30d-af11-250343449f26.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.781Z [INFO]  TestPreparedQuery_Delete.server.serf.lan: serf: EventMemberJoin: Node-5e7fcd98-0ea2-d30d-af11-250343449f26 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.782Z [INFO]  TestPreparedQuery_Delete.server: Handled event for server in area: event=member-join server=Node-5e7fcd98-0ea2-d30d-af11-250343449f26.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.782Z [INFO]  TestPreparedQuery_Delete.server: Adding LAN server: server="Node-5e7fcd98-0ea2-d30d-af11-250343449f26 (Addr: tcp/127.0.0.1:16276) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.782Z [INFO]  TestPreparedQuery_Delete: Started DNS server: address=127.0.0.1:16271 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.782Z [INFO]  TestPreparedQuery_Delete: Started DNS server: address=127.0.0.1:16271 network=udp
>     writer.go:29: 2020-02-23T02:46:36.782Z [INFO]  TestPreparedQuery_Delete: Started HTTP server: address=127.0.0.1:16272 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.782Z [INFO]  TestPreparedQuery_Delete: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.839Z [WARN]  TestPreparedQuery_Delete.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.839Z [INFO]  TestPreparedQuery_Delete.server.raft: entering candidate state: node="Node at 127.0.0.1:16276 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.866Z [DEBUG] TestPreparedQuery_Delete.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.866Z [DEBUG] TestPreparedQuery_Delete.server.raft: vote granted: from=5e7fcd98-0ea2-d30d-af11-250343449f26 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.866Z [INFO]  TestPreparedQuery_Delete.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.866Z [INFO]  TestPreparedQuery_Delete.server.raft: entering leader state: leader="Node at 127.0.0.1:16276 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.866Z [INFO]  TestPreparedQuery_Delete.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.866Z [INFO]  TestPreparedQuery_Delete.server: New leader elected: payload=Node-5e7fcd98-0ea2-d30d-af11-250343449f26
>     writer.go:29: 2020-02-23T02:46:36.983Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.991Z [INFO]  TestPreparedQuery_Delete.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.991Z [INFO]  TestPreparedQuery_Delete.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.991Z [DEBUG] TestPreparedQuery_Delete.server: Skipping self join check for node since the cluster is too small: node=Node-5e7fcd98-0ea2-d30d-af11-250343449f26
>     writer.go:29: 2020-02-23T02:46:36.991Z [INFO]  TestPreparedQuery_Delete.server: member joined, marking health alive: member=Node-5e7fcd98-0ea2-d30d-af11-250343449f26
>     writer.go:29: 2020-02-23T02:46:37.061Z [WARN]  TestPreparedQuery_Delete.server: endpoint injected; this should only be used for testing
>     writer.go:29: 2020-02-23T02:46:37.061Z [INFO]  TestPreparedQuery_Delete: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:37.061Z [INFO]  TestPreparedQuery_Delete.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:37.061Z [DEBUG] TestPreparedQuery_Delete.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.061Z [WARN]  TestPreparedQuery_Delete.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.061Z [ERROR] TestPreparedQuery_Delete.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:37.061Z [DEBUG] TestPreparedQuery_Delete.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.063Z [WARN]  TestPreparedQuery_Delete.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: consul server down
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: shutdown complete
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: Stopping server: protocol=DNS address=127.0.0.1:16271 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: Stopping server: protocol=DNS address=127.0.0.1:16271 network=udp
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: Stopping server: protocol=HTTP address=127.0.0.1:16272 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:37.065Z [INFO]  TestPreparedQuery_Delete: Endpoints down
> === CONT  TestPreparedQuery_Get
> === RUN   TestPreparedQuery_Get/#00
> --- PASS: TestRemoteExecGetSpec (0.43s)
>     writer.go:29: 2020-02-23T02:46:36.736Z [WARN]  TestRemoteExecGetSpec: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.737Z [DEBUG] TestRemoteExecGetSpec.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.737Z [DEBUG] TestRemoteExecGetSpec.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.746Z [INFO]  TestRemoteExecGetSpec.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa Address:127.0.0.1:16270}]"
>     writer.go:29: 2020-02-23T02:46:36.746Z [INFO]  TestRemoteExecGetSpec.server.raft: entering follower state: follower="Node at 127.0.0.1:16270 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:36.747Z [INFO]  TestRemoteExecGetSpec.server.serf.wan: serf: EventMemberJoin: Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.747Z [INFO]  TestRemoteExecGetSpec.server.serf.lan: serf: EventMemberJoin: Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:36.748Z [INFO]  TestRemoteExecGetSpec: Started DNS server: address=127.0.0.1:16265 network=udp
>     writer.go:29: 2020-02-23T02:46:36.748Z [INFO]  TestRemoteExecGetSpec.server: Adding LAN server: server="Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa (Addr: tcp/127.0.0.1:16270) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:36.748Z [INFO]  TestRemoteExecGetSpec.server: Handled event for server in area: event=member-join server=Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:36.748Z [INFO]  TestRemoteExecGetSpec: Started DNS server: address=127.0.0.1:16265 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.748Z [INFO]  TestRemoteExecGetSpec: Started HTTP server: address=127.0.0.1:16266 network=tcp
>     writer.go:29: 2020-02-23T02:46:36.748Z [INFO]  TestRemoteExecGetSpec: started state syncer
>     writer.go:29: 2020-02-23T02:46:36.797Z [WARN]  TestRemoteExecGetSpec.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:36.797Z [INFO]  TestRemoteExecGetSpec.server.raft: entering candidate state: node="Node at 127.0.0.1:16270 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:36.800Z [DEBUG] TestRemoteExecGetSpec.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:36.800Z [DEBUG] TestRemoteExecGetSpec.server.raft: vote granted: from=5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:36.800Z [INFO]  TestRemoteExecGetSpec.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:36.800Z [INFO]  TestRemoteExecGetSpec.server.raft: entering leader state: leader="Node at 127.0.0.1:16270 [Leader]"
>     writer.go:29: 2020-02-23T02:46:36.800Z [INFO]  TestRemoteExecGetSpec.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:36.800Z [INFO]  TestRemoteExecGetSpec.server: New leader elected: payload=Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa
>     writer.go:29: 2020-02-23T02:46:36.807Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:36.815Z [INFO]  TestRemoteExecGetSpec.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:36.815Z [INFO]  TestRemoteExecGetSpec.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:36.815Z [DEBUG] TestRemoteExecGetSpec.server: Skipping self join check for node since the cluster is too small: node=Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa
>     writer.go:29: 2020-02-23T02:46:36.815Z [INFO]  TestRemoteExecGetSpec.server: member joined, marking health alive: member=Node-5ff1714b-cd14-dcf6-a3f9-ab4ca0d797fa
>     writer.go:29: 2020-02-23T02:46:36.936Z [DEBUG] TestRemoteExecGetSpec: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:36.980Z [INFO]  TestRemoteExecGetSpec: Synced node info
>     writer.go:29: 2020-02-23T02:46:37.156Z [INFO]  TestRemoteExecGetSpec: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:37.156Z [INFO]  TestRemoteExecGetSpec.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:37.156Z [DEBUG] TestRemoteExecGetSpec.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.156Z [WARN]  TestRemoteExecGetSpec.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.156Z [DEBUG] TestRemoteExecGetSpec.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.159Z [WARN]  TestRemoteExecGetSpec.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: consul server down
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: shutdown complete
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: Stopping server: protocol=DNS address=127.0.0.1:16265 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: Stopping server: protocol=DNS address=127.0.0.1:16265 network=udp
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: Stopping server: protocol=HTTP address=127.0.0.1:16266 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:37.161Z [INFO]  TestRemoteExecGetSpec: Endpoints down
> === CONT  TestPreparedQuery_Explain
> === RUN   TestPreparedQuery_Explain/#00
> --- PASS: TestPreparedQuery_Update (0.36s)
>     writer.go:29: 2020-02-23T02:46:36.992Z [WARN]  TestPreparedQuery_Update: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:36.992Z [DEBUG] TestPreparedQuery_Update.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:36.992Z [DEBUG] TestPreparedQuery_Update.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:37.001Z [INFO]  TestPreparedQuery_Update.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bb801463-4cec-f6f7-d9d8-0ce349bb2605 Address:127.0.0.1:16282}]"
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update.server.serf.wan: serf: EventMemberJoin: Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update.server.serf.lan: serf: EventMemberJoin: Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update: Started DNS server: address=127.0.0.1:16277 network=udp
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update.server.raft: entering follower state: follower="Node at 127.0.0.1:16282 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update.server: Adding LAN server: server="Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605 (Addr: tcp/127.0.0.1:16282) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update.server: Handled event for server in area: event=member-join server=Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:37.002Z [INFO]  TestPreparedQuery_Update: Started DNS server: address=127.0.0.1:16277 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.003Z [INFO]  TestPreparedQuery_Update: Started HTTP server: address=127.0.0.1:16278 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.003Z [INFO]  TestPreparedQuery_Update: started state syncer
>     writer.go:29: 2020-02-23T02:46:37.059Z [WARN]  TestPreparedQuery_Update.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:37.059Z [INFO]  TestPreparedQuery_Update.server.raft: entering candidate state: node="Node at 127.0.0.1:16282 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:37.062Z [DEBUG] TestPreparedQuery_Update.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:37.062Z [DEBUG] TestPreparedQuery_Update.server.raft: vote granted: from=bb801463-4cec-f6f7-d9d8-0ce349bb2605 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:37.062Z [INFO]  TestPreparedQuery_Update.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:37.062Z [INFO]  TestPreparedQuery_Update.server.raft: entering leader state: leader="Node at 127.0.0.1:16282 [Leader]"
>     writer.go:29: 2020-02-23T02:46:37.062Z [INFO]  TestPreparedQuery_Update.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:37.062Z [INFO]  TestPreparedQuery_Update.server: New leader elected: payload=Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605
>     writer.go:29: 2020-02-23T02:46:37.076Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:37.083Z [INFO]  TestPreparedQuery_Update.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:37.083Z [INFO]  TestPreparedQuery_Update.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.083Z [DEBUG] TestPreparedQuery_Update.server: Skipping self join check for node since the cluster is too small: node=Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605
>     writer.go:29: 2020-02-23T02:46:37.083Z [INFO]  TestPreparedQuery_Update.server: member joined, marking health alive: member=Node-bb801463-4cec-f6f7-d9d8-0ce349bb2605
>     writer.go:29: 2020-02-23T02:46:37.204Z [DEBUG] TestPreparedQuery_Update: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:37.207Z [INFO]  TestPreparedQuery_Update: Synced node info
>     writer.go:29: 2020-02-23T02:46:37.207Z [DEBUG] TestPreparedQuery_Update: Node info in sync
>     writer.go:29: 2020-02-23T02:46:37.345Z [WARN]  TestPreparedQuery_Update.server: endpoint injected; this should only be used for testing
>     writer.go:29: 2020-02-23T02:46:37.345Z [INFO]  TestPreparedQuery_Update: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:37.345Z [INFO]  TestPreparedQuery_Update.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:37.345Z [DEBUG] TestPreparedQuery_Update.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.345Z [WARN]  TestPreparedQuery_Update.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.345Z [DEBUG] TestPreparedQuery_Update.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.347Z [WARN]  TestPreparedQuery_Update.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.348Z [INFO]  TestPreparedQuery_Update.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: consul server down
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: shutdown complete
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: Stopping server: protocol=DNS address=127.0.0.1:16277 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: Stopping server: protocol=DNS address=127.0.0.1:16277 network=udp
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: Stopping server: protocol=HTTP address=127.0.0.1:16278 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:37.349Z [INFO]  TestPreparedQuery_Update: Endpoints down
> === CONT  TestPreparedQuery_ExecuteCached
> === RUN   TestPreparedQuery_Explain/#01
> === RUN   TestPreparedQuery_Get/#01
> --- PASS: TestPreparedQuery_Get (0.57s)
>     --- PASS: TestPreparedQuery_Get/#00 (0.34s)
>         writer.go:29: 2020-02-23T02:46:37.074Z [WARN]  TestPreparedQuery_Get/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.074Z [DEBUG] TestPreparedQuery_Get/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.075Z [DEBUG] TestPreparedQuery_Get/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.087Z [INFO]  TestPreparedQuery_Get/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ac1638af-1571-3904-e0eb-ae20cb6a2ee8 Address:127.0.0.1:16288}]"
>         writer.go:29: 2020-02-23T02:46:37.088Z [INFO]  TestPreparedQuery_Get/#00.server.serf.wan: serf: EventMemberJoin: Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.088Z [INFO]  TestPreparedQuery_Get/#00.server.serf.lan: serf: EventMemberJoin: Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.088Z [INFO]  TestPreparedQuery_Get/#00: Started DNS server: address=127.0.0.1:16283 network=udp
>         writer.go:29: 2020-02-23T02:46:37.088Z [INFO]  TestPreparedQuery_Get/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:16288 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.089Z [INFO]  TestPreparedQuery_Get/#00.server: Adding LAN server: server="Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8 (Addr: tcp/127.0.0.1:16288) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.089Z [INFO]  TestPreparedQuery_Get/#00.server: Handled event for server in area: event=member-join server=Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.089Z [INFO]  TestPreparedQuery_Get/#00: Started DNS server: address=127.0.0.1:16283 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.089Z [INFO]  TestPreparedQuery_Get/#00: Started HTTP server: address=127.0.0.1:16284 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.089Z [INFO]  TestPreparedQuery_Get/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.151Z [WARN]  TestPreparedQuery_Get/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.151Z [INFO]  TestPreparedQuery_Get/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:16288 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.155Z [DEBUG] TestPreparedQuery_Get/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.155Z [DEBUG] TestPreparedQuery_Get/#00.server.raft: vote granted: from=ac1638af-1571-3904-e0eb-ae20cb6a2ee8 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.155Z [INFO]  TestPreparedQuery_Get/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.155Z [INFO]  TestPreparedQuery_Get/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:16288 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.155Z [INFO]  TestPreparedQuery_Get/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.155Z [INFO]  TestPreparedQuery_Get/#00.server: New leader elected: payload=Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8
>         writer.go:29: 2020-02-23T02:46:37.167Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.176Z [INFO]  TestPreparedQuery_Get/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.176Z [INFO]  TestPreparedQuery_Get/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.176Z [DEBUG] TestPreparedQuery_Get/#00.server: Skipping self join check for node since the cluster is too small: node=Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8
>         writer.go:29: 2020-02-23T02:46:37.176Z [INFO]  TestPreparedQuery_Get/#00.server: member joined, marking health alive: member=Node-ac1638af-1571-3904-e0eb-ae20cb6a2ee8
>         writer.go:29: 2020-02-23T02:46:37.404Z [WARN]  TestPreparedQuery_Get/#00.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:37.405Z [INFO]  TestPreparedQuery_Get/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:37.405Z [INFO]  TestPreparedQuery_Get/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:37.405Z [DEBUG] TestPreparedQuery_Get/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.405Z [WARN]  TestPreparedQuery_Get/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.404Z [DEBUG] TestPreparedQuery_Get/#00: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:37.405Z [DEBUG] TestPreparedQuery_Get/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.407Z [WARN]  TestPreparedQuery_Get/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.407Z [INFO]  TestPreparedQuery_Get/#00: Synced node info
>         writer.go:29: 2020-02-23T02:46:37.408Z [DEBUG] TestPreparedQuery_Get/#00: Node info in sync
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: Stopping server: protocol=DNS address=127.0.0.1:16283 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: Stopping server: protocol=DNS address=127.0.0.1:16283 network=udp
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: Stopping server: protocol=HTTP address=127.0.0.1:16284 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:37.409Z [INFO]  TestPreparedQuery_Get/#00: Endpoints down
>     --- PASS: TestPreparedQuery_Get/#01 (0.23s)
>         writer.go:29: 2020-02-23T02:46:37.416Z [WARN]  TestPreparedQuery_Get/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.416Z [DEBUG] TestPreparedQuery_Get/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.417Z [DEBUG] TestPreparedQuery_Get/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.429Z [INFO]  TestPreparedQuery_Get/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bf1ee4f4-b600-86d4-6079-3d44559ad913 Address:127.0.0.1:16312}]"
>         writer.go:29: 2020-02-23T02:46:37.429Z [INFO]  TestPreparedQuery_Get/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16312 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.430Z [INFO]  TestPreparedQuery_Get/#01.server.serf.wan: serf: EventMemberJoin: Node-bf1ee4f4-b600-86d4-6079-3d44559ad913.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.430Z [INFO]  TestPreparedQuery_Get/#01.server.serf.lan: serf: EventMemberJoin: Node-bf1ee4f4-b600-86d4-6079-3d44559ad913 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.430Z [INFO]  TestPreparedQuery_Get/#01.server: Handled event for server in area: event=member-join server=Node-bf1ee4f4-b600-86d4-6079-3d44559ad913.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.430Z [INFO]  TestPreparedQuery_Get/#01.server: Adding LAN server: server="Node-bf1ee4f4-b600-86d4-6079-3d44559ad913 (Addr: tcp/127.0.0.1:16312) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.431Z [INFO]  TestPreparedQuery_Get/#01: Started DNS server: address=127.0.0.1:16307 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.431Z [INFO]  TestPreparedQuery_Get/#01: Started DNS server: address=127.0.0.1:16307 network=udp
>         writer.go:29: 2020-02-23T02:46:37.431Z [INFO]  TestPreparedQuery_Get/#01: Started HTTP server: address=127.0.0.1:16308 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.431Z [INFO]  TestPreparedQuery_Get/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.472Z [WARN]  TestPreparedQuery_Get/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.472Z [INFO]  TestPreparedQuery_Get/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16312 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.476Z [DEBUG] TestPreparedQuery_Get/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.476Z [DEBUG] TestPreparedQuery_Get/#01.server.raft: vote granted: from=bf1ee4f4-b600-86d4-6079-3d44559ad913 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.476Z [INFO]  TestPreparedQuery_Get/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.476Z [INFO]  TestPreparedQuery_Get/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16312 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.477Z [INFO]  TestPreparedQuery_Get/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.477Z [INFO]  TestPreparedQuery_Get/#01.server: New leader elected: payload=Node-bf1ee4f4-b600-86d4-6079-3d44559ad913
>         writer.go:29: 2020-02-23T02:46:37.484Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.494Z [INFO]  TestPreparedQuery_Get/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.494Z [INFO]  TestPreparedQuery_Get/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.494Z [DEBUG] TestPreparedQuery_Get/#01.server: Skipping self join check for node since the cluster is too small: node=Node-bf1ee4f4-b600-86d4-6079-3d44559ad913
>         writer.go:29: 2020-02-23T02:46:37.494Z [INFO]  TestPreparedQuery_Get/#01.server: member joined, marking health alive: member=Node-bf1ee4f4-b600-86d4-6079-3d44559ad913
>         writer.go:29: 2020-02-23T02:46:37.608Z [DEBUG] TestPreparedQuery_Get/#01: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:37.631Z [INFO]  TestPreparedQuery_Get/#01: Synced node info
>         writer.go:29: 2020-02-23T02:46:37.635Z [INFO]  TestPreparedQuery_Get/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:37.635Z [INFO]  TestPreparedQuery_Get/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:37.635Z [DEBUG] TestPreparedQuery_Get/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.635Z [WARN]  TestPreparedQuery_Get/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.635Z [DEBUG] TestPreparedQuery_Get/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.637Z [WARN]  TestPreparedQuery_Get/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.638Z [INFO]  TestPreparedQuery_Get/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: Stopping server: protocol=DNS address=127.0.0.1:16307 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: Stopping server: protocol=DNS address=127.0.0.1:16307 network=udp
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: Stopping server: protocol=HTTP address=127.0.0.1:16308 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:37.639Z [INFO]  TestPreparedQuery_Get/#01: Endpoints down
> === CONT  TestPreparedQuery_Execute
> === RUN   TestPreparedQuery_Execute/#00
> --- PASS: TestPreparedQuery_ExecuteCached (0.30s)
>     writer.go:29: 2020-02-23T02:46:37.356Z [WARN]  TestPreparedQuery_ExecuteCached: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:37.356Z [DEBUG] TestPreparedQuery_ExecuteCached.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:37.357Z [DEBUG] TestPreparedQuery_ExecuteCached.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:37.366Z [INFO]  TestPreparedQuery_ExecuteCached.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a112ef80-98da-10f7-7642-f21a4450c508 Address:127.0.0.1:16300}]"
>     writer.go:29: 2020-02-23T02:46:37.367Z [INFO]  TestPreparedQuery_ExecuteCached.server.serf.wan: serf: EventMemberJoin: Node-a112ef80-98da-10f7-7642-f21a4450c508.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:37.370Z [INFO]  TestPreparedQuery_ExecuteCached.server.raft: entering follower state: follower="Node at 127.0.0.1:16300 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:37.375Z [INFO]  TestPreparedQuery_ExecuteCached.server.serf.lan: serf: EventMemberJoin: Node-a112ef80-98da-10f7-7642-f21a4450c508 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:37.376Z [INFO]  TestPreparedQuery_ExecuteCached: Started DNS server: address=127.0.0.1:16295 network=udp
>     writer.go:29: 2020-02-23T02:46:37.377Z [INFO]  TestPreparedQuery_ExecuteCached.server: Adding LAN server: server="Node-a112ef80-98da-10f7-7642-f21a4450c508 (Addr: tcp/127.0.0.1:16300) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:37.377Z [INFO]  TestPreparedQuery_ExecuteCached.server: Handled event for server in area: event=member-join server=Node-a112ef80-98da-10f7-7642-f21a4450c508.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:37.377Z [INFO]  TestPreparedQuery_ExecuteCached: Started DNS server: address=127.0.0.1:16295 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.378Z [INFO]  TestPreparedQuery_ExecuteCached: Started HTTP server: address=127.0.0.1:16296 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.378Z [INFO]  TestPreparedQuery_ExecuteCached: started state syncer
>     writer.go:29: 2020-02-23T02:46:37.432Z [WARN]  TestPreparedQuery_ExecuteCached.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:37.432Z [INFO]  TestPreparedQuery_ExecuteCached.server.raft: entering candidate state: node="Node at 127.0.0.1:16300 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:37.435Z [DEBUG] TestPreparedQuery_ExecuteCached.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:37.435Z [DEBUG] TestPreparedQuery_ExecuteCached.server.raft: vote granted: from=a112ef80-98da-10f7-7642-f21a4450c508 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:37.435Z [INFO]  TestPreparedQuery_ExecuteCached.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:37.435Z [INFO]  TestPreparedQuery_ExecuteCached.server.raft: entering leader state: leader="Node at 127.0.0.1:16300 [Leader]"
>     writer.go:29: 2020-02-23T02:46:37.435Z [INFO]  TestPreparedQuery_ExecuteCached.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:37.435Z [INFO]  TestPreparedQuery_ExecuteCached.server: New leader elected: payload=Node-a112ef80-98da-10f7-7642-f21a4450c508
>     writer.go:29: 2020-02-23T02:46:37.442Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:37.452Z [INFO]  TestPreparedQuery_ExecuteCached.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:37.453Z [INFO]  TestPreparedQuery_ExecuteCached.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.453Z [DEBUG] TestPreparedQuery_ExecuteCached.server: Skipping self join check for node since the cluster is too small: node=Node-a112ef80-98da-10f7-7642-f21a4450c508
>     writer.go:29: 2020-02-23T02:46:37.453Z [INFO]  TestPreparedQuery_ExecuteCached.server: member joined, marking health alive: member=Node-a112ef80-98da-10f7-7642-f21a4450c508
>     writer.go:29: 2020-02-23T02:46:37.517Z [DEBUG] TestPreparedQuery_ExecuteCached: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:37.519Z [INFO]  TestPreparedQuery_ExecuteCached: Synced node info
>     writer.go:29: 2020-02-23T02:46:37.519Z [DEBUG] TestPreparedQuery_ExecuteCached: Node info in sync
>     writer.go:29: 2020-02-23T02:46:37.645Z [WARN]  TestPreparedQuery_ExecuteCached.server: endpoint injected; this should only be used for testing
>     writer.go:29: 2020-02-23T02:46:37.645Z [INFO]  TestPreparedQuery_ExecuteCached: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:37.645Z [INFO]  TestPreparedQuery_ExecuteCached.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:37.645Z [DEBUG] TestPreparedQuery_ExecuteCached.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.645Z [WARN]  TestPreparedQuery_ExecuteCached.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.645Z [DEBUG] TestPreparedQuery_ExecuteCached.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:37.647Z [WARN]  TestPreparedQuery_ExecuteCached.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: consul server down
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: shutdown complete
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: Stopping server: protocol=DNS address=127.0.0.1:16295 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: Stopping server: protocol=DNS address=127.0.0.1:16295 network=udp
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: Stopping server: protocol=HTTP address=127.0.0.1:16296 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:37.649Z [INFO]  TestPreparedQuery_ExecuteCached: Endpoints down
> === CONT  TestPreparedQuery_List
> === RUN   TestPreparedQuery_List/#00
> === RUN   TestPreparedQuery_Explain/#02
> === RUN   TestPreparedQuery_List/#01
> --- PASS: TestPreparedQuery_Explain (0.72s)
>     --- PASS: TestPreparedQuery_Explain/#00 (0.23s)
>         writer.go:29: 2020-02-23T02:46:37.169Z [WARN]  TestPreparedQuery_Explain/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.170Z [DEBUG] TestPreparedQuery_Explain/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.170Z [DEBUG] TestPreparedQuery_Explain/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.204Z [INFO]  TestPreparedQuery_Explain/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c6e66e23-1e77-609e-1914-6373a30bd066 Address:127.0.0.1:16294}]"
>         writer.go:29: 2020-02-23T02:46:37.204Z [INFO]  TestPreparedQuery_Explain/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:16294 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.205Z [INFO]  TestPreparedQuery_Explain/#00.server.serf.wan: serf: EventMemberJoin: Node-c6e66e23-1e77-609e-1914-6373a30bd066.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00.server.serf.lan: serf: EventMemberJoin: Node-c6e66e23-1e77-609e-1914-6373a30bd066 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00: Started DNS server: address=127.0.0.1:16289 network=udp
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00.server: Adding LAN server: server="Node-c6e66e23-1e77-609e-1914-6373a30bd066 (Addr: tcp/127.0.0.1:16294) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00.server: Handled event for server in area: event=member-join server=Node-c6e66e23-1e77-609e-1914-6373a30bd066.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00: Started DNS server: address=127.0.0.1:16289 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00: Started HTTP server: address=127.0.0.1:16290 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.206Z [INFO]  TestPreparedQuery_Explain/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.263Z [WARN]  TestPreparedQuery_Explain/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.263Z [INFO]  TestPreparedQuery_Explain/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:16294 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.267Z [DEBUG] TestPreparedQuery_Explain/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.267Z [DEBUG] TestPreparedQuery_Explain/#00.server.raft: vote granted: from=c6e66e23-1e77-609e-1914-6373a30bd066 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.267Z [INFO]  TestPreparedQuery_Explain/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.267Z [INFO]  TestPreparedQuery_Explain/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:16294 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.267Z [INFO]  TestPreparedQuery_Explain/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.267Z [INFO]  TestPreparedQuery_Explain/#00.server: New leader elected: payload=Node-c6e66e23-1e77-609e-1914-6373a30bd066
>         writer.go:29: 2020-02-23T02:46:37.275Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.283Z [INFO]  TestPreparedQuery_Explain/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.283Z [INFO]  TestPreparedQuery_Explain/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.283Z [DEBUG] TestPreparedQuery_Explain/#00.server: Skipping self join check for node since the cluster is too small: node=Node-c6e66e23-1e77-609e-1914-6373a30bd066
>         writer.go:29: 2020-02-23T02:46:37.283Z [INFO]  TestPreparedQuery_Explain/#00.server: member joined, marking health alive: member=Node-c6e66e23-1e77-609e-1914-6373a30bd066
>         writer.go:29: 2020-02-23T02:46:37.308Z [DEBUG] TestPreparedQuery_Explain/#00: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:37.310Z [INFO]  TestPreparedQuery_Explain/#00: Synced node info
>         writer.go:29: 2020-02-23T02:46:37.310Z [DEBUG] TestPreparedQuery_Explain/#00: Node info in sync
>         writer.go:29: 2020-02-23T02:46:37.387Z [WARN]  TestPreparedQuery_Explain/#00.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:37.388Z [INFO]  TestPreparedQuery_Explain/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:37.388Z [INFO]  TestPreparedQuery_Explain/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:37.388Z [DEBUG] TestPreparedQuery_Explain/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.388Z [WARN]  TestPreparedQuery_Explain/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.388Z [DEBUG] TestPreparedQuery_Explain/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.390Z [WARN]  TestPreparedQuery_Explain/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: Stopping server: protocol=DNS address=127.0.0.1:16289 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: Stopping server: protocol=DNS address=127.0.0.1:16289 network=udp
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: Stopping server: protocol=HTTP address=127.0.0.1:16290 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:37.394Z [INFO]  TestPreparedQuery_Explain/#00: Endpoints down
>     --- PASS: TestPreparedQuery_Explain/#01 (0.30s)
>         writer.go:29: 2020-02-23T02:46:37.409Z [WARN]  TestPreparedQuery_Explain/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.409Z [DEBUG] TestPreparedQuery_Explain/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.409Z [DEBUG] TestPreparedQuery_Explain/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.420Z [INFO]  TestPreparedQuery_Explain/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:724f8a3d-8d78-8b35-f925-f1c566391124 Address:127.0.0.1:16306}]"
>         writer.go:29: 2020-02-23T02:46:37.420Z [INFO]  TestPreparedQuery_Explain/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16306 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.420Z [INFO]  TestPreparedQuery_Explain/#01.server.serf.wan: serf: EventMemberJoin: Node-724f8a3d-8d78-8b35-f925-f1c566391124.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.420Z [INFO]  TestPreparedQuery_Explain/#01.server.serf.lan: serf: EventMemberJoin: Node-724f8a3d-8d78-8b35-f925-f1c566391124 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.421Z [INFO]  TestPreparedQuery_Explain/#01: Started DNS server: address=127.0.0.1:16301 network=udp
>         writer.go:29: 2020-02-23T02:46:37.421Z [INFO]  TestPreparedQuery_Explain/#01.server: Adding LAN server: server="Node-724f8a3d-8d78-8b35-f925-f1c566391124 (Addr: tcp/127.0.0.1:16306) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.421Z [INFO]  TestPreparedQuery_Explain/#01.server: Handled event for server in area: event=member-join server=Node-724f8a3d-8d78-8b35-f925-f1c566391124.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.421Z [INFO]  TestPreparedQuery_Explain/#01: Started DNS server: address=127.0.0.1:16301 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.421Z [INFO]  TestPreparedQuery_Explain/#01: Started HTTP server: address=127.0.0.1:16302 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.421Z [INFO]  TestPreparedQuery_Explain/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.475Z [WARN]  TestPreparedQuery_Explain/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.475Z [INFO]  TestPreparedQuery_Explain/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16306 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.478Z [DEBUG] TestPreparedQuery_Explain/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.479Z [DEBUG] TestPreparedQuery_Explain/#01.server.raft: vote granted: from=724f8a3d-8d78-8b35-f925-f1c566391124 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.479Z [INFO]  TestPreparedQuery_Explain/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.479Z [INFO]  TestPreparedQuery_Explain/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16306 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.479Z [INFO]  TestPreparedQuery_Explain/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.479Z [INFO]  TestPreparedQuery_Explain/#01.server: New leader elected: payload=Node-724f8a3d-8d78-8b35-f925-f1c566391124
>         writer.go:29: 2020-02-23T02:46:37.486Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.494Z [INFO]  TestPreparedQuery_Explain/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.494Z [INFO]  TestPreparedQuery_Explain/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.494Z [DEBUG] TestPreparedQuery_Explain/#01.server: Skipping self join check for node since the cluster is too small: node=Node-724f8a3d-8d78-8b35-f925-f1c566391124
>         writer.go:29: 2020-02-23T02:46:37.494Z [INFO]  TestPreparedQuery_Explain/#01.server: member joined, marking health alive: member=Node-724f8a3d-8d78-8b35-f925-f1c566391124
>         writer.go:29: 2020-02-23T02:46:37.687Z [INFO]  TestPreparedQuery_Explain/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:37.687Z [INFO]  TestPreparedQuery_Explain/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:37.687Z [DEBUG] TestPreparedQuery_Explain/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.687Z [WARN]  TestPreparedQuery_Explain/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.687Z [ERROR] TestPreparedQuery_Explain/#01.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:37.687Z [DEBUG] TestPreparedQuery_Explain/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.689Z [WARN]  TestPreparedQuery_Explain/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: Stopping server: protocol=DNS address=127.0.0.1:16301 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: Stopping server: protocol=DNS address=127.0.0.1:16301 network=udp
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: Stopping server: protocol=HTTP address=127.0.0.1:16302 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:37.691Z [INFO]  TestPreparedQuery_Explain/#01: Endpoints down
>     --- PASS: TestPreparedQuery_Explain/#02 (0.19s)
>         writer.go:29: 2020-02-23T02:46:37.699Z [WARN]  TestPreparedQuery_Explain/#02: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.699Z [DEBUG] TestPreparedQuery_Explain/#02.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.699Z [DEBUG] TestPreparedQuery_Explain/#02.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.788Z [INFO]  TestPreparedQuery_Explain/#02.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9ec60058-39fb-ab6a-1438-c90d22635c21 Address:127.0.0.1:16330}]"
>         writer.go:29: 2020-02-23T02:46:37.788Z [INFO]  TestPreparedQuery_Explain/#02.server.raft: entering follower state: follower="Node at 127.0.0.1:16330 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.788Z [INFO]  TestPreparedQuery_Explain/#02.server.serf.wan: serf: EventMemberJoin: Node-9ec60058-39fb-ab6a-1438-c90d22635c21.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.789Z [INFO]  TestPreparedQuery_Explain/#02.server.serf.lan: serf: EventMemberJoin: Node-9ec60058-39fb-ab6a-1438-c90d22635c21 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.790Z [INFO]  TestPreparedQuery_Explain/#02.server: Adding LAN server: server="Node-9ec60058-39fb-ab6a-1438-c90d22635c21 (Addr: tcp/127.0.0.1:16330) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.790Z [INFO]  TestPreparedQuery_Explain/#02: Started DNS server: address=127.0.0.1:16325 network=udp
>         writer.go:29: 2020-02-23T02:46:37.790Z [INFO]  TestPreparedQuery_Explain/#02.server: Handled event for server in area: event=member-join server=Node-9ec60058-39fb-ab6a-1438-c90d22635c21.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.790Z [INFO]  TestPreparedQuery_Explain/#02: Started DNS server: address=127.0.0.1:16325 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.790Z [INFO]  TestPreparedQuery_Explain/#02: Started HTTP server: address=127.0.0.1:16326 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.790Z [INFO]  TestPreparedQuery_Explain/#02: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.852Z [WARN]  TestPreparedQuery_Explain/#02.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.852Z [INFO]  TestPreparedQuery_Explain/#02.server.raft: entering candidate state: node="Node at 127.0.0.1:16330 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.855Z [DEBUG] TestPreparedQuery_Explain/#02.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.855Z [DEBUG] TestPreparedQuery_Explain/#02.server.raft: vote granted: from=9ec60058-39fb-ab6a-1438-c90d22635c21 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.855Z [INFO]  TestPreparedQuery_Explain/#02.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.855Z [INFO]  TestPreparedQuery_Explain/#02.server.raft: entering leader state: leader="Node at 127.0.0.1:16330 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.855Z [INFO]  TestPreparedQuery_Explain/#02.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.855Z [INFO]  TestPreparedQuery_Explain/#02.server: New leader elected: payload=Node-9ec60058-39fb-ab6a-1438-c90d22635c21
>         writer.go:29: 2020-02-23T02:46:37.863Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.871Z [INFO]  TestPreparedQuery_Explain/#02.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.871Z [INFO]  TestPreparedQuery_Explain/#02.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.871Z [DEBUG] TestPreparedQuery_Explain/#02.server: Skipping self join check for node since the cluster is too small: node=Node-9ec60058-39fb-ab6a-1438-c90d22635c21
>         writer.go:29: 2020-02-23T02:46:37.871Z [INFO]  TestPreparedQuery_Explain/#02.server: member joined, marking health alive: member=Node-9ec60058-39fb-ab6a-1438-c90d22635c21
>         writer.go:29: 2020-02-23T02:46:37.880Z [WARN]  TestPreparedQuery_Explain/#02.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:37.880Z [INFO]  TestPreparedQuery_Explain/#02: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:37.880Z [INFO]  TestPreparedQuery_Explain/#02.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:37.880Z [DEBUG] TestPreparedQuery_Explain/#02.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.880Z [WARN]  TestPreparedQuery_Explain/#02.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.880Z [ERROR] TestPreparedQuery_Explain/#02.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:37.880Z [DEBUG] TestPreparedQuery_Explain/#02.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.882Z [WARN]  TestPreparedQuery_Explain/#02.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.883Z [INFO]  TestPreparedQuery_Explain/#02.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: consul server down
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: shutdown complete
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: Stopping server: protocol=DNS address=127.0.0.1:16325 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: Stopping server: protocol=DNS address=127.0.0.1:16325 network=udp
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: Stopping server: protocol=HTTP address=127.0.0.1:16326 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:37.884Z [INFO]  TestPreparedQuery_Explain/#02: Endpoints down
> === CONT  TestPreparedQuery_Create
> === RUN   TestPreparedQuery_Execute/#01
> --- PASS: TestPreparedQuery_List (0.60s)
>     --- PASS: TestPreparedQuery_List/#00 (0.19s)
>         writer.go:29: 2020-02-23T02:46:37.656Z [WARN]  TestPreparedQuery_List/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.656Z [DEBUG] TestPreparedQuery_List/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.656Z [DEBUG] TestPreparedQuery_List/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.666Z [INFO]  TestPreparedQuery_List/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:77314078-e399-b684-1de4-64b3a7fbd836 Address:127.0.0.1:16318}]"
>         writer.go:29: 2020-02-23T02:46:37.666Z [INFO]  TestPreparedQuery_List/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:16318 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.666Z [INFO]  TestPreparedQuery_List/#00.server.serf.wan: serf: EventMemberJoin: Node-77314078-e399-b684-1de4-64b3a7fbd836.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00.server.serf.lan: serf: EventMemberJoin: Node-77314078-e399-b684-1de4-64b3a7fbd836 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00.server: Handled event for server in area: event=member-join server=Node-77314078-e399-b684-1de4-64b3a7fbd836.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00.server: Adding LAN server: server="Node-77314078-e399-b684-1de4-64b3a7fbd836 (Addr: tcp/127.0.0.1:16318) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00: Started DNS server: address=127.0.0.1:16313 network=udp
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00: Started DNS server: address=127.0.0.1:16313 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00: Started HTTP server: address=127.0.0.1:16314 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.667Z [INFO]  TestPreparedQuery_List/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.727Z [WARN]  TestPreparedQuery_List/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.727Z [INFO]  TestPreparedQuery_List/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:16318 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.785Z [DEBUG] TestPreparedQuery_List/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.785Z [DEBUG] TestPreparedQuery_List/#00.server.raft: vote granted: from=77314078-e399-b684-1de4-64b3a7fbd836 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.785Z [INFO]  TestPreparedQuery_List/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.785Z [INFO]  TestPreparedQuery_List/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:16318 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.785Z [INFO]  TestPreparedQuery_List/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.785Z [INFO]  TestPreparedQuery_List/#00.server: New leader elected: payload=Node-77314078-e399-b684-1de4-64b3a7fbd836
>         writer.go:29: 2020-02-23T02:46:37.792Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.806Z [INFO]  TestPreparedQuery_List/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.807Z [INFO]  TestPreparedQuery_List/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.807Z [DEBUG] TestPreparedQuery_List/#00.server: Skipping self join check for node since the cluster is too small: node=Node-77314078-e399-b684-1de4-64b3a7fbd836
>         writer.go:29: 2020-02-23T02:46:37.807Z [INFO]  TestPreparedQuery_List/#00.server: member joined, marking health alive: member=Node-77314078-e399-b684-1de4-64b3a7fbd836
>         writer.go:29: 2020-02-23T02:46:37.834Z [WARN]  TestPreparedQuery_List/#00.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:37.834Z [INFO]  TestPreparedQuery_List/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:37.834Z [INFO]  TestPreparedQuery_List/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:37.834Z [DEBUG] TestPreparedQuery_List/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.834Z [WARN]  TestPreparedQuery_List/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.834Z [ERROR] TestPreparedQuery_List/#00.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:37.834Z [DEBUG] TestPreparedQuery_List/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.836Z [WARN]  TestPreparedQuery_List/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: Stopping server: protocol=DNS address=127.0.0.1:16313 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: Stopping server: protocol=DNS address=127.0.0.1:16313 network=udp
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: Stopping server: protocol=HTTP address=127.0.0.1:16314 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:37.838Z [INFO]  TestPreparedQuery_List/#00: Endpoints down
>     --- PASS: TestPreparedQuery_List/#01 (0.41s)
>         writer.go:29: 2020-02-23T02:46:37.846Z [WARN]  TestPreparedQuery_List/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.846Z [DEBUG] TestPreparedQuery_List/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.846Z [DEBUG] TestPreparedQuery_List/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.855Z [INFO]  TestPreparedQuery_List/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:89db6886-9d8e-6f99-85e3-ca6710dc2035 Address:127.0.0.1:16336}]"
>         writer.go:29: 2020-02-23T02:46:37.855Z [INFO]  TestPreparedQuery_List/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16336 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.856Z [INFO]  TestPreparedQuery_List/#01.server.serf.wan: serf: EventMemberJoin: Node-89db6886-9d8e-6f99-85e3-ca6710dc2035.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.856Z [INFO]  TestPreparedQuery_List/#01.server.serf.lan: serf: EventMemberJoin: Node-89db6886-9d8e-6f99-85e3-ca6710dc2035 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.856Z [INFO]  TestPreparedQuery_List/#01.server: Adding LAN server: server="Node-89db6886-9d8e-6f99-85e3-ca6710dc2035 (Addr: tcp/127.0.0.1:16336) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.856Z [INFO]  TestPreparedQuery_List/#01.server: Handled event for server in area: event=member-join server=Node-89db6886-9d8e-6f99-85e3-ca6710dc2035.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.856Z [INFO]  TestPreparedQuery_List/#01: Started DNS server: address=127.0.0.1:16331 network=udp
>         writer.go:29: 2020-02-23T02:46:37.856Z [INFO]  TestPreparedQuery_List/#01: Started DNS server: address=127.0.0.1:16331 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.857Z [INFO]  TestPreparedQuery_List/#01: Started HTTP server: address=127.0.0.1:16332 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.857Z [INFO]  TestPreparedQuery_List/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.895Z [WARN]  TestPreparedQuery_List/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.896Z [INFO]  TestPreparedQuery_List/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16336 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.903Z [DEBUG] TestPreparedQuery_List/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.903Z [DEBUG] TestPreparedQuery_List/#01.server.raft: vote granted: from=89db6886-9d8e-6f99-85e3-ca6710dc2035 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.903Z [INFO]  TestPreparedQuery_List/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.903Z [INFO]  TestPreparedQuery_List/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16336 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.903Z [INFO]  TestPreparedQuery_List/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.904Z [INFO]  TestPreparedQuery_List/#01.server: New leader elected: payload=Node-89db6886-9d8e-6f99-85e3-ca6710dc2035
>         writer.go:29: 2020-02-23T02:46:37.912Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.936Z [INFO]  TestPreparedQuery_List/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.936Z [INFO]  TestPreparedQuery_List/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.936Z [DEBUG] TestPreparedQuery_List/#01.server: Skipping self join check for node since the cluster is too small: node=Node-89db6886-9d8e-6f99-85e3-ca6710dc2035
>         writer.go:29: 2020-02-23T02:46:37.936Z [INFO]  TestPreparedQuery_List/#01.server: member joined, marking health alive: member=Node-89db6886-9d8e-6f99-85e3-ca6710dc2035
>         writer.go:29: 2020-02-23T02:46:37.983Z [DEBUG] TestPreparedQuery_List/#01: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:38.033Z [INFO]  TestPreparedQuery_List/#01: Synced node info
>         writer.go:29: 2020-02-23T02:46:38.033Z [DEBUG] TestPreparedQuery_List/#01: Node info in sync
>         writer.go:29: 2020-02-23T02:46:38.249Z [WARN]  TestPreparedQuery_List/#01.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:38.250Z [INFO]  TestPreparedQuery_List/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:38.250Z [INFO]  TestPreparedQuery_List/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:38.250Z [DEBUG] TestPreparedQuery_List/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.250Z [WARN]  TestPreparedQuery_List/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.250Z [DEBUG] TestPreparedQuery_List/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.251Z [WARN]  TestPreparedQuery_List/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: Stopping server: protocol=DNS address=127.0.0.1:16331 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: Stopping server: protocol=DNS address=127.0.0.1:16331 network=udp
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: Stopping server: protocol=HTTP address=127.0.0.1:16332 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:38.253Z [INFO]  TestPreparedQuery_List/#01: Endpoints down
> === CONT  TestOperator_ServerHealth_Unhealthy
> --- PASS: TestPreparedQuery_Create (0.38s)
>     writer.go:29: 2020-02-23T02:46:37.891Z [WARN]  TestPreparedQuery_Create: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:37.891Z [DEBUG] TestPreparedQuery_Create.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:37.891Z [DEBUG] TestPreparedQuery_Create.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:37.911Z [INFO]  TestPreparedQuery_Create.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4daf7332-2641-4649-98d7-ac111c0fa4d2 Address:127.0.0.1:16342}]"
>     writer.go:29: 2020-02-23T02:46:37.911Z [INFO]  TestPreparedQuery_Create.server.raft: entering follower state: follower="Node at 127.0.0.1:16342 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:37.912Z [INFO]  TestPreparedQuery_Create.server.serf.wan: serf: EventMemberJoin: Node-4daf7332-2641-4649-98d7-ac111c0fa4d2.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:37.912Z [INFO]  TestPreparedQuery_Create.server.serf.lan: serf: EventMemberJoin: Node-4daf7332-2641-4649-98d7-ac111c0fa4d2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:37.912Z [INFO]  TestPreparedQuery_Create: Started DNS server: address=127.0.0.1:16337 network=udp
>     writer.go:29: 2020-02-23T02:46:37.912Z [INFO]  TestPreparedQuery_Create.server: Adding LAN server: server="Node-4daf7332-2641-4649-98d7-ac111c0fa4d2 (Addr: tcp/127.0.0.1:16342) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:37.912Z [INFO]  TestPreparedQuery_Create.server: Handled event for server in area: event=member-join server=Node-4daf7332-2641-4649-98d7-ac111c0fa4d2.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:37.912Z [INFO]  TestPreparedQuery_Create: Started DNS server: address=127.0.0.1:16337 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.913Z [INFO]  TestPreparedQuery_Create: Started HTTP server: address=127.0.0.1:16338 network=tcp
>     writer.go:29: 2020-02-23T02:46:37.913Z [INFO]  TestPreparedQuery_Create: started state syncer
>     writer.go:29: 2020-02-23T02:46:37.962Z [WARN]  TestPreparedQuery_Create.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:37.962Z [INFO]  TestPreparedQuery_Create.server.raft: entering candidate state: node="Node at 127.0.0.1:16342 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:38.053Z [DEBUG] TestPreparedQuery_Create.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:38.053Z [DEBUG] TestPreparedQuery_Create.server.raft: vote granted: from=4daf7332-2641-4649-98d7-ac111c0fa4d2 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:38.053Z [INFO]  TestPreparedQuery_Create.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:38.053Z [INFO]  TestPreparedQuery_Create.server.raft: entering leader state: leader="Node at 127.0.0.1:16342 [Leader]"
>     writer.go:29: 2020-02-23T02:46:38.053Z [INFO]  TestPreparedQuery_Create.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:38.053Z [INFO]  TestPreparedQuery_Create.server: New leader elected: payload=Node-4daf7332-2641-4649-98d7-ac111c0fa4d2
>     writer.go:29: 2020-02-23T02:46:38.138Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:38.146Z [INFO]  TestPreparedQuery_Create.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:38.146Z [INFO]  TestPreparedQuery_Create.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:38.146Z [DEBUG] TestPreparedQuery_Create.server: Skipping self join check for node since the cluster is too small: node=Node-4daf7332-2641-4649-98d7-ac111c0fa4d2
>     writer.go:29: 2020-02-23T02:46:38.146Z [INFO]  TestPreparedQuery_Create.server: member joined, marking health alive: member=Node-4daf7332-2641-4649-98d7-ac111c0fa4d2
>     writer.go:29: 2020-02-23T02:46:38.204Z [DEBUG] TestPreparedQuery_Create: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:38.207Z [INFO]  TestPreparedQuery_Create: Synced node info
>     writer.go:29: 2020-02-23T02:46:38.207Z [DEBUG] TestPreparedQuery_Create: Node info in sync
>     writer.go:29: 2020-02-23T02:46:38.249Z [WARN]  TestPreparedQuery_Create.server: endpoint injected; this should only be used for testing
>     writer.go:29: 2020-02-23T02:46:38.249Z [INFO]  TestPreparedQuery_Create: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:38.249Z [INFO]  TestPreparedQuery_Create.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:38.249Z [DEBUG] TestPreparedQuery_Create.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:38.249Z [WARN]  TestPreparedQuery_Create.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:38.249Z [DEBUG] TestPreparedQuery_Create.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:38.251Z [WARN]  TestPreparedQuery_Create.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: consul server down
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: shutdown complete
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: Stopping server: protocol=DNS address=127.0.0.1:16337 network=tcp
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: Stopping server: protocol=DNS address=127.0.0.1:16337 network=udp
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: Stopping server: protocol=HTTP address=127.0.0.1:16338 network=tcp
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:38.263Z [INFO]  TestPreparedQuery_Create: Endpoints down
> === CONT  TestOperator_ServerHealth
> === RUN   TestPreparedQuery_Execute/#02
> === RUN   TestPreparedQuery_Execute/#03
> === RUN   TestPreparedQuery_Execute/#04
> === RUN   TestPreparedQuery_Execute/#05
> --- PASS: TestServiceManager_PersistService_ConfigFiles (3.98s)
>     writer.go:29: 2020-02-23T02:46:35.411Z [WARN]  TestServiceManager_PersistService_ConfigFiles: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:35.412Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.412Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:35.428Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2018063e-03ee-d4a9-e5fe-c07630fc4390 Address:127.0.0.1:16186}]"
>     writer.go:29: 2020-02-23T02:46:35.429Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.serf.wan: serf: EventMemberJoin: Node-2018063e-03ee-d4a9-e5fe-c07630fc4390.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.429Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: EventMemberJoin: Node-2018063e-03ee-d4a9-e5fe-c07630fc4390 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.429Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started DNS server: address=127.0.0.1:16181 network=udp
>     writer.go:29: 2020-02-23T02:46:35.429Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.raft: entering follower state: follower="Node at 127.0.0.1:16186 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:35.429Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: Adding LAN server: server="Node-2018063e-03ee-d4a9-e5fe-c07630fc4390 (Addr: tcp/127.0.0.1:16186) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.430Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: Handled event for server in area: event=member-join server=Node-2018063e-03ee-d4a9-e5fe-c07630fc4390.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:35.430Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started DNS server: address=127.0.0.1:16181 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.438Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started HTTP server: address=127.0.0.1:16182 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.438Z [INFO]  TestServiceManager_PersistService_ConfigFiles: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.495Z [WARN]  TestServiceManager_PersistService_ConfigFiles.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:35.495Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.raft: entering candidate state: node="Node at 127.0.0.1:16186 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:35.498Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:35.498Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.raft: vote granted: from=2018063e-03ee-d4a9-e5fe-c07630fc4390 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:35.498Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:35.498Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.raft: entering leader state: leader="Node at 127.0.0.1:16186 [Leader]"
>     writer.go:29: 2020-02-23T02:46:35.498Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:35.498Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: New leader elected: payload=Node-2018063e-03ee-d4a9-e5fe-c07630fc4390
>     writer.go:29: 2020-02-23T02:46:35.505Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:35.517Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:35.517Z [INFO]  TestServiceManager_PersistService_ConfigFiles.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:35.517Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server: Skipping self join check for node since the cluster is too small: node=Node-2018063e-03ee-d4a9-e5fe-c07630fc4390
>     writer.go:29: 2020-02-23T02:46:35.517Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: member joined, marking health alive: member=Node-2018063e-03ee-d4a9-e5fe-c07630fc4390
>     writer.go:29: 2020-02-23T02:46:35.706Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:35.733Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Synced node info
>     writer.go:29: 2020-02-23T02:46:35.899Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:35.899Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: EventMemberJoin: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.899Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: added local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.899Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started DNS server: address=127.0.0.1:16223 network=udp
>     writer.go:29: 2020-02-23T02:46:35.900Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started DNS server: address=127.0.0.1:16223 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started HTTP server: address=127.0.0.1:16224 network=tcp
>     writer.go:29: 2020-02-23T02:46:35.900Z [INFO]  TestServiceManager_PersistService_ConfigFiles: started state syncer
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [ERROR] TestServiceManager_PersistService_ConfigFiles: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [ERROR] TestServiceManager_PersistService_ConfigFiles.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.900Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [ERROR] TestServiceManager_PersistService_ConfigFiles: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.901Z [ERROR] TestServiceManager_PersistService_ConfigFiles: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.901Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.902Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.900Z [INFO]  TestServiceManager_PersistService_ConfigFiles: (LAN) joining: lan_addresses=[127.0.0.1:16184]
>     writer.go:29: 2020-02-23T02:46:35.902Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.902Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.902Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.902Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:35.902Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.902Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.902Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.903Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:35.903Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:16184
>     writer.go:29: 2020-02-23T02:46:35.904Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.memberlist.lan: memberlist: Stream connection from=127.0.0.1:50564
>     writer.go:29: 2020-02-23T02:46:35.904Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: EventMemberJoin: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.904Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: member joined, marking health alive: member=Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:35.904Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: EventMemberJoin: Node-2018063e-03ee-d4a9-e5fe-c07630fc4390 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:35.904Z [INFO]  TestServiceManager_PersistService_ConfigFiles: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:35.905Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: systemd notify failed: error="No socket"
>     writer.go:29: 2020-02-23T02:46:35.905Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:35.905Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client: adding server: server="Node-2018063e-03ee-d4a9-e5fe-c07630fc4390 (Addr: tcp/127.0.0.1:16186) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:35.933Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:36.030Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageUserEventType: consul:new-leader
>     writer.go:29: 2020-02-23T02:46:36.100Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.230Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageUserEventType: consul:new-leader
>     writer.go:29: 2020-02-23T02:46:36.230Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.299Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.431Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageUserEventType: consul:new-leader
>     writer.go:29: 2020-02-23T02:46:36.431Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.432Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageUserEventType: consul:new-leader
>     writer.go:29: 2020-02-23T02:46:36.432Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.432Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.472Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:36.489Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Synced node info
>     writer.go:29: 2020-02-23T02:46:36.499Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:36.513Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Synced service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:36.513Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Node info in sync
>     writer.go:29: 2020-02-23T02:46:36.513Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Service in sync: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:36.629Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: messageJoinType: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80
>     writer.go:29: 2020-02-23T02:46:37.502Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:37.958Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:37.958Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Node info in sync
>     writer.go:29: 2020-02-23T02:46:39.345Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:39.348Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: Node info in sync
>     writer.go:29: 2020-02-23T02:46:39.351Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Synced service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:39.354Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:39.354Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:39.354Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.354Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: consul client down
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: shutdown complete
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=DNS address=127.0.0.1:16223 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=DNS address=127.0.0.1:16223 network=udp
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=HTTP address=127.0.0.1:16224 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Endpoints down
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:39.356Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:39.356Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.356Z [WARN]  TestServiceManager_PersistService_ConfigFiles.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.356Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=DiscoveryChain.Get server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.356Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=Health.ServiceNodes server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.356Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=Intention.Match server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.356Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.356Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=ConfigEntry.ResolveServiceConfig server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.356Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.356Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:39.356Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.357Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.357Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:39.357Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.357Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:39.357Z [ERROR] TestServiceManager_PersistService_ConfigFiles.client: RPC failed to server: method=ConnectCA.Roots server=127.0.0.1:16186 error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:46:39.359Z [WARN]  TestServiceManager_PersistService_ConfigFiles.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles: consul server down
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles: shutdown complete
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=DNS address=127.0.0.1:16181 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=DNS address=127.0.0.1:16181 network=udp
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=HTTP address=127.0.0.1:16182 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.360Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:39.361Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Endpoints down
>     writer.go:29: 2020-02-23T02:46:39.377Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:39.378Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: EventMemberJoin: Node-a3fa0981-147f-d286-d7c8-29508399b74d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:39.379Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: Attempting re-join to previously known node: Node-2018063e-03ee-d4a9-e5fe-c07630fc4390: 127.0.0.1:16184
>     writer.go:29: 2020-02-23T02:46:39.379Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16184: connect: connection refused
>     writer.go:29: 2020-02-23T02:46:39.379Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: Attempting re-join to previously known node: Node-7dea7cbb-616f-473b-2bc3-8f6d0c034c80: 127.0.0.1:16226
>     writer.go:29: 2020-02-23T02:46:39.379Z [DEBUG] TestServiceManager_PersistService_ConfigFiles.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16226: connect: connection refused
>     writer.go:29: 2020-02-23T02:46:39.379Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: Failed to re-join any previously known node
>     writer.go:29: 2020-02-23T02:46:39.379Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: added local registration for service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:39.379Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.380Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.380Z [ERROR] TestServiceManager_PersistService_ConfigFiles: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.380Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.381Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.381Z [ERROR] TestServiceManager_PersistService_ConfigFiles: error handling service update: error="error watching service config: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.381Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.381Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.381Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.381Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=service-http-checks:web error="invalid type for service checks response: cache.FetchResult, want: []structs.CheckType"
>     writer.go:29: 2020-02-23T02:46:39.381Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.381Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.382Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.382Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.382Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.382Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.382Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.382Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
> [...]
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started DNS server: address=127.0.0.1:16379 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.383Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
> [...]
>     writer.go:29: 2020-02-23T02:46:39.383Z [ERROR] TestServiceManager_PersistService_ConfigFiles.proxycfg: watch error: id=discovery-chain:redis error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.383Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started DNS server: address=127.0.0.1:16379 network=udp
>     writer.go:29: 2020-02-23T02:46:39.384Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Started HTTP server: address=127.0.0.1:16380 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.384Z [INFO]  TestServiceManager_PersistService_ConfigFiles: started state syncer
>     writer.go:29: 2020-02-23T02:46:39.384Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:39.384Z [ERROR] TestServiceManager_PersistService_ConfigFiles.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:39.385Z [DEBUG] TestServiceManager_PersistService_ConfigFiles: removed service: service=web-sidecar-proxy
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:39.385Z [WARN]  TestServiceManager_PersistService_ConfigFiles.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: consul client down
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: shutdown complete
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=DNS address=127.0.0.1:16379 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=DNS address=127.0.0.1:16379 network=udp
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Stopping server: protocol=HTTP address=127.0.0.1:16380 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:39.385Z [INFO]  TestServiceManager_PersistService_ConfigFiles: Endpoints down
> === CONT  TestOperator_AutopilotCASConfiguration
> === RUN   TestPreparedQuery_Execute/#06
> --- PASS: TestOperator_AutopilotCASConfiguration (0.43s)
>     writer.go:29: 2020-02-23T02:46:39.393Z [WARN]  TestOperator_AutopilotCASConfiguration: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:39.393Z [DEBUG] TestOperator_AutopilotCASConfiguration.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:39.394Z [DEBUG] TestOperator_AutopilotCASConfiguration.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:39.404Z [INFO]  TestOperator_AutopilotCASConfiguration.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:07e6ad86-2c28-14ee-e910-352a1835f771 Address:127.0.0.1:16402}]"
>     writer.go:29: 2020-02-23T02:46:39.405Z [INFO]  TestOperator_AutopilotCASConfiguration.server.serf.wan: serf: EventMemberJoin: Node-07e6ad86-2c28-14ee-e910-352a1835f771.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:39.405Z [INFO]  TestOperator_AutopilotCASConfiguration.server.serf.lan: serf: EventMemberJoin: Node-07e6ad86-2c28-14ee-e910-352a1835f771 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:39.405Z [INFO]  TestOperator_AutopilotCASConfiguration: Started DNS server: address=127.0.0.1:16397 network=udp
>     writer.go:29: 2020-02-23T02:46:39.405Z [INFO]  TestOperator_AutopilotCASConfiguration.server.raft: entering follower state: follower="Node at 127.0.0.1:16402 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:39.406Z [INFO]  TestOperator_AutopilotCASConfiguration.server: Handled event for server in area: event=member-join server=Node-07e6ad86-2c28-14ee-e910-352a1835f771.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:39.406Z [INFO]  TestOperator_AutopilotCASConfiguration: Started DNS server: address=127.0.0.1:16397 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.406Z [INFO]  TestOperator_AutopilotCASConfiguration: Started HTTP server: address=127.0.0.1:16398 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.406Z [INFO]  TestOperator_AutopilotCASConfiguration: started state syncer
>     writer.go:29: 2020-02-23T02:46:39.406Z [INFO]  TestOperator_AutopilotCASConfiguration.server: Adding LAN server: server="Node-07e6ad86-2c28-14ee-e910-352a1835f771 (Addr: tcp/127.0.0.1:16402) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:39.451Z [WARN]  TestOperator_AutopilotCASConfiguration.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:39.451Z [INFO]  TestOperator_AutopilotCASConfiguration.server.raft: entering candidate state: node="Node at 127.0.0.1:16402 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:39.454Z [DEBUG] TestOperator_AutopilotCASConfiguration.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:39.454Z [DEBUG] TestOperator_AutopilotCASConfiguration.server.raft: vote granted: from=07e6ad86-2c28-14ee-e910-352a1835f771 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:39.454Z [INFO]  TestOperator_AutopilotCASConfiguration.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:39.454Z [INFO]  TestOperator_AutopilotCASConfiguration.server.raft: entering leader state: leader="Node at 127.0.0.1:16402 [Leader]"
>     writer.go:29: 2020-02-23T02:46:39.455Z [INFO]  TestOperator_AutopilotCASConfiguration.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:39.461Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:39.462Z [INFO]  TestOperator_AutopilotCASConfiguration.server: New leader elected: payload=Node-07e6ad86-2c28-14ee-e910-352a1835f771
>     writer.go:29: 2020-02-23T02:46:39.469Z [INFO]  TestOperator_AutopilotCASConfiguration.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:39.469Z [INFO]  TestOperator_AutopilotCASConfiguration.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.469Z [DEBUG] TestOperator_AutopilotCASConfiguration.server: Skipping self join check for node since the cluster is too small: node=Node-07e6ad86-2c28-14ee-e910-352a1835f771
>     writer.go:29: 2020-02-23T02:46:39.469Z [INFO]  TestOperator_AutopilotCASConfiguration.server: member joined, marking health alive: member=Node-07e6ad86-2c28-14ee-e910-352a1835f771
>     writer.go:29: 2020-02-23T02:46:39.815Z [INFO]  TestOperator_AutopilotCASConfiguration: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:39.815Z [INFO]  TestOperator_AutopilotCASConfiguration.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:39.815Z [DEBUG] TestOperator_AutopilotCASConfiguration.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.815Z [WARN]  TestOperator_AutopilotCASConfiguration.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.815Z [ERROR] TestOperator_AutopilotCASConfiguration.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:39.815Z [DEBUG] TestOperator_AutopilotCASConfiguration.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.817Z [WARN]  TestOperator_AutopilotCASConfiguration.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.818Z [INFO]  TestOperator_AutopilotCASConfiguration.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:39.818Z [INFO]  TestOperator_AutopilotCASConfiguration: consul server down
>     writer.go:29: 2020-02-23T02:46:39.818Z [INFO]  TestOperator_AutopilotCASConfiguration: shutdown complete
>     writer.go:29: 2020-02-23T02:46:39.818Z [INFO]  TestOperator_AutopilotCASConfiguration: Stopping server: protocol=DNS address=127.0.0.1:16397 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.819Z [INFO]  TestOperator_AutopilotCASConfiguration: Stopping server: protocol=DNS address=127.0.0.1:16397 network=udp
>     writer.go:29: 2020-02-23T02:46:39.819Z [INFO]  TestOperator_AutopilotCASConfiguration: Stopping server: protocol=HTTP address=127.0.0.1:16398 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.819Z [INFO]  TestOperator_AutopilotCASConfiguration: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:39.819Z [INFO]  TestOperator_AutopilotCASConfiguration: Endpoints down
> === CONT  TestOperator_AutopilotGetConfiguration
> === RUN   TestPreparedQuery_Execute/#07
> --- PASS: TestOperator_AutopilotGetConfiguration (0.19s)
>     writer.go:29: 2020-02-23T02:46:39.867Z [WARN]  TestOperator_AutopilotGetConfiguration: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:39.868Z [DEBUG] TestOperator_AutopilotGetConfiguration.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:39.868Z [DEBUG] TestOperator_AutopilotGetConfiguration.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:39.881Z [INFO]  TestOperator_AutopilotGetConfiguration.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ee116593-9d76-25c7-ea11-51c3a85a593e Address:127.0.0.1:16408}]"
>     writer.go:29: 2020-02-23T02:46:39.881Z [INFO]  TestOperator_AutopilotGetConfiguration.server.raft: entering follower state: follower="Node at 127.0.0.1:16408 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:39.882Z [INFO]  TestOperator_AutopilotGetConfiguration.server.serf.wan: serf: EventMemberJoin: Node-ee116593-9d76-25c7-ea11-51c3a85a593e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:39.884Z [INFO]  TestOperator_AutopilotGetConfiguration.server.serf.lan: serf: EventMemberJoin: Node-ee116593-9d76-25c7-ea11-51c3a85a593e 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:39.884Z [INFO]  TestOperator_AutopilotGetConfiguration: Started DNS server: address=127.0.0.1:16403 network=udp
>     writer.go:29: 2020-02-23T02:46:39.884Z [INFO]  TestOperator_AutopilotGetConfiguration.server: Adding LAN server: server="Node-ee116593-9d76-25c7-ea11-51c3a85a593e (Addr: tcp/127.0.0.1:16408) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:39.884Z [INFO]  TestOperator_AutopilotGetConfiguration.server: Handled event for server in area: event=member-join server=Node-ee116593-9d76-25c7-ea11-51c3a85a593e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:39.885Z [INFO]  TestOperator_AutopilotGetConfiguration: Started DNS server: address=127.0.0.1:16403 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.886Z [INFO]  TestOperator_AutopilotGetConfiguration: Started HTTP server: address=127.0.0.1:16404 network=tcp
>     writer.go:29: 2020-02-23T02:46:39.886Z [INFO]  TestOperator_AutopilotGetConfiguration: started state syncer
>     writer.go:29: 2020-02-23T02:46:39.938Z [WARN]  TestOperator_AutopilotGetConfiguration.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:39.938Z [INFO]  TestOperator_AutopilotGetConfiguration.server.raft: entering candidate state: node="Node at 127.0.0.1:16408 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:39.941Z [DEBUG] TestOperator_AutopilotGetConfiguration.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:39.941Z [DEBUG] TestOperator_AutopilotGetConfiguration.server.raft: vote granted: from=ee116593-9d76-25c7-ea11-51c3a85a593e term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:39.941Z [INFO]  TestOperator_AutopilotGetConfiguration.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:39.941Z [INFO]  TestOperator_AutopilotGetConfiguration.server.raft: entering leader state: leader="Node at 127.0.0.1:16408 [Leader]"
>     writer.go:29: 2020-02-23T02:46:39.941Z [INFO]  TestOperator_AutopilotGetConfiguration.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:39.942Z [INFO]  TestOperator_AutopilotGetConfiguration.server: New leader elected: payload=Node-ee116593-9d76-25c7-ea11-51c3a85a593e
>     writer.go:29: 2020-02-23T02:46:39.949Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:39.956Z [INFO]  TestOperator_AutopilotGetConfiguration.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:39.956Z [INFO]  TestOperator_AutopilotGetConfiguration.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.956Z [DEBUG] TestOperator_AutopilotGetConfiguration.server: Skipping self join check for node since the cluster is too small: node=Node-ee116593-9d76-25c7-ea11-51c3a85a593e
>     writer.go:29: 2020-02-23T02:46:39.956Z [INFO]  TestOperator_AutopilotGetConfiguration.server: member joined, marking health alive: member=Node-ee116593-9d76-25c7-ea11-51c3a85a593e
>     writer.go:29: 2020-02-23T02:46:39.995Z [INFO]  TestOperator_AutopilotGetConfiguration: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:39.995Z [INFO]  TestOperator_AutopilotGetConfiguration.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:39.995Z [DEBUG] TestOperator_AutopilotGetConfiguration.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:39.995Z [WARN]  TestOperator_AutopilotGetConfiguration.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:39.995Z [ERROR] TestOperator_AutopilotGetConfiguration.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:39.995Z [DEBUG] TestOperator_AutopilotGetConfiguration.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.003Z [WARN]  TestOperator_AutopilotGetConfiguration.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: consul server down
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: Stopping server: protocol=DNS address=127.0.0.1:16403 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: Stopping server: protocol=DNS address=127.0.0.1:16403 network=udp
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: Stopping server: protocol=HTTP address=127.0.0.1:16404 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.005Z [INFO]  TestOperator_AutopilotGetConfiguration: Endpoints down
> === CONT  TestOperator_Keyring_LocalOnly
> --- PASS: TestOperator_Keyring_LocalOnly (0.17s)
>     writer.go:29: 2020-02-23T02:46:40.012Z [WARN]  TestOperator_Keyring_LocalOnly: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.013Z [DEBUG] TestOperator_Keyring_LocalOnly.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.014Z [DEBUG] TestOperator_Keyring_LocalOnly.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.026Z [INFO]  TestOperator_Keyring_LocalOnly.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b3722bd2-4ea8-af82-88eb-a148463a3d10 Address:127.0.0.1:16426}]"
>     writer.go:29: 2020-02-23T02:46:40.027Z [INFO]  TestOperator_Keyring_LocalOnly.server.serf.wan: serf: EventMemberJoin: Node-b3722bd2-4ea8-af82-88eb-a148463a3d10.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.027Z [INFO]  TestOperator_Keyring_LocalOnly.server.serf.lan: serf: EventMemberJoin: Node-b3722bd2-4ea8-af82-88eb-a148463a3d10 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.027Z [INFO]  TestOperator_Keyring_LocalOnly: Started DNS server: address=127.0.0.1:16421 network=udp
>     writer.go:29: 2020-02-23T02:46:40.027Z [INFO]  TestOperator_Keyring_LocalOnly.server.raft: entering follower state: follower="Node at 127.0.0.1:16426 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.028Z [INFO]  TestOperator_Keyring_LocalOnly.server: Adding LAN server: server="Node-b3722bd2-4ea8-af82-88eb-a148463a3d10 (Addr: tcp/127.0.0.1:16426) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.028Z [INFO]  TestOperator_Keyring_LocalOnly.server: Handled event for server in area: event=member-join server=Node-b3722bd2-4ea8-af82-88eb-a148463a3d10.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.028Z [INFO]  TestOperator_Keyring_LocalOnly: Started DNS server: address=127.0.0.1:16421 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.028Z [INFO]  TestOperator_Keyring_LocalOnly: Started HTTP server: address=127.0.0.1:16422 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.028Z [INFO]  TestOperator_Keyring_LocalOnly: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.087Z [WARN]  TestOperator_Keyring_LocalOnly.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.087Z [INFO]  TestOperator_Keyring_LocalOnly.server.raft: entering candidate state: node="Node at 127.0.0.1:16426 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.090Z [DEBUG] TestOperator_Keyring_LocalOnly.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.090Z [DEBUG] TestOperator_Keyring_LocalOnly.server.raft: vote granted: from=b3722bd2-4ea8-af82-88eb-a148463a3d10 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.090Z [INFO]  TestOperator_Keyring_LocalOnly.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.090Z [INFO]  TestOperator_Keyring_LocalOnly.server.raft: entering leader state: leader="Node at 127.0.0.1:16426 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.090Z [INFO]  TestOperator_Keyring_LocalOnly.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.090Z [INFO]  TestOperator_Keyring_LocalOnly.server: New leader elected: payload=Node-b3722bd2-4ea8-af82-88eb-a148463a3d10
>     writer.go:29: 2020-02-23T02:46:40.098Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.106Z [INFO]  TestOperator_Keyring_LocalOnly.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.106Z [INFO]  TestOperator_Keyring_LocalOnly.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.106Z [DEBUG] TestOperator_Keyring_LocalOnly.server: Skipping self join check for node since the cluster is too small: node=Node-b3722bd2-4ea8-af82-88eb-a148463a3d10
>     writer.go:29: 2020-02-23T02:46:40.106Z [INFO]  TestOperator_Keyring_LocalOnly.server: member joined, marking health alive: member=Node-b3722bd2-4ea8-af82-88eb-a148463a3d10
>     writer.go:29: 2020-02-23T02:46:40.167Z [INFO]  TestOperator_Keyring_LocalOnly.server.serf.wan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.167Z [DEBUG] TestOperator_Keyring_LocalOnly.server.serf.wan: serf: messageQueryResponseType: Node-b3722bd2-4ea8-af82-88eb-a148463a3d10.dc1
>     writer.go:29: 2020-02-23T02:46:40.168Z [DEBUG] TestOperator_Keyring_LocalOnly.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.168Z [INFO]  TestOperator_Keyring_LocalOnly.server.serf.lan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.168Z [DEBUG] TestOperator_Keyring_LocalOnly.server.serf.lan: serf: messageQueryResponseType: Node-b3722bd2-4ea8-af82-88eb-a148463a3d10
>     writer.go:29: 2020-02-23T02:46:40.168Z [INFO]  TestOperator_Keyring_LocalOnly: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.168Z [INFO]  TestOperator_Keyring_LocalOnly.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.168Z [DEBUG] TestOperator_Keyring_LocalOnly.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.168Z [WARN]  TestOperator_Keyring_LocalOnly.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.168Z [ERROR] TestOperator_Keyring_LocalOnly.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:40.168Z [DEBUG] TestOperator_Keyring_LocalOnly.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.170Z [WARN]  TestOperator_Keyring_LocalOnly.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: consul server down
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: Stopping server: protocol=DNS address=127.0.0.1:16421 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: Stopping server: protocol=DNS address=127.0.0.1:16421 network=udp
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: Stopping server: protocol=HTTP address=127.0.0.1:16422 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.172Z [INFO]  TestOperator_Keyring_LocalOnly: Endpoints down
> === CONT  TestOperator_Keyring_InvalidRelayFactor
> --- PASS: TestPreparedQuery_Execute (2.60s)
>     --- PASS: TestPreparedQuery_Execute/#00 (0.41s)
>         writer.go:29: 2020-02-23T02:46:37.648Z [WARN]  TestPreparedQuery_Execute/#00: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:37.648Z [DEBUG] TestPreparedQuery_Execute/#00.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:37.649Z [DEBUG] TestPreparedQuery_Execute/#00.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:37.664Z [INFO]  TestPreparedQuery_Execute/#00.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a501c6b5-ba91-bb67-9cab-5b1629402a8e Address:127.0.0.1:16324}]"
>         writer.go:29: 2020-02-23T02:46:37.664Z [INFO]  TestPreparedQuery_Execute/#00.server.raft: entering follower state: follower="Node at 127.0.0.1:16324 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:37.665Z [INFO]  TestPreparedQuery_Execute/#00.server.serf.wan: serf: EventMemberJoin: Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.665Z [INFO]  TestPreparedQuery_Execute/#00.server.serf.lan: serf: EventMemberJoin: Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:37.665Z [INFO]  TestPreparedQuery_Execute/#00.server: Adding LAN server: server="Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e (Addr: tcp/127.0.0.1:16324) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:37.665Z [INFO]  TestPreparedQuery_Execute/#00.server: Handled event for server in area: event=member-join server=Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:37.665Z [INFO]  TestPreparedQuery_Execute/#00: Started DNS server: address=127.0.0.1:16319 network=udp
>         writer.go:29: 2020-02-23T02:46:37.665Z [INFO]  TestPreparedQuery_Execute/#00: Started DNS server: address=127.0.0.1:16319 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.666Z [INFO]  TestPreparedQuery_Execute/#00: Started HTTP server: address=127.0.0.1:16320 network=tcp
>         writer.go:29: 2020-02-23T02:46:37.666Z [INFO]  TestPreparedQuery_Execute/#00: started state syncer
>         writer.go:29: 2020-02-23T02:46:37.714Z [WARN]  TestPreparedQuery_Execute/#00.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:37.714Z [INFO]  TestPreparedQuery_Execute/#00.server.raft: entering candidate state: node="Node at 127.0.0.1:16324 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:37.783Z [DEBUG] TestPreparedQuery_Execute/#00.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:37.783Z [DEBUG] TestPreparedQuery_Execute/#00.server.raft: vote granted: from=a501c6b5-ba91-bb67-9cab-5b1629402a8e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:37.783Z [INFO]  TestPreparedQuery_Execute/#00.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:37.783Z [INFO]  TestPreparedQuery_Execute/#00.server.raft: entering leader state: leader="Node at 127.0.0.1:16324 [Leader]"
>         writer.go:29: 2020-02-23T02:46:37.783Z [INFO]  TestPreparedQuery_Execute/#00.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:37.784Z [INFO]  TestPreparedQuery_Execute/#00.server: New leader elected: payload=Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e
>         writer.go:29: 2020-02-23T02:46:37.792Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:37.806Z [INFO]  TestPreparedQuery_Execute/#00.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:37.806Z [INFO]  TestPreparedQuery_Execute/#00.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:37.806Z [DEBUG] TestPreparedQuery_Execute/#00.server: Skipping self join check for node since the cluster is too small: node=Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e
>         writer.go:29: 2020-02-23T02:46:37.807Z [INFO]  TestPreparedQuery_Execute/#00.server: member joined, marking health alive: member=Node-a501c6b5-ba91-bb67-9cab-5b1629402a8e
>         writer.go:29: 2020-02-23T02:46:37.991Z [DEBUG] TestPreparedQuery_Execute/#00: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:38.001Z [WARN]  TestPreparedQuery_Execute/#00.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:38.001Z [INFO]  TestPreparedQuery_Execute/#00: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:38.001Z [INFO]  TestPreparedQuery_Execute/#00.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:38.001Z [DEBUG] TestPreparedQuery_Execute/#00.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.001Z [WARN]  TestPreparedQuery_Execute/#00.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.001Z [DEBUG] TestPreparedQuery_Execute/#00.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.022Z [WARN]  TestPreparedQuery_Execute/#00.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.033Z [INFO]  TestPreparedQuery_Execute/#00: Synced node info
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: consul server down
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: shutdown complete
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: Stopping server: protocol=DNS address=127.0.0.1:16319 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: Stopping server: protocol=DNS address=127.0.0.1:16319 network=udp
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: Stopping server: protocol=HTTP address=127.0.0.1:16320 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:38.047Z [INFO]  TestPreparedQuery_Execute/#00: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#01 (0.30s)
>         writer.go:29: 2020-02-23T02:46:38.055Z [WARN]  TestPreparedQuery_Execute/#01: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:38.055Z [DEBUG] TestPreparedQuery_Execute/#01.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:38.056Z [DEBUG] TestPreparedQuery_Execute/#01.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:38.135Z [INFO]  TestPreparedQuery_Execute/#01.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5e328aec-861b-5a91-5276-f82adeddf935 Address:127.0.0.1:16348}]"
>         writer.go:29: 2020-02-23T02:46:38.135Z [INFO]  TestPreparedQuery_Execute/#01.server.serf.wan: serf: EventMemberJoin: Node-5e328aec-861b-5a91-5276-f82adeddf935.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.136Z [INFO]  TestPreparedQuery_Execute/#01.server.serf.lan: serf: EventMemberJoin: Node-5e328aec-861b-5a91-5276-f82adeddf935 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.136Z [INFO]  TestPreparedQuery_Execute/#01: Started DNS server: address=127.0.0.1:16343 network=udp
>         writer.go:29: 2020-02-23T02:46:38.136Z [INFO]  TestPreparedQuery_Execute/#01.server.raft: entering follower state: follower="Node at 127.0.0.1:16348 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:38.136Z [INFO]  TestPreparedQuery_Execute/#01.server: Adding LAN server: server="Node-5e328aec-861b-5a91-5276-f82adeddf935 (Addr: tcp/127.0.0.1:16348) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:38.136Z [INFO]  TestPreparedQuery_Execute/#01.server: Handled event for server in area: event=member-join server=Node-5e328aec-861b-5a91-5276-f82adeddf935.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:38.136Z [INFO]  TestPreparedQuery_Execute/#01: Started DNS server: address=127.0.0.1:16343 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.137Z [INFO]  TestPreparedQuery_Execute/#01: Started HTTP server: address=127.0.0.1:16344 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.137Z [INFO]  TestPreparedQuery_Execute/#01: started state syncer
>         writer.go:29: 2020-02-23T02:46:38.173Z [WARN]  TestPreparedQuery_Execute/#01.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:38.173Z [INFO]  TestPreparedQuery_Execute/#01.server.raft: entering candidate state: node="Node at 127.0.0.1:16348 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:38.176Z [DEBUG] TestPreparedQuery_Execute/#01.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:38.176Z [DEBUG] TestPreparedQuery_Execute/#01.server.raft: vote granted: from=5e328aec-861b-5a91-5276-f82adeddf935 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:38.176Z [INFO]  TestPreparedQuery_Execute/#01.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:38.176Z [INFO]  TestPreparedQuery_Execute/#01.server.raft: entering leader state: leader="Node at 127.0.0.1:16348 [Leader]"
>         writer.go:29: 2020-02-23T02:46:38.176Z [INFO]  TestPreparedQuery_Execute/#01.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:38.176Z [INFO]  TestPreparedQuery_Execute/#01.server: New leader elected: payload=Node-5e328aec-861b-5a91-5276-f82adeddf935
>         writer.go:29: 2020-02-23T02:46:38.191Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:38.202Z [INFO]  TestPreparedQuery_Execute/#01.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:38.202Z [INFO]  TestPreparedQuery_Execute/#01.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.202Z [DEBUG] TestPreparedQuery_Execute/#01.server: Skipping self join check for node since the cluster is too small: node=Node-5e328aec-861b-5a91-5276-f82adeddf935
>         writer.go:29: 2020-02-23T02:46:38.202Z [INFO]  TestPreparedQuery_Execute/#01.server: member joined, marking health alive: member=Node-5e328aec-861b-5a91-5276-f82adeddf935
>         writer.go:29: 2020-02-23T02:46:38.345Z [WARN]  TestPreparedQuery_Execute/#01.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:38.345Z [INFO]  TestPreparedQuery_Execute/#01: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:38.345Z [INFO]  TestPreparedQuery_Execute/#01.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:38.345Z [DEBUG] TestPreparedQuery_Execute/#01.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.345Z [WARN]  TestPreparedQuery_Execute/#01.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.345Z [ERROR] TestPreparedQuery_Execute/#01.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:38.345Z [DEBUG] TestPreparedQuery_Execute/#01.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.347Z [WARN]  TestPreparedQuery_Execute/#01.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: consul server down
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: shutdown complete
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: Stopping server: protocol=DNS address=127.0.0.1:16343 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: Stopping server: protocol=DNS address=127.0.0.1:16343 network=udp
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: Stopping server: protocol=HTTP address=127.0.0.1:16344 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:38.349Z [INFO]  TestPreparedQuery_Execute/#01: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#02 (0.48s)
>         writer.go:29: 2020-02-23T02:46:38.357Z [WARN]  TestPreparedQuery_Execute/#02: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:38.357Z [DEBUG] TestPreparedQuery_Execute/#02.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:38.357Z [DEBUG] TestPreparedQuery_Execute/#02.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:38.367Z [INFO]  TestPreparedQuery_Execute/#02.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4 Address:127.0.0.1:16366}]"
>         writer.go:29: 2020-02-23T02:46:38.368Z [INFO]  TestPreparedQuery_Execute/#02.server.serf.wan: serf: EventMemberJoin: Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.368Z [INFO]  TestPreparedQuery_Execute/#02.server.raft: entering follower state: follower="Node at 127.0.0.1:16366 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:38.368Z [INFO]  TestPreparedQuery_Execute/#02.server.serf.lan: serf: EventMemberJoin: Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.368Z [INFO]  TestPreparedQuery_Execute/#02.server: Handled event for server in area: event=member-join server=Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:38.368Z [INFO]  TestPreparedQuery_Execute/#02.server: Adding LAN server: server="Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4 (Addr: tcp/127.0.0.1:16366) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:38.368Z [INFO]  TestPreparedQuery_Execute/#02: Started DNS server: address=127.0.0.1:16361 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.369Z [INFO]  TestPreparedQuery_Execute/#02: Started DNS server: address=127.0.0.1:16361 network=udp
>         writer.go:29: 2020-02-23T02:46:38.369Z [INFO]  TestPreparedQuery_Execute/#02: Started HTTP server: address=127.0.0.1:16362 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.369Z [INFO]  TestPreparedQuery_Execute/#02: started state syncer
>         writer.go:29: 2020-02-23T02:46:38.423Z [WARN]  TestPreparedQuery_Execute/#02.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:38.423Z [INFO]  TestPreparedQuery_Execute/#02.server.raft: entering candidate state: node="Node at 127.0.0.1:16366 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:38.426Z [DEBUG] TestPreparedQuery_Execute/#02.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:38.426Z [DEBUG] TestPreparedQuery_Execute/#02.server.raft: vote granted: from=fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:38.426Z [INFO]  TestPreparedQuery_Execute/#02.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:38.426Z [INFO]  TestPreparedQuery_Execute/#02.server.raft: entering leader state: leader="Node at 127.0.0.1:16366 [Leader]"
>         writer.go:29: 2020-02-23T02:46:38.426Z [INFO]  TestPreparedQuery_Execute/#02.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:38.426Z [INFO]  TestPreparedQuery_Execute/#02.server: New leader elected: payload=Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4
>         writer.go:29: 2020-02-23T02:46:38.434Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:38.442Z [INFO]  TestPreparedQuery_Execute/#02.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:38.442Z [INFO]  TestPreparedQuery_Execute/#02.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.442Z [DEBUG] TestPreparedQuery_Execute/#02.server: Skipping self join check for node since the cluster is too small: node=Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4
>         writer.go:29: 2020-02-23T02:46:38.442Z [INFO]  TestPreparedQuery_Execute/#02.server: member joined, marking health alive: member=Node-fb9636c1-30fb-81a6-fb90-ef1f1cd56fd4
>         writer.go:29: 2020-02-23T02:46:38.650Z [DEBUG] TestPreparedQuery_Execute/#02: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:38.652Z [INFO]  TestPreparedQuery_Execute/#02: Synced node info
>         writer.go:29: 2020-02-23T02:46:38.652Z [DEBUG] TestPreparedQuery_Execute/#02: Node info in sync
>         writer.go:29: 2020-02-23T02:46:38.825Z [WARN]  TestPreparedQuery_Execute/#02.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:38.825Z [INFO]  TestPreparedQuery_Execute/#02: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:38.825Z [INFO]  TestPreparedQuery_Execute/#02.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:38.825Z [DEBUG] TestPreparedQuery_Execute/#02.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.825Z [WARN]  TestPreparedQuery_Execute/#02.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.825Z [DEBUG] TestPreparedQuery_Execute/#02.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.827Z [WARN]  TestPreparedQuery_Execute/#02.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: consul server down
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: shutdown complete
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: Stopping server: protocol=DNS address=127.0.0.1:16361 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: Stopping server: protocol=DNS address=127.0.0.1:16361 network=udp
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: Stopping server: protocol=HTTP address=127.0.0.1:16362 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:38.829Z [INFO]  TestPreparedQuery_Execute/#02: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#03 (0.08s)
>         writer.go:29: 2020-02-23T02:46:38.837Z [WARN]  TestPreparedQuery_Execute/#03: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:38.837Z [DEBUG] TestPreparedQuery_Execute/#03.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:38.837Z [DEBUG] TestPreparedQuery_Execute/#03.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:38.847Z [INFO]  TestPreparedQuery_Execute/#03.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5ff7ecb7-4879-3dbf-9cc7-08646af158e3 Address:127.0.0.1:16372}]"
>         writer.go:29: 2020-02-23T02:46:38.847Z [INFO]  TestPreparedQuery_Execute/#03.server.serf.wan: serf: EventMemberJoin: Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.848Z [INFO]  TestPreparedQuery_Execute/#03.server.serf.lan: serf: EventMemberJoin: Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.848Z [INFO]  TestPreparedQuery_Execute/#03: Started DNS server: address=127.0.0.1:16367 network=udp
>         writer.go:29: 2020-02-23T02:46:38.848Z [INFO]  TestPreparedQuery_Execute/#03.server.raft: entering follower state: follower="Node at 127.0.0.1:16372 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:38.848Z [INFO]  TestPreparedQuery_Execute/#03.server: Adding LAN server: server="Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3 (Addr: tcp/127.0.0.1:16372) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:38.848Z [INFO]  TestPreparedQuery_Execute/#03.server: Handled event for server in area: event=member-join server=Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:38.848Z [INFO]  TestPreparedQuery_Execute/#03: Started DNS server: address=127.0.0.1:16367 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.849Z [INFO]  TestPreparedQuery_Execute/#03: Started HTTP server: address=127.0.0.1:16368 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.849Z [INFO]  TestPreparedQuery_Execute/#03: started state syncer
>         writer.go:29: 2020-02-23T02:46:38.884Z [WARN]  TestPreparedQuery_Execute/#03.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:38.884Z [INFO]  TestPreparedQuery_Execute/#03.server.raft: entering candidate state: node="Node at 127.0.0.1:16372 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:38.887Z [DEBUG] TestPreparedQuery_Execute/#03.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:38.887Z [DEBUG] TestPreparedQuery_Execute/#03.server.raft: vote granted: from=5ff7ecb7-4879-3dbf-9cc7-08646af158e3 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:38.887Z [INFO]  TestPreparedQuery_Execute/#03.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:38.887Z [INFO]  TestPreparedQuery_Execute/#03.server.raft: entering leader state: leader="Node at 127.0.0.1:16372 [Leader]"
>         writer.go:29: 2020-02-23T02:46:38.888Z [INFO]  TestPreparedQuery_Execute/#03.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:38.888Z [INFO]  TestPreparedQuery_Execute/#03.server: New leader elected: payload=Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3
>         writer.go:29: 2020-02-23T02:46:38.895Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:38.903Z [INFO]  TestPreparedQuery_Execute/#03.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:38.903Z [INFO]  TestPreparedQuery_Execute/#03.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.903Z [DEBUG] TestPreparedQuery_Execute/#03.server: Skipping self join check for node since the cluster is too small: node=Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3
>         writer.go:29: 2020-02-23T02:46:38.903Z [INFO]  TestPreparedQuery_Execute/#03.server: member joined, marking health alive: member=Node-5ff7ecb7-4879-3dbf-9cc7-08646af158e3
>         writer.go:29: 2020-02-23T02:46:38.907Z [WARN]  TestPreparedQuery_Execute/#03.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:38.907Z [INFO]  TestPreparedQuery_Execute/#03: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:38.907Z [INFO]  TestPreparedQuery_Execute/#03.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:38.907Z [DEBUG] TestPreparedQuery_Execute/#03.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.907Z [WARN]  TestPreparedQuery_Execute/#03.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.907Z [ERROR] TestPreparedQuery_Execute/#03.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:38.907Z [DEBUG] TestPreparedQuery_Execute/#03.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:38.909Z [WARN]  TestPreparedQuery_Execute/#03.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: consul server down
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: shutdown complete
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: Stopping server: protocol=DNS address=127.0.0.1:16367 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: Stopping server: protocol=DNS address=127.0.0.1:16367 network=udp
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: Stopping server: protocol=HTTP address=127.0.0.1:16368 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:38.910Z [INFO]  TestPreparedQuery_Execute/#03: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#04 (0.46s)
>         writer.go:29: 2020-02-23T02:46:38.918Z [WARN]  TestPreparedQuery_Execute/#04: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:38.918Z [DEBUG] TestPreparedQuery_Execute/#04.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:38.919Z [DEBUG] TestPreparedQuery_Execute/#04.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:38.934Z [INFO]  TestPreparedQuery_Execute/#04.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90 Address:127.0.0.1:16378}]"
>         writer.go:29: 2020-02-23T02:46:38.934Z [INFO]  TestPreparedQuery_Execute/#04.server.raft: entering follower state: follower="Node at 127.0.0.1:16378 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:38.934Z [INFO]  TestPreparedQuery_Execute/#04.server.serf.wan: serf: EventMemberJoin: Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.935Z [INFO]  TestPreparedQuery_Execute/#04.server.serf.lan: serf: EventMemberJoin: Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:38.935Z [INFO]  TestPreparedQuery_Execute/#04.server: Handled event for server in area: event=member-join server=Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:38.935Z [INFO]  TestPreparedQuery_Execute/#04.server: Adding LAN server: server="Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90 (Addr: tcp/127.0.0.1:16378) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:38.935Z [INFO]  TestPreparedQuery_Execute/#04: Started DNS server: address=127.0.0.1:16373 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.935Z [INFO]  TestPreparedQuery_Execute/#04: Started DNS server: address=127.0.0.1:16373 network=udp
>         writer.go:29: 2020-02-23T02:46:38.936Z [INFO]  TestPreparedQuery_Execute/#04: Started HTTP server: address=127.0.0.1:16374 network=tcp
>         writer.go:29: 2020-02-23T02:46:38.936Z [INFO]  TestPreparedQuery_Execute/#04: started state syncer
>         writer.go:29: 2020-02-23T02:46:38.984Z [WARN]  TestPreparedQuery_Execute/#04.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:38.984Z [INFO]  TestPreparedQuery_Execute/#04.server.raft: entering candidate state: node="Node at 127.0.0.1:16378 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:38.987Z [DEBUG] TestPreparedQuery_Execute/#04.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:38.987Z [DEBUG] TestPreparedQuery_Execute/#04.server.raft: vote granted: from=34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:38.987Z [INFO]  TestPreparedQuery_Execute/#04.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:38.987Z [INFO]  TestPreparedQuery_Execute/#04.server.raft: entering leader state: leader="Node at 127.0.0.1:16378 [Leader]"
>         writer.go:29: 2020-02-23T02:46:38.987Z [INFO]  TestPreparedQuery_Execute/#04.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:38.987Z [INFO]  TestPreparedQuery_Execute/#04.server: New leader elected: payload=Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90
>         writer.go:29: 2020-02-23T02:46:38.995Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:39.003Z [INFO]  TestPreparedQuery_Execute/#04.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:39.003Z [INFO]  TestPreparedQuery_Execute/#04.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.003Z [DEBUG] TestPreparedQuery_Execute/#04.server: Skipping self join check for node since the cluster is too small: node=Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90
>         writer.go:29: 2020-02-23T02:46:39.003Z [INFO]  TestPreparedQuery_Execute/#04.server: member joined, marking health alive: member=Node-34a80f7e-9d1b-1e97-ecb5-cfd38fd72e90
>         writer.go:29: 2020-02-23T02:46:39.179Z [DEBUG] TestPreparedQuery_Execute/#04: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:39.182Z [INFO]  TestPreparedQuery_Execute/#04: Synced node info
>         writer.go:29: 2020-02-23T02:46:39.182Z [DEBUG] TestPreparedQuery_Execute/#04: Node info in sync
>         writer.go:29: 2020-02-23T02:46:39.359Z [WARN]  TestPreparedQuery_Execute/#04.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:39.359Z [INFO]  TestPreparedQuery_Execute/#04: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:39.359Z [INFO]  TestPreparedQuery_Execute/#04.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:39.359Z [DEBUG] TestPreparedQuery_Execute/#04.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.359Z [WARN]  TestPreparedQuery_Execute/#04.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:39.359Z [DEBUG] TestPreparedQuery_Execute/#04.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.365Z [WARN]  TestPreparedQuery_Execute/#04.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: consul server down
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: shutdown complete
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: Stopping server: protocol=DNS address=127.0.0.1:16373 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: Stopping server: protocol=DNS address=127.0.0.1:16373 network=udp
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: Stopping server: protocol=HTTP address=127.0.0.1:16374 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:39.367Z [INFO]  TestPreparedQuery_Execute/#04: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#05 (0.35s)
>         writer.go:29: 2020-02-23T02:46:39.377Z [WARN]  TestPreparedQuery_Execute/#05: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:39.377Z [DEBUG] TestPreparedQuery_Execute/#05.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:39.378Z [DEBUG] TestPreparedQuery_Execute/#05.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:39.401Z [INFO]  TestPreparedQuery_Execute/#05.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:161c74a1-491c-d627-64a5-f9bf4efd5d37 Address:127.0.0.1:16390}]"
>         writer.go:29: 2020-02-23T02:46:39.402Z [INFO]  TestPreparedQuery_Execute/#05.server.serf.wan: serf: EventMemberJoin: Node-161c74a1-491c-d627-64a5-f9bf4efd5d37.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:39.402Z [INFO]  TestPreparedQuery_Execute/#05.server.raft: entering follower state: follower="Node at 127.0.0.1:16390 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:39.403Z [INFO]  TestPreparedQuery_Execute/#05.server.serf.lan: serf: EventMemberJoin: Node-161c74a1-491c-d627-64a5-f9bf4efd5d37 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:39.403Z [INFO]  TestPreparedQuery_Execute/#05.server: Adding LAN server: server="Node-161c74a1-491c-d627-64a5-f9bf4efd5d37 (Addr: tcp/127.0.0.1:16390) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:39.403Z [INFO]  TestPreparedQuery_Execute/#05.server: Handled event for server in area: event=member-join server=Node-161c74a1-491c-d627-64a5-f9bf4efd5d37.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:39.403Z [INFO]  TestPreparedQuery_Execute/#05: Started DNS server: address=127.0.0.1:16385 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.403Z [INFO]  TestPreparedQuery_Execute/#05: Started DNS server: address=127.0.0.1:16385 network=udp
>         writer.go:29: 2020-02-23T02:46:39.404Z [INFO]  TestPreparedQuery_Execute/#05: Started HTTP server: address=127.0.0.1:16386 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.404Z [INFO]  TestPreparedQuery_Execute/#05: started state syncer
>         writer.go:29: 2020-02-23T02:46:39.463Z [WARN]  TestPreparedQuery_Execute/#05.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:39.463Z [INFO]  TestPreparedQuery_Execute/#05.server.raft: entering candidate state: node="Node at 127.0.0.1:16390 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:39.468Z [DEBUG] TestPreparedQuery_Execute/#05.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:39.468Z [DEBUG] TestPreparedQuery_Execute/#05.server.raft: vote granted: from=161c74a1-491c-d627-64a5-f9bf4efd5d37 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:39.468Z [INFO]  TestPreparedQuery_Execute/#05.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:39.468Z [INFO]  TestPreparedQuery_Execute/#05.server.raft: entering leader state: leader="Node at 127.0.0.1:16390 [Leader]"
>         writer.go:29: 2020-02-23T02:46:39.468Z [INFO]  TestPreparedQuery_Execute/#05.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:39.468Z [INFO]  TestPreparedQuery_Execute/#05.server: New leader elected: payload=Node-161c74a1-491c-d627-64a5-f9bf4efd5d37
>         writer.go:29: 2020-02-23T02:46:39.476Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:39.484Z [INFO]  TestPreparedQuery_Execute/#05.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:39.484Z [INFO]  TestPreparedQuery_Execute/#05.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.484Z [DEBUG] TestPreparedQuery_Execute/#05.server: Skipping self join check for node since the cluster is too small: node=Node-161c74a1-491c-d627-64a5-f9bf4efd5d37
>         writer.go:29: 2020-02-23T02:46:39.484Z [INFO]  TestPreparedQuery_Execute/#05.server: member joined, marking health alive: member=Node-161c74a1-491c-d627-64a5-f9bf4efd5d37
>         writer.go:29: 2020-02-23T02:46:39.714Z [WARN]  TestPreparedQuery_Execute/#05.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:39.714Z [INFO]  TestPreparedQuery_Execute/#05: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:39.714Z [INFO]  TestPreparedQuery_Execute/#05.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:39.714Z [DEBUG] TestPreparedQuery_Execute/#05.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.714Z [WARN]  TestPreparedQuery_Execute/#05.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:39.714Z [ERROR] TestPreparedQuery_Execute/#05.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:39.714Z [DEBUG] TestPreparedQuery_Execute/#05.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.716Z [WARN]  TestPreparedQuery_Execute/#05.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: consul server down
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: shutdown complete
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: Stopping server: protocol=DNS address=127.0.0.1:16385 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: Stopping server: protocol=DNS address=127.0.0.1:16385 network=udp
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: Stopping server: protocol=HTTP address=127.0.0.1:16386 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:39.718Z [INFO]  TestPreparedQuery_Execute/#05: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#06 (0.28s)
>         writer.go:29: 2020-02-23T02:46:39.726Z [WARN]  TestPreparedQuery_Execute/#06: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:39.726Z [DEBUG] TestPreparedQuery_Execute/#06.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:39.726Z [DEBUG] TestPreparedQuery_Execute/#06.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:39.740Z [INFO]  TestPreparedQuery_Execute/#06.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ff973409-b9f9-6204-1081-1e620cb2f2d4 Address:127.0.0.1:16396}]"
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06.server.serf.wan: serf: EventMemberJoin: Node-ff973409-b9f9-6204-1081-1e620cb2f2d4.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06.server.serf.lan: serf: EventMemberJoin: Node-ff973409-b9f9-6204-1081-1e620cb2f2d4 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06: Started DNS server: address=127.0.0.1:16391 network=udp
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06.server.raft: entering follower state: follower="Node at 127.0.0.1:16396 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06.server: Adding LAN server: server="Node-ff973409-b9f9-6204-1081-1e620cb2f2d4 (Addr: tcp/127.0.0.1:16396) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06.server: Handled event for server in area: event=member-join server=Node-ff973409-b9f9-6204-1081-1e620cb2f2d4.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:39.741Z [INFO]  TestPreparedQuery_Execute/#06: Started DNS server: address=127.0.0.1:16391 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.742Z [INFO]  TestPreparedQuery_Execute/#06: Started HTTP server: address=127.0.0.1:16392 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.742Z [INFO]  TestPreparedQuery_Execute/#06: started state syncer
>         writer.go:29: 2020-02-23T02:46:39.795Z [WARN]  TestPreparedQuery_Execute/#06.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:39.795Z [INFO]  TestPreparedQuery_Execute/#06.server.raft: entering candidate state: node="Node at 127.0.0.1:16396 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:39.799Z [DEBUG] TestPreparedQuery_Execute/#06.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:39.799Z [DEBUG] TestPreparedQuery_Execute/#06.server.raft: vote granted: from=ff973409-b9f9-6204-1081-1e620cb2f2d4 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:39.799Z [INFO]  TestPreparedQuery_Execute/#06.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:39.799Z [INFO]  TestPreparedQuery_Execute/#06.server.raft: entering leader state: leader="Node at 127.0.0.1:16396 [Leader]"
>         writer.go:29: 2020-02-23T02:46:39.799Z [INFO]  TestPreparedQuery_Execute/#06.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:39.799Z [INFO]  TestPreparedQuery_Execute/#06.server: New leader elected: payload=Node-ff973409-b9f9-6204-1081-1e620cb2f2d4
>         writer.go:29: 2020-02-23T02:46:39.806Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:39.814Z [INFO]  TestPreparedQuery_Execute/#06.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:39.814Z [INFO]  TestPreparedQuery_Execute/#06.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.814Z [DEBUG] TestPreparedQuery_Execute/#06.server: Skipping self join check for node since the cluster is too small: node=Node-ff973409-b9f9-6204-1081-1e620cb2f2d4
>         writer.go:29: 2020-02-23T02:46:39.814Z [INFO]  TestPreparedQuery_Execute/#06.server: member joined, marking health alive: member=Node-ff973409-b9f9-6204-1081-1e620cb2f2d4
>         writer.go:29: 2020-02-23T02:46:39.993Z [WARN]  TestPreparedQuery_Execute/#06.server: endpoint injected; this should only be used for testing
>         writer.go:29: 2020-02-23T02:46:39.993Z [INFO]  TestPreparedQuery_Execute/#06: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:39.993Z [INFO]  TestPreparedQuery_Execute/#06.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:39.993Z [DEBUG] TestPreparedQuery_Execute/#06.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.993Z [WARN]  TestPreparedQuery_Execute/#06.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:39.993Z [ERROR] TestPreparedQuery_Execute/#06.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:39.993Z [DEBUG] TestPreparedQuery_Execute/#06.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:39.995Z [WARN]  TestPreparedQuery_Execute/#06.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: consul server down
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: shutdown complete
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: Stopping server: protocol=DNS address=127.0.0.1:16391 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: Stopping server: protocol=DNS address=127.0.0.1:16391 network=udp
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: Stopping server: protocol=HTTP address=127.0.0.1:16392 network=tcp
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:39.997Z [INFO]  TestPreparedQuery_Execute/#06: Endpoints down
>     --- PASS: TestPreparedQuery_Execute/#07 (0.24s)
>         writer.go:29: 2020-02-23T02:46:40.012Z [WARN]  TestPreparedQuery_Execute/#07: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:40.013Z [DEBUG] TestPreparedQuery_Execute/#07.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:40.013Z [DEBUG] TestPreparedQuery_Execute/#07.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:40.028Z [INFO]  TestPreparedQuery_Execute/#07.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fe10886c-a9f7-c338-0c09-8fe3c2508553 Address:127.0.0.1:16414}]"
>         writer.go:29: 2020-02-23T02:46:40.029Z [INFO]  TestPreparedQuery_Execute/#07.server.serf.wan: serf: EventMemberJoin: Node-fe10886c-a9f7-c338-0c09-8fe3c2508553.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:40.029Z [INFO]  TestPreparedQuery_Execute/#07.server.serf.lan: serf: EventMemberJoin: Node-fe10886c-a9f7-c338-0c09-8fe3c2508553 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:40.029Z [INFO]  TestPreparedQuery_Execute/#07: Started DNS server: address=127.0.0.1:16409 network=udp
>         writer.go:29: 2020-02-23T02:46:40.029Z [INFO]  TestPreparedQuery_Execute/#07.server.raft: entering follower state: follower="Node at 127.0.0.1:16414 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:40.029Z [INFO]  TestPreparedQuery_Execute/#07.server: Adding LAN server: server="Node-fe10886c-a9f7-c338-0c09-8fe3c2508553 (Addr: tcp/127.0.0.1:16414) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:40.030Z [INFO]  TestPreparedQuery_Execute/#07.server: Handled event for server in area: event=member-join server=Node-fe10886c-a9f7-c338-0c09-8fe3c2508553.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:40.030Z [INFO]  TestPreparedQuery_Execute/#07: Started DNS server: address=127.0.0.1:16409 network=tcp
>         writer.go:29: 2020-02-23T02:46:40.030Z [INFO]  TestPreparedQuery_Execute/#07: Started HTTP server: address=127.0.0.1:16410 network=tcp
>         writer.go:29: 2020-02-23T02:46:40.030Z [INFO]  TestPreparedQuery_Execute/#07: started state syncer
>         writer.go:29: 2020-02-23T02:46:40.092Z [WARN]  TestPreparedQuery_Execute/#07.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:40.093Z [INFO]  TestPreparedQuery_Execute/#07.server.raft: entering candidate state: node="Node at 127.0.0.1:16414 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:40.096Z [DEBUG] TestPreparedQuery_Execute/#07.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:40.096Z [DEBUG] TestPreparedQuery_Execute/#07.server.raft: vote granted: from=fe10886c-a9f7-c338-0c09-8fe3c2508553 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:40.096Z [INFO]  TestPreparedQuery_Execute/#07.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:40.096Z [INFO]  TestPreparedQuery_Execute/#07.server.raft: entering leader state: leader="Node at 127.0.0.1:16414 [Leader]"
>         writer.go:29: 2020-02-23T02:46:40.097Z [INFO]  TestPreparedQuery_Execute/#07.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:40.097Z [INFO]  TestPreparedQuery_Execute/#07.server: New leader elected: payload=Node-fe10886c-a9f7-c338-0c09-8fe3c2508553
>         writer.go:29: 2020-02-23T02:46:40.105Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:40.113Z [INFO]  TestPreparedQuery_Execute/#07.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:40.113Z [INFO]  TestPreparedQuery_Execute/#07.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:40.113Z [DEBUG] TestPreparedQuery_Execute/#07.server: Skipping self join check for node since the cluster is too small: node=Node-fe10886c-a9f7-c338-0c09-8fe3c2508553
>         writer.go:29: 2020-02-23T02:46:40.113Z [INFO]  TestPreparedQuery_Execute/#07.server: member joined, marking health alive: member=Node-fe10886c-a9f7-c338-0c09-8fe3c2508553
>         writer.go:29: 2020-02-23T02:46:40.232Z [INFO]  TestPreparedQuery_Execute/#07: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:40.232Z [INFO]  TestPreparedQuery_Execute/#07.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:40.232Z [DEBUG] TestPreparedQuery_Execute/#07.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:40.232Z [WARN]  TestPreparedQuery_Execute/#07.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:40.232Z [ERROR] TestPreparedQuery_Execute/#07.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:40.232Z [DEBUG] TestPreparedQuery_Execute/#07.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:40.234Z [WARN]  TestPreparedQuery_Execute/#07.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: consul server down
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: shutdown complete
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: Stopping server: protocol=DNS address=127.0.0.1:16409 network=tcp
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: Stopping server: protocol=DNS address=127.0.0.1:16409 network=udp
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: Stopping server: protocol=HTTP address=127.0.0.1:16410 network=tcp
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:40.236Z [INFO]  TestPreparedQuery_Execute/#07: Endpoints down
> === CONT  TestOperator_KeyringUse
> --- PASS: TestOperator_KeyringUse (0.11s)
>     writer.go:29: 2020-02-23T02:46:40.256Z [WARN]  TestOperator_KeyringUse: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.256Z [DEBUG] TestOperator_KeyringUse.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.257Z [DEBUG] TestOperator_KeyringUse.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.268Z [INFO]  TestOperator_KeyringUse.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:959a43b9-701d-7dce-b8e5-18b0e06594f2 Address:127.0.0.1:16432}]"
>     writer.go:29: 2020-02-23T02:46:40.269Z [INFO]  TestOperator_KeyringUse.server.serf.wan: serf: EventMemberJoin: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.269Z [INFO]  TestOperator_KeyringUse.server.serf.lan: serf: EventMemberJoin: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.269Z [INFO]  TestOperator_KeyringUse: Started DNS server: address=127.0.0.1:16427 network=udp
>     writer.go:29: 2020-02-23T02:46:40.269Z [INFO]  TestOperator_KeyringUse.server.raft: entering follower state: follower="Node at 127.0.0.1:16432 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.270Z [INFO]  TestOperator_KeyringUse.server: Adding LAN server: server="Node-959a43b9-701d-7dce-b8e5-18b0e06594f2 (Addr: tcp/127.0.0.1:16432) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.270Z [INFO]  TestOperator_KeyringUse.server: Handled event for server in area: event=member-join server=Node-959a43b9-701d-7dce-b8e5-18b0e06594f2.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.270Z [INFO]  TestOperator_KeyringUse: Started DNS server: address=127.0.0.1:16427 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.270Z [INFO]  TestOperator_KeyringUse: Started HTTP server: address=127.0.0.1:16428 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.270Z [INFO]  TestOperator_KeyringUse: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.323Z [WARN]  TestOperator_KeyringUse.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.323Z [INFO]  TestOperator_KeyringUse.server.raft: entering candidate state: node="Node at 127.0.0.1:16432 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.326Z [DEBUG] TestOperator_KeyringUse.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.326Z [DEBUG] TestOperator_KeyringUse.server.raft: vote granted: from=959a43b9-701d-7dce-b8e5-18b0e06594f2 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.326Z [INFO]  TestOperator_KeyringUse.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.326Z [INFO]  TestOperator_KeyringUse.server.raft: entering leader state: leader="Node at 127.0.0.1:16432 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.326Z [INFO]  TestOperator_KeyringUse.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.326Z [INFO]  TestOperator_KeyringUse.server: New leader elected: payload=Node-959a43b9-701d-7dce-b8e5-18b0e06594f2
>     writer.go:29: 2020-02-23T02:46:40.333Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.334Z [INFO]  TestOperator_KeyringUse.server.serf.wan: serf: Received install-key query
>     writer.go:29: 2020-02-23T02:46:40.334Z [DEBUG] TestOperator_KeyringUse.server.serf.wan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2.dc1
>     writer.go:29: 2020-02-23T02:46:40.334Z [DEBUG] TestOperator_KeyringUse.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.334Z [INFO]  TestOperator_KeyringUse.server.serf.lan: serf: Received install-key query
>     writer.go:29: 2020-02-23T02:46:40.335Z [DEBUG] TestOperator_KeyringUse.server.serf.lan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2
>     writer.go:29: 2020-02-23T02:46:40.335Z [INFO]  TestOperator_KeyringUse.server.serf.wan: serf: Received use-key query
>     writer.go:29: 2020-02-23T02:46:40.335Z [DEBUG] TestOperator_KeyringUse.server.serf.wan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2.dc1
>     writer.go:29: 2020-02-23T02:46:40.335Z [INFO]  TestOperator_KeyringUse.server.serf.lan: serf: Received use-key query
>     writer.go:29: 2020-02-23T02:46:40.335Z [DEBUG] TestOperator_KeyringUse.server.serf.lan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2
>     writer.go:29: 2020-02-23T02:46:40.335Z [INFO]  TestOperator_KeyringUse.server.serf.wan: serf: Received remove-key query
>     writer.go:29: 2020-02-23T02:46:40.336Z [DEBUG] TestOperator_KeyringUse.server.serf.wan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2.dc1
>     writer.go:29: 2020-02-23T02:46:40.336Z [INFO]  TestOperator_KeyringUse.server.serf.lan: serf: Received remove-key query
>     writer.go:29: 2020-02-23T02:46:40.336Z [DEBUG] TestOperator_KeyringUse.server.serf.lan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2
>     writer.go:29: 2020-02-23T02:46:40.336Z [INFO]  TestOperator_KeyringUse.server.serf.wan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.336Z [DEBUG] TestOperator_KeyringUse.server.serf.wan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2.dc1
>     writer.go:29: 2020-02-23T02:46:40.336Z [INFO]  TestOperator_KeyringUse.server.serf.lan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.336Z [DEBUG] TestOperator_KeyringUse.server.serf.lan: serf: messageQueryResponseType: Node-959a43b9-701d-7dce-b8e5-18b0e06594f2
>     writer.go:29: 2020-02-23T02:46:40.336Z [INFO]  TestOperator_KeyringUse: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.337Z [INFO]  TestOperator_KeyringUse.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.337Z [WARN]  TestOperator_KeyringUse.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.337Z [ERROR] TestOperator_KeyringUse.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:40.339Z [WARN]  TestOperator_KeyringUse.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: consul server down
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: Stopping server: protocol=DNS address=127.0.0.1:16427 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: Stopping server: protocol=DNS address=127.0.0.1:16427 network=udp
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: Stopping server: protocol=HTTP address=127.0.0.1:16428 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.341Z [INFO]  TestOperator_KeyringUse: Endpoints down
> === CONT  TestOperator_KeyringRemove
> --- PASS: TestOperator_ServerHealth (2.16s)
>     writer.go:29: 2020-02-23T02:46:38.336Z [WARN]  TestOperator_ServerHealth: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:38.336Z [DEBUG] TestOperator_ServerHealth.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:38.338Z [DEBUG] TestOperator_ServerHealth.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:38.358Z [INFO]  TestOperator_ServerHealth.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6ec0a6de-8320-5eaa-2b86-f679dd44d675 Address:127.0.0.1:16360}]"
>     writer.go:29: 2020-02-23T02:46:38.358Z [INFO]  TestOperator_ServerHealth.server.raft: entering follower state: follower="Node at 127.0.0.1:16360 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:38.359Z [INFO]  TestOperator_ServerHealth.server.serf.wan: serf: EventMemberJoin: Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:38.359Z [INFO]  TestOperator_ServerHealth.server.serf.lan: serf: EventMemberJoin: Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:38.359Z [INFO]  TestOperator_ServerHealth.server: Handled event for server in area: event=member-join server=Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:38.359Z [INFO]  TestOperator_ServerHealth.server: Adding LAN server: server="Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675 (Addr: tcp/127.0.0.1:16360) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:38.360Z [INFO]  TestOperator_ServerHealth: Started DNS server: address=127.0.0.1:16355 network=tcp
>     writer.go:29: 2020-02-23T02:46:38.360Z [INFO]  TestOperator_ServerHealth: Started DNS server: address=127.0.0.1:16355 network=udp
>     writer.go:29: 2020-02-23T02:46:38.361Z [INFO]  TestOperator_ServerHealth: Started HTTP server: address=127.0.0.1:16356 network=tcp
>     writer.go:29: 2020-02-23T02:46:38.361Z [INFO]  TestOperator_ServerHealth: started state syncer
>     writer.go:29: 2020-02-23T02:46:38.400Z [WARN]  TestOperator_ServerHealth.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:38.400Z [INFO]  TestOperator_ServerHealth.server.raft: entering candidate state: node="Node at 127.0.0.1:16360 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:38.404Z [DEBUG] TestOperator_ServerHealth.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:38.404Z [DEBUG] TestOperator_ServerHealth.server.raft: vote granted: from=6ec0a6de-8320-5eaa-2b86-f679dd44d675 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:38.404Z [INFO]  TestOperator_ServerHealth.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:38.404Z [INFO]  TestOperator_ServerHealth.server.raft: entering leader state: leader="Node at 127.0.0.1:16360 [Leader]"
>     writer.go:29: 2020-02-23T02:46:38.405Z [INFO]  TestOperator_ServerHealth.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:38.405Z [INFO]  TestOperator_ServerHealth.server: New leader elected: payload=Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675
>     writer.go:29: 2020-02-23T02:46:38.414Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:38.422Z [INFO]  TestOperator_ServerHealth.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:38.422Z [INFO]  TestOperator_ServerHealth.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:38.422Z [DEBUG] TestOperator_ServerHealth.server: Skipping self join check for node since the cluster is too small: node=Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675
>     writer.go:29: 2020-02-23T02:46:38.422Z [INFO]  TestOperator_ServerHealth.server: member joined, marking health alive: member=Node-6ec0a6de-8320-5eaa-2b86-f679dd44d675
>     writer.go:29: 2020-02-23T02:46:38.785Z [DEBUG] TestOperator_ServerHealth: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:38.789Z [INFO]  TestOperator_ServerHealth: Synced node info
>     writer.go:29: 2020-02-23T02:46:38.789Z [DEBUG] TestOperator_ServerHealth: Node info in sync
>     writer.go:29: 2020-02-23T02:46:39.011Z [DEBUG] TestOperator_ServerHealth: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:39.011Z [DEBUG] TestOperator_ServerHealth: Node info in sync
>     writer.go:29: 2020-02-23T02:46:40.410Z [DEBUG] TestOperator_ServerHealth.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.423Z [INFO]  TestOperator_ServerHealth: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.423Z [INFO]  TestOperator_ServerHealth.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.423Z [DEBUG] TestOperator_ServerHealth.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.423Z [WARN]  TestOperator_ServerHealth.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.423Z [DEBUG] TestOperator_ServerHealth.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.425Z [WARN]  TestOperator_ServerHealth.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: consul server down
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: Stopping server: protocol=DNS address=127.0.0.1:16355 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: Stopping server: protocol=DNS address=127.0.0.1:16355 network=udp
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: Stopping server: protocol=HTTP address=127.0.0.1:16356 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.426Z [INFO]  TestOperator_ServerHealth: Endpoints down
> === CONT  TestOperator_KeyringList
> --- PASS: TestOperator_ServerHealth_Unhealthy (2.18s)
>     writer.go:29: 2020-02-23T02:46:38.321Z [WARN]  TestOperator_ServerHealth_Unhealthy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:38.321Z [DEBUG] TestOperator_ServerHealth_Unhealthy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:38.322Z [DEBUG] TestOperator_ServerHealth_Unhealthy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:38.336Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cd649294-40f2-5954-c7ff-6700be2c26fb Address:127.0.0.1:16354}]"
>     writer.go:29: 2020-02-23T02:46:38.337Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.serf.wan: serf: EventMemberJoin: Node-cd649294-40f2-5954-c7ff-6700be2c26fb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:38.337Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.serf.lan: serf: EventMemberJoin: Node-cd649294-40f2-5954-c7ff-6700be2c26fb 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:38.337Z [INFO]  TestOperator_ServerHealth_Unhealthy: Started DNS server: address=127.0.0.1:16349 network=udp
>     writer.go:29: 2020-02-23T02:46:38.337Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.raft: entering follower state: follower="Node at 127.0.0.1:16354 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:38.338Z [INFO]  TestOperator_ServerHealth_Unhealthy.server: Adding LAN server: server="Node-cd649294-40f2-5954-c7ff-6700be2c26fb (Addr: tcp/127.0.0.1:16354) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:38.338Z [INFO]  TestOperator_ServerHealth_Unhealthy.server: Handled event for server in area: event=member-join server=Node-cd649294-40f2-5954-c7ff-6700be2c26fb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:38.338Z [INFO]  TestOperator_ServerHealth_Unhealthy: Started DNS server: address=127.0.0.1:16349 network=tcp
>     writer.go:29: 2020-02-23T02:46:38.338Z [INFO]  TestOperator_ServerHealth_Unhealthy: Started HTTP server: address=127.0.0.1:16350 network=tcp
>     writer.go:29: 2020-02-23T02:46:38.338Z [INFO]  TestOperator_ServerHealth_Unhealthy: started state syncer
>     writer.go:29: 2020-02-23T02:46:38.391Z [WARN]  TestOperator_ServerHealth_Unhealthy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:38.391Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.raft: entering candidate state: node="Node at 127.0.0.1:16354 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:38.394Z [DEBUG] TestOperator_ServerHealth_Unhealthy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:38.394Z [DEBUG] TestOperator_ServerHealth_Unhealthy.server.raft: vote granted: from=cd649294-40f2-5954-c7ff-6700be2c26fb term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:38.394Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:38.394Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.raft: entering leader state: leader="Node at 127.0.0.1:16354 [Leader]"
>     writer.go:29: 2020-02-23T02:46:38.394Z [INFO]  TestOperator_ServerHealth_Unhealthy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:38.394Z [INFO]  TestOperator_ServerHealth_Unhealthy.server: New leader elected: payload=Node-cd649294-40f2-5954-c7ff-6700be2c26fb
>     writer.go:29: 2020-02-23T02:46:38.402Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:38.409Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:38.409Z [INFO]  TestOperator_ServerHealth_Unhealthy.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:38.409Z [DEBUG] TestOperator_ServerHealth_Unhealthy.server: Skipping self join check for node since the cluster is too small: node=Node-cd649294-40f2-5954-c7ff-6700be2c26fb
>     writer.go:29: 2020-02-23T02:46:38.409Z [INFO]  TestOperator_ServerHealth_Unhealthy.server: member joined, marking health alive: member=Node-cd649294-40f2-5954-c7ff-6700be2c26fb
>     writer.go:29: 2020-02-23T02:46:38.586Z [DEBUG] TestOperator_ServerHealth_Unhealthy: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:38.589Z [INFO]  TestOperator_ServerHealth_Unhealthy: Synced node info
>     writer.go:29: 2020-02-23T02:46:40.398Z [DEBUG] TestOperator_ServerHealth_Unhealthy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.423Z [INFO]  TestOperator_ServerHealth_Unhealthy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.423Z [INFO]  TestOperator_ServerHealth_Unhealthy.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.423Z [DEBUG] TestOperator_ServerHealth_Unhealthy.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.423Z [WARN]  TestOperator_ServerHealth_Unhealthy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.423Z [DEBUG] TestOperator_ServerHealth_Unhealthy.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.425Z [WARN]  TestOperator_ServerHealth_Unhealthy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.430Z [INFO]  TestOperator_ServerHealth_Unhealthy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.430Z [INFO]  TestOperator_ServerHealth_Unhealthy: consul server down
>     writer.go:29: 2020-02-23T02:46:40.430Z [INFO]  TestOperator_ServerHealth_Unhealthy: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.430Z [INFO]  TestOperator_ServerHealth_Unhealthy: Stopping server: protocol=DNS address=127.0.0.1:16349 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.431Z [INFO]  TestOperator_ServerHealth_Unhealthy: Stopping server: protocol=DNS address=127.0.0.1:16349 network=udp
>     writer.go:29: 2020-02-23T02:46:40.431Z [INFO]  TestOperator_ServerHealth_Unhealthy: Stopping server: protocol=HTTP address=127.0.0.1:16350 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.431Z [INFO]  TestOperator_ServerHealth_Unhealthy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.431Z [INFO]  TestOperator_ServerHealth_Unhealthy: Endpoints down
> === CONT  TestDNS_syncExtra
> --- PASS: TestDNS_syncExtra (0.00s)
> === CONT  TestDNS_ServiceLookup_FilterCritical
> --- PASS: TestOperator_Keyring_InvalidRelayFactor (0.37s)
>     writer.go:29: 2020-02-23T02:46:40.180Z [WARN]  TestOperator_Keyring_InvalidRelayFactor: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.180Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.181Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.190Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:041c7303-4492-a7e2-a4a6-ffd7a8b7a63d Address:127.0.0.1:16420}]"
>     writer.go:29: 2020-02-23T02:46:40.190Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.raft: entering follower state: follower="Node at 127.0.0.1:16420 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.190Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.serf.wan: serf: EventMemberJoin: Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.191Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.serf.lan: serf: EventMemberJoin: Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.191Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server: Handled event for server in area: event=member-join server=Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.191Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server: Adding LAN server: server="Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d (Addr: tcp/127.0.0.1:16420) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.191Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Started DNS server: address=127.0.0.1:16415 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.191Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Started DNS server: address=127.0.0.1:16415 network=udp
>     writer.go:29: 2020-02-23T02:46:40.192Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Started HTTP server: address=127.0.0.1:16416 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.192Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.241Z [WARN]  TestOperator_Keyring_InvalidRelayFactor.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.241Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.raft: entering candidate state: node="Node at 127.0.0.1:16420 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.246Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.246Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.server.raft: vote granted: from=041c7303-4492-a7e2-a4a6-ffd7a8b7a63d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.246Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.246Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.raft: entering leader state: leader="Node at 127.0.0.1:16420 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.246Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.246Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server: New leader elected: payload=Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d
>     writer.go:29: 2020-02-23T02:46:40.253Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.261Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.261Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.261Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.server: Skipping self join check for node since the cluster is too small: node=Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d
>     writer.go:29: 2020-02-23T02:46:40.261Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server: member joined, marking health alive: member=Node-041c7303-4492-a7e2-a4a6-ffd7a8b7a63d
>     writer.go:29: 2020-02-23T02:46:40.365Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:40.367Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Synced node info
>     writer.go:29: 2020-02-23T02:46:40.367Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor: Node info in sync
>     writer.go:29: 2020-02-23T02:46:40.532Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.532Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.532Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.532Z [WARN]  TestOperator_Keyring_InvalidRelayFactor.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.532Z [DEBUG] TestOperator_Keyring_InvalidRelayFactor.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.534Z [WARN]  TestOperator_Keyring_InvalidRelayFactor.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: consul server down
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Stopping server: protocol=DNS address=127.0.0.1:16415 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Stopping server: protocol=DNS address=127.0.0.1:16415 network=udp
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Stopping server: protocol=HTTP address=127.0.0.1:16416 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.538Z [INFO]  TestOperator_Keyring_InvalidRelayFactor: Endpoints down
> === CONT  TestDNS_trimUDPResponse_TrimSizeEDNS
> --- PASS: TestDNS_trimUDPResponse_TrimSizeEDNS (0.00s)
> === CONT  TestDNS_trimUDPResponse_TrimSize
> --- PASS: TestDNS_trimUDPResponse_TrimSize (0.01s)
> === CONT  TestDNS_trimUDPResponse_TrimLimit
> --- PASS: TestDNS_trimUDPResponse_TrimLimit (0.01s)
> === CONT  TestDNS_trimUDPResponse_NoTrim
> --- PASS: TestDNS_trimUDPResponse_NoTrim (0.01s)
> === CONT  TestDNS_PreparedQuery_AgentSource
> --- PASS: TestOperator_KeyringRemove (0.31s)
>     writer.go:29: 2020-02-23T02:46:40.350Z [WARN]  TestOperator_KeyringRemove: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.350Z [DEBUG] TestOperator_KeyringRemove.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.351Z [DEBUG] TestOperator_KeyringRemove.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.359Z [INFO]  TestOperator_KeyringRemove.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3dcb2018-94c4-5db2-6377-1b8f52671565 Address:127.0.0.1:16438}]"
>     writer.go:29: 2020-02-23T02:46:40.359Z [INFO]  TestOperator_KeyringRemove.server.raft: entering follower state: follower="Node at 127.0.0.1:16438 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.360Z [INFO]  TestOperator_KeyringRemove.server.serf.wan: serf: EventMemberJoin: Node-3dcb2018-94c4-5db2-6377-1b8f52671565.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.361Z [INFO]  TestOperator_KeyringRemove.server.serf.lan: serf: EventMemberJoin: Node-3dcb2018-94c4-5db2-6377-1b8f52671565 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.361Z [INFO]  TestOperator_KeyringRemove.server: Handled event for server in area: event=member-join server=Node-3dcb2018-94c4-5db2-6377-1b8f52671565.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.361Z [INFO]  TestOperator_KeyringRemove.server: Adding LAN server: server="Node-3dcb2018-94c4-5db2-6377-1b8f52671565 (Addr: tcp/127.0.0.1:16438) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.361Z [INFO]  TestOperator_KeyringRemove: Started DNS server: address=127.0.0.1:16433 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.361Z [INFO]  TestOperator_KeyringRemove: Started DNS server: address=127.0.0.1:16433 network=udp
>     writer.go:29: 2020-02-23T02:46:40.361Z [INFO]  TestOperator_KeyringRemove: Started HTTP server: address=127.0.0.1:16434 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.362Z [INFO]  TestOperator_KeyringRemove: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.415Z [WARN]  TestOperator_KeyringRemove.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.415Z [INFO]  TestOperator_KeyringRemove.server.raft: entering candidate state: node="Node at 127.0.0.1:16438 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.418Z [DEBUG] TestOperator_KeyringRemove.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.419Z [DEBUG] TestOperator_KeyringRemove.server.raft: vote granted: from=3dcb2018-94c4-5db2-6377-1b8f52671565 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.419Z [INFO]  TestOperator_KeyringRemove.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.419Z [INFO]  TestOperator_KeyringRemove.server.raft: entering leader state: leader="Node at 127.0.0.1:16438 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.419Z [INFO]  TestOperator_KeyringRemove.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.419Z [INFO]  TestOperator_KeyringRemove.server: New leader elected: payload=Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.435Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.490Z [INFO]  TestOperator_KeyringRemove: Synced node info
>     writer.go:29: 2020-02-23T02:46:40.490Z [DEBUG] TestOperator_KeyringRemove: Node info in sync
>     writer.go:29: 2020-02-23T02:46:40.524Z [INFO]  TestOperator_KeyringRemove.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.524Z [INFO]  TestOperator_KeyringRemove.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.524Z [DEBUG] TestOperator_KeyringRemove.server: Skipping self join check for node since the cluster is too small: node=Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.524Z [INFO]  TestOperator_KeyringRemove.server: member joined, marking health alive: member=Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.645Z [INFO]  TestOperator_KeyringRemove.server.serf.wan: serf: Received install-key query
>     writer.go:29: 2020-02-23T02:46:40.645Z [DEBUG] TestOperator_KeyringRemove.server.serf.wan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565.dc1
>     writer.go:29: 2020-02-23T02:46:40.645Z [DEBUG] TestOperator_KeyringRemove.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.645Z [INFO]  TestOperator_KeyringRemove.server.serf.lan: serf: Received install-key query
>     writer.go:29: 2020-02-23T02:46:40.646Z [DEBUG] TestOperator_KeyringRemove.server.serf.lan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.646Z [INFO]  TestOperator_KeyringRemove.server.serf.wan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.646Z [DEBUG] TestOperator_KeyringRemove.server.serf.wan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565.dc1
>     writer.go:29: 2020-02-23T02:46:40.646Z [INFO]  TestOperator_KeyringRemove.server.serf.lan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.646Z [DEBUG] TestOperator_KeyringRemove.server.serf.lan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.646Z [INFO]  TestOperator_KeyringRemove.server.serf.wan: serf: Received remove-key query
>     writer.go:29: 2020-02-23T02:46:40.647Z [DEBUG] TestOperator_KeyringRemove.server.serf.wan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565.dc1
>     writer.go:29: 2020-02-23T02:46:40.647Z [INFO]  TestOperator_KeyringRemove.server.serf.lan: serf: Received remove-key query
>     writer.go:29: 2020-02-23T02:46:40.647Z [DEBUG] TestOperator_KeyringRemove.server.serf.lan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.647Z [INFO]  TestOperator_KeyringRemove.server.serf.wan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.647Z [DEBUG] TestOperator_KeyringRemove.server.serf.wan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565.dc1
>     writer.go:29: 2020-02-23T02:46:40.647Z [INFO]  TestOperator_KeyringRemove.server.serf.lan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.647Z [DEBUG] TestOperator_KeyringRemove.server.serf.lan: serf: messageQueryResponseType: Node-3dcb2018-94c4-5db2-6377-1b8f52671565
>     writer.go:29: 2020-02-23T02:46:40.648Z [INFO]  TestOperator_KeyringRemove: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.648Z [INFO]  TestOperator_KeyringRemove.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.648Z [DEBUG] TestOperator_KeyringRemove.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.648Z [WARN]  TestOperator_KeyringRemove.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.648Z [DEBUG] TestOperator_KeyringRemove.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.649Z [WARN]  TestOperator_KeyringRemove.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: consul server down
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: Stopping server: protocol=DNS address=127.0.0.1:16433 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: Stopping server: protocol=DNS address=127.0.0.1:16433 network=udp
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: Stopping server: protocol=HTTP address=127.0.0.1:16434 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.651Z [INFO]  TestOperator_KeyringRemove: Endpoints down
> === CONT  TestDNS_InvalidQueries
> --- PASS: TestDNS_ServiceLookup_FilterCritical (0.25s)
>     writer.go:29: 2020-02-23T02:46:40.442Z [WARN]  TestDNS_ServiceLookup_FilterCritical: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.442Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.442Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.534Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:81bf70f8-a7c8-7cb6-4560-a695e3a975df Address:127.0.0.1:16456}]"
>     writer.go:29: 2020-02-23T02:46:40.534Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.raft: entering follower state: follower="Node at 127.0.0.1:16456 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.serf.wan: serf: EventMemberJoin: Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.serf.lan: serf: EventMemberJoin: Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Started DNS server: address=127.0.0.1:16451 network=udp
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server: Adding LAN server: server="Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df (Addr: tcp/127.0.0.1:16456) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server: Handled event for server in area: event=member-join server=Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Started DNS server: address=127.0.0.1:16451 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Started HTTP server: address=127.0.0.1:16452 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestDNS_ServiceLookup_FilterCritical: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.574Z [WARN]  TestDNS_ServiceLookup_FilterCritical.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.574Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.raft: entering candidate state: node="Node at 127.0.0.1:16456 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.587Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.587Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.server.raft: vote granted: from=81bf70f8-a7c8-7cb6-4560-a695e3a975df term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.587Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.587Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.raft: entering leader state: leader="Node at 127.0.0.1:16456 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.588Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.589Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server: New leader elected: payload=Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df
>     writer.go:29: 2020-02-23T02:46:40.597Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.612Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.612Z [INFO]  TestDNS_ServiceLookup_FilterCritical.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.612Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.server: Skipping self join check for node since the cluster is too small: node=Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df
>     writer.go:29: 2020-02-23T02:46:40.612Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server: member joined, marking health alive: member=Node-81bf70f8-a7c8-7cb6-4560-a695e3a975df
>     writer.go:29: 2020-02-23T02:46:40.679Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.dns: request served from client: name=db.service.consul. type=ANY class=IN latency=134.871µs client=127.0.0.1:60028 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.680Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.dns: request served from client: name=6a6feae9-e612-d29e-c265-18f0c7f6aa67.query.consul. type=ANY class=IN latency=100.542µs client=127.0.0.1:43800 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.680Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.680Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.680Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.680Z [WARN]  TestDNS_ServiceLookup_FilterCritical.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.680Z [ERROR] TestDNS_ServiceLookup_FilterCritical.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:40.680Z [DEBUG] TestDNS_ServiceLookup_FilterCritical.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.682Z [WARN]  TestDNS_ServiceLookup_FilterCritical.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.683Z [INFO]  TestDNS_ServiceLookup_FilterCritical.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.683Z [INFO]  TestDNS_ServiceLookup_FilterCritical: consul server down
>     writer.go:29: 2020-02-23T02:46:40.683Z [INFO]  TestDNS_ServiceLookup_FilterCritical: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.683Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Stopping server: protocol=DNS address=127.0.0.1:16451 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.683Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Stopping server: protocol=DNS address=127.0.0.1:16451 network=udp
>     writer.go:29: 2020-02-23T02:46:40.683Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Stopping server: protocol=HTTP address=127.0.0.1:16452 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.684Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.684Z [INFO]  TestDNS_ServiceLookup_FilterCritical: Endpoints down
> === CONT  TestDNS_PreparedQuery_AllowStale
> --- PASS: TestOperator_KeyringList (0.36s)
>     writer.go:29: 2020-02-23T02:46:40.443Z [WARN]  TestOperator_KeyringList: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.447Z [DEBUG] TestOperator_KeyringList.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.448Z [DEBUG] TestOperator_KeyringList.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.535Z [INFO]  TestOperator_KeyringList.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:afd9aed7-2c20-6ac3-e72a-94932268cda0 Address:127.0.0.1:16450}]"
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestOperator_KeyringList.server.serf.wan: serf: EventMemberJoin: Node-afd9aed7-2c20-6ac3-e72a-94932268cda0.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestOperator_KeyringList.server.raft: entering follower state: follower="Node at 127.0.0.1:16450 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestOperator_KeyringList.server.serf.lan: serf: EventMemberJoin: Node-afd9aed7-2c20-6ac3-e72a-94932268cda0 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestOperator_KeyringList: Started DNS server: address=127.0.0.1:16445 network=udp
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestOperator_KeyringList.server: Adding LAN server: server="Node-afd9aed7-2c20-6ac3-e72a-94932268cda0 (Addr: tcp/127.0.0.1:16450) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.536Z [INFO]  TestOperator_KeyringList.server: Handled event for server in area: event=member-join server=Node-afd9aed7-2c20-6ac3-e72a-94932268cda0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.537Z [INFO]  TestOperator_KeyringList: Started DNS server: address=127.0.0.1:16445 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.537Z [INFO]  TestOperator_KeyringList: Started HTTP server: address=127.0.0.1:16446 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.537Z [INFO]  TestOperator_KeyringList: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.602Z [WARN]  TestOperator_KeyringList.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.602Z [INFO]  TestOperator_KeyringList.server.raft: entering candidate state: node="Node at 127.0.0.1:16450 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.618Z [DEBUG] TestOperator_KeyringList.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.618Z [DEBUG] TestOperator_KeyringList.server.raft: vote granted: from=afd9aed7-2c20-6ac3-e72a-94932268cda0 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.618Z [INFO]  TestOperator_KeyringList.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.618Z [INFO]  TestOperator_KeyringList.server.raft: entering leader state: leader="Node at 127.0.0.1:16450 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.619Z [INFO]  TestOperator_KeyringList.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.619Z [INFO]  TestOperator_KeyringList.server: New leader elected: payload=Node-afd9aed7-2c20-6ac3-e72a-94932268cda0
>     writer.go:29: 2020-02-23T02:46:40.627Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.635Z [INFO]  TestOperator_KeyringList.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.635Z [INFO]  TestOperator_KeyringList.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.635Z [DEBUG] TestOperator_KeyringList.server: Skipping self join check for node since the cluster is too small: node=Node-afd9aed7-2c20-6ac3-e72a-94932268cda0
>     writer.go:29: 2020-02-23T02:46:40.635Z [INFO]  TestOperator_KeyringList.server: member joined, marking health alive: member=Node-afd9aed7-2c20-6ac3-e72a-94932268cda0
>     writer.go:29: 2020-02-23T02:46:40.707Z [INFO]  TestOperator_KeyringList.server.serf.wan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.708Z [DEBUG] TestOperator_KeyringList.server.serf.wan: serf: messageQueryResponseType: Node-afd9aed7-2c20-6ac3-e72a-94932268cda0.dc1
>     writer.go:29: 2020-02-23T02:46:40.708Z [DEBUG] TestOperator_KeyringList.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.708Z [INFO]  TestOperator_KeyringList.server.serf.lan: serf: Received list-keys query
>     writer.go:29: 2020-02-23T02:46:40.708Z [DEBUG] TestOperator_KeyringList.server.serf.lan: serf: messageQueryResponseType: Node-afd9aed7-2c20-6ac3-e72a-94932268cda0
>     writer.go:29: 2020-02-23T02:46:40.708Z [INFO]  TestOperator_KeyringList: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.708Z [INFO]  TestOperator_KeyringList.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.708Z [DEBUG] TestOperator_KeyringList.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.708Z [WARN]  TestOperator_KeyringList.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.708Z [ERROR] TestOperator_KeyringList.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:40.708Z [DEBUG] TestOperator_KeyringList.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.740Z [WARN]  TestOperator_KeyringList.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: consul server down
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: Stopping server: protocol=DNS address=127.0.0.1:16445 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: Stopping server: protocol=DNS address=127.0.0.1:16445 network=udp
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: Stopping server: protocol=HTTP address=127.0.0.1:16446 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.787Z [INFO]  TestOperator_KeyringList: Endpoints down
> === CONT  TestDNS_AltDomains_Overlap
> --- PASS: TestDNS_InvalidQueries (0.22s)
>     writer.go:29: 2020-02-23T02:46:40.658Z [WARN]  TestDNS_InvalidQueries: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.658Z [DEBUG] TestDNS_InvalidQueries.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.659Z [DEBUG] TestDNS_InvalidQueries.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.674Z [INFO]  TestDNS_InvalidQueries.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:af91d51e-acb5-e891-3aa8-ee9eadb2d9f6 Address:127.0.0.1:16462}]"
>     writer.go:29: 2020-02-23T02:46:40.674Z [INFO]  TestDNS_InvalidQueries.server.raft: entering follower state: follower="Node at 127.0.0.1:16462 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.674Z [INFO]  TestDNS_InvalidQueries.server.serf.wan: serf: EventMemberJoin: Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries.server.serf.lan: serf: EventMemberJoin: Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries.server: Adding LAN server: server="Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6 (Addr: tcp/127.0.0.1:16462) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries.server: Handled event for server in area: event=member-join server=Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries: Started DNS server: address=127.0.0.1:16457 network=udp
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries: Started DNS server: address=127.0.0.1:16457 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries: Started HTTP server: address=127.0.0.1:16458 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.675Z [INFO]  TestDNS_InvalidQueries: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.723Z [WARN]  TestDNS_InvalidQueries.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.723Z [INFO]  TestDNS_InvalidQueries.server.raft: entering candidate state: node="Node at 127.0.0.1:16462 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.788Z [DEBUG] TestDNS_InvalidQueries.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.788Z [DEBUG] TestDNS_InvalidQueries.server.raft: vote granted: from=af91d51e-acb5-e891-3aa8-ee9eadb2d9f6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.788Z [INFO]  TestDNS_InvalidQueries.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.788Z [INFO]  TestDNS_InvalidQueries.server.raft: entering leader state: leader="Node at 127.0.0.1:16462 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.789Z [INFO]  TestDNS_InvalidQueries.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.789Z [INFO]  TestDNS_InvalidQueries.server: New leader elected: payload=Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6
>     writer.go:29: 2020-02-23T02:46:40.796Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.810Z [INFO]  TestDNS_InvalidQueries.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.810Z [INFO]  TestDNS_InvalidQueries.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.810Z [DEBUG] TestDNS_InvalidQueries.server: Skipping self join check for node since the cluster is too small: node=Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6
>     writer.go:29: 2020-02-23T02:46:40.810Z [INFO]  TestDNS_InvalidQueries.server: member joined, marking health alive: member=Node-af91d51e-acb5-e891-3aa8-ee9eadb2d9f6
>     writer.go:29: 2020-02-23T02:46:40.867Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=
>     writer.go:29: 2020-02-23T02:46:40.867Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=consul. type=SRV class=IN latency=61.976µs client=127.0.0.1:35794 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.867Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=node.
>     writer.go:29: 2020-02-23T02:46:40.867Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=node.consul. type=SRV class=IN latency=37.195µs client=127.0.0.1:33073 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.867Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=service.
>     writer.go:29: 2020-02-23T02:46:40.867Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=service.consul. type=SRV class=IN latency=31.979µs client=127.0.0.1:40637 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.867Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=query.
>     writer.go:29: 2020-02-23T02:46:40.867Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=query.consul. type=SRV class=IN latency=39.479µs client=127.0.0.1:33386 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.867Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=foo.node.dc1.extra.more.
>     writer.go:29: 2020-02-23T02:46:40.867Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=foo.node.dc1.extra.more.consul. type=SRV class=IN latency=36.456µs client=127.0.0.1:37578 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.867Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=foo.service.dc1.extra.more.
>     writer.go:29: 2020-02-23T02:46:40.867Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=foo.service.dc1.extra.more.consul. type=SRV class=IN latency=34.373µs client=127.0.0.1:42683 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.868Z [WARN]  TestDNS_InvalidQueries.dns: QName invalid: qname=foo.query.dc1.extra.more.
>     writer.go:29: 2020-02-23T02:46:40.868Z [DEBUG] TestDNS_InvalidQueries.dns: request served from client: name=foo.query.dc1.extra.more.consul. type=SRV class=IN latency=33.888µs client=127.0.0.1:45010 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.868Z [INFO]  TestDNS_InvalidQueries: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.868Z [INFO]  TestDNS_InvalidQueries.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.868Z [DEBUG] TestDNS_InvalidQueries.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.868Z [WARN]  TestDNS_InvalidQueries.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.868Z [ERROR] TestDNS_InvalidQueries.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:40.868Z [DEBUG] TestDNS_InvalidQueries.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.869Z [WARN]  TestDNS_InvalidQueries.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: consul server down
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: Stopping server: protocol=DNS address=127.0.0.1:16457 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: Stopping server: protocol=DNS address=127.0.0.1:16457 network=udp
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: Stopping server: protocol=HTTP address=127.0.0.1:16458 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.871Z [INFO]  TestDNS_InvalidQueries: Endpoints down
> === CONT  TestDNS_AltDomains_SOA
> --- PASS: TestDNS_PreparedQuery_AgentSource (0.42s)
>     writer.go:29: 2020-02-23T02:46:40.610Z [WARN]  TestDNS_PreparedQuery_AgentSource: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.610Z [DEBUG] TestDNS_PreparedQuery_AgentSource.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.610Z [DEBUG] TestDNS_PreparedQuery_AgentSource.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.621Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c0c9798c-a099-66e0-77dc-5a2f39173581 Address:127.0.0.1:16444}]"
>     writer.go:29: 2020-02-23T02:46:40.621Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.serf.wan: serf: EventMemberJoin: Node-c0c9798c-a099-66e0-77dc-5a2f39173581.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.serf.lan: serf: EventMemberJoin: Node-c0c9798c-a099-66e0-77dc-5a2f39173581 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource: Started DNS server: address=127.0.0.1:16439 network=udp
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.raft: entering follower state: follower="Node at 127.0.0.1:16444 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource.server: Adding LAN server: server="Node-c0c9798c-a099-66e0-77dc-5a2f39173581 (Addr: tcp/127.0.0.1:16444) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource.server: Handled event for server in area: event=member-join server=Node-c0c9798c-a099-66e0-77dc-5a2f39173581.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource: Started DNS server: address=127.0.0.1:16439 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.622Z [INFO]  TestDNS_PreparedQuery_AgentSource: Started HTTP server: address=127.0.0.1:16440 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.623Z [INFO]  TestDNS_PreparedQuery_AgentSource: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.679Z [WARN]  TestDNS_PreparedQuery_AgentSource.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.679Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.raft: entering candidate state: node="Node at 127.0.0.1:16444 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.687Z [DEBUG] TestDNS_PreparedQuery_AgentSource.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.687Z [DEBUG] TestDNS_PreparedQuery_AgentSource.server.raft: vote granted: from=c0c9798c-a099-66e0-77dc-5a2f39173581 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.687Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.688Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.raft: entering leader state: leader="Node at 127.0.0.1:16444 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.688Z [INFO]  TestDNS_PreparedQuery_AgentSource.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.688Z [INFO]  TestDNS_PreparedQuery_AgentSource.server: New leader elected: payload=Node-c0c9798c-a099-66e0-77dc-5a2f39173581
>     writer.go:29: 2020-02-23T02:46:40.695Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.726Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.726Z [INFO]  TestDNS_PreparedQuery_AgentSource.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.726Z [DEBUG] TestDNS_PreparedQuery_AgentSource.server: Skipping self join check for node since the cluster is too small: node=Node-c0c9798c-a099-66e0-77dc-5a2f39173581
>     writer.go:29: 2020-02-23T02:46:40.726Z [INFO]  TestDNS_PreparedQuery_AgentSource.server: member joined, marking health alive: member=Node-c0c9798c-a099-66e0-77dc-5a2f39173581
>     writer.go:29: 2020-02-23T02:46:40.986Z [WARN]  TestDNS_PreparedQuery_AgentSource.server: endpoint injected; this should only be used for testing
>     writer.go:29: 2020-02-23T02:46:40.987Z [DEBUG] TestDNS_PreparedQuery_AgentSource.dns: request served from client: name=foo.query.consul. type=SRV class=IN latency=58.987µs client=127.0.0.1:57913 client_network=udp
>     writer.go:29: 2020-02-23T02:46:40.987Z [INFO]  TestDNS_PreparedQuery_AgentSource: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:40.987Z [INFO]  TestDNS_PreparedQuery_AgentSource.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:40.987Z [DEBUG] TestDNS_PreparedQuery_AgentSource.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.987Z [WARN]  TestDNS_PreparedQuery_AgentSource.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.987Z [ERROR] TestDNS_PreparedQuery_AgentSource.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:40.987Z [DEBUG] TestDNS_PreparedQuery_AgentSource.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.988Z [WARN]  TestDNS_PreparedQuery_AgentSource.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: consul server down
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: shutdown complete
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: Stopping server: protocol=DNS address=127.0.0.1:16439 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: Stopping server: protocol=DNS address=127.0.0.1:16439 network=udp
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: Stopping server: protocol=HTTP address=127.0.0.1:16440 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:40.990Z [INFO]  TestDNS_PreparedQuery_AgentSource: Endpoints down
> === CONT  TestDNS_AltDomains_Service
> --- PASS: TestDNS_AltDomains_Overlap (0.36s)
>     writer.go:29: 2020-02-23T02:46:40.797Z [WARN]  TestDNS_AltDomains_Overlap: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.797Z [DEBUG] TestDNS_AltDomains_Overlap.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.798Z [DEBUG] TestDNS_AltDomains_Overlap.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.815Z [INFO]  TestDNS_AltDomains_Overlap.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dc1f8fcf-195e-eaf4-9a65-88303618d948 Address:127.0.0.1:16468}]"
>     writer.go:29: 2020-02-23T02:46:40.815Z [INFO]  TestDNS_AltDomains_Overlap.server.raft: entering follower state: follower="Node at 127.0.0.1:16468 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.815Z [INFO]  TestDNS_AltDomains_Overlap.server.serf.wan: serf: EventMemberJoin: test-node.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.816Z [INFO]  TestDNS_AltDomains_Overlap.server.serf.lan: serf: EventMemberJoin: test-node 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.816Z [INFO]  TestDNS_AltDomains_Overlap.server: Handled event for server in area: event=member-join server=test-node.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.816Z [INFO]  TestDNS_AltDomains_Overlap.server: Adding LAN server: server="test-node (Addr: tcp/127.0.0.1:16468) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.816Z [INFO]  TestDNS_AltDomains_Overlap: Started DNS server: address=127.0.0.1:16463 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.816Z [INFO]  TestDNS_AltDomains_Overlap: Started DNS server: address=127.0.0.1:16463 network=udp
>     writer.go:29: 2020-02-23T02:46:40.817Z [INFO]  TestDNS_AltDomains_Overlap: Started HTTP server: address=127.0.0.1:16464 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.817Z [INFO]  TestDNS_AltDomains_Overlap: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.863Z [WARN]  TestDNS_AltDomains_Overlap.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.863Z [INFO]  TestDNS_AltDomains_Overlap.server.raft: entering candidate state: node="Node at 127.0.0.1:16468 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.866Z [DEBUG] TestDNS_AltDomains_Overlap.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.866Z [DEBUG] TestDNS_AltDomains_Overlap.server.raft: vote granted: from=dc1f8fcf-195e-eaf4-9a65-88303618d948 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.866Z [INFO]  TestDNS_AltDomains_Overlap.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.866Z [INFO]  TestDNS_AltDomains_Overlap.server.raft: entering leader state: leader="Node at 127.0.0.1:16468 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.866Z [INFO]  TestDNS_AltDomains_Overlap.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.866Z [INFO]  TestDNS_AltDomains_Overlap.server: New leader elected: payload=test-node
>     writer.go:29: 2020-02-23T02:46:40.883Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.901Z [INFO]  TestDNS_AltDomains_Overlap.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.901Z [INFO]  TestDNS_AltDomains_Overlap.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.901Z [DEBUG] TestDNS_AltDomains_Overlap.server: Skipping self join check for node since the cluster is too small: node=test-node
>     writer.go:29: 2020-02-23T02:46:40.901Z [INFO]  TestDNS_AltDomains_Overlap.server: member joined, marking health alive: member=test-node
>     writer.go:29: 2020-02-23T02:46:41.113Z [DEBUG] TestDNS_AltDomains_Overlap.dns: request served from client: name=test-node.node.consul. type=A class=IN latency=70.556µs client=127.0.0.1:43003 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.113Z [DEBUG] TestDNS_AltDomains_Overlap.dns: request served from client: name=test-node.node.test.consul. type=A class=IN latency=40.264µs client=127.0.0.1:57761 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.113Z [DEBUG] TestDNS_AltDomains_Overlap.dns: request served from client: name=test-node.node.dc1.consul. type=A class=IN latency=38.428µs client=127.0.0.1:38480 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.113Z [DEBUG] TestDNS_AltDomains_Overlap.dns: request served from client: name=test-node.node.dc1.test.consul. type=A class=IN latency=34.547µs client=127.0.0.1:42626 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.113Z [INFO]  TestDNS_AltDomains_Overlap: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.113Z [INFO]  TestDNS_AltDomains_Overlap.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.113Z [DEBUG] TestDNS_AltDomains_Overlap.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.113Z [WARN]  TestDNS_AltDomains_Overlap.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.113Z [ERROR] TestDNS_AltDomains_Overlap.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:41.113Z [DEBUG] TestDNS_AltDomains_Overlap.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.142Z [WARN]  TestDNS_AltDomains_Overlap.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: consul server down
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: Stopping server: protocol=DNS address=127.0.0.1:16463 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: Stopping server: protocol=DNS address=127.0.0.1:16463 network=udp
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: Stopping server: protocol=HTTP address=127.0.0.1:16464 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.150Z [INFO]  TestDNS_AltDomains_Overlap: Endpoints down
> === CONT  TestDNS_NonExistingLookupEmptyAorAAAA
> --- PASS: TestDNS_AltDomains_SOA (0.33s)
>     writer.go:29: 2020-02-23T02:46:40.878Z [WARN]  TestDNS_AltDomains_SOA: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.878Z [DEBUG] TestDNS_AltDomains_SOA.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.878Z [DEBUG] TestDNS_AltDomains_SOA.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.889Z [INFO]  TestDNS_AltDomains_SOA.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:320356fc-0dfc-fb9d-32a1-ec61bd30e611 Address:127.0.0.1:16474}]"
>     writer.go:29: 2020-02-23T02:46:40.890Z [INFO]  TestDNS_AltDomains_SOA.server.raft: entering follower state: follower="Node at 127.0.0.1:16474 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.890Z [INFO]  TestDNS_AltDomains_SOA.server.serf.wan: serf: EventMemberJoin: test-node.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.902Z [INFO]  TestDNS_AltDomains_SOA.server.serf.lan: serf: EventMemberJoin: test-node 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.902Z [INFO]  TestDNS_AltDomains_SOA.server: Adding LAN server: server="test-node (Addr: tcp/127.0.0.1:16474) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.902Z [INFO]  TestDNS_AltDomains_SOA.server: Handled event for server in area: event=member-join server=test-node.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.902Z [INFO]  TestDNS_AltDomains_SOA: Started DNS server: address=127.0.0.1:16469 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.903Z [INFO]  TestDNS_AltDomains_SOA: Started DNS server: address=127.0.0.1:16469 network=udp
>     writer.go:29: 2020-02-23T02:46:40.903Z [INFO]  TestDNS_AltDomains_SOA: Started HTTP server: address=127.0.0.1:16470 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.903Z [INFO]  TestDNS_AltDomains_SOA: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.926Z [WARN]  TestDNS_AltDomains_SOA.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.926Z [INFO]  TestDNS_AltDomains_SOA.server.raft: entering candidate state: node="Node at 127.0.0.1:16474 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.929Z [DEBUG] TestDNS_AltDomains_SOA.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.929Z [DEBUG] TestDNS_AltDomains_SOA.server.raft: vote granted: from=320356fc-0dfc-fb9d-32a1-ec61bd30e611 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.929Z [INFO]  TestDNS_AltDomains_SOA.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.929Z [INFO]  TestDNS_AltDomains_SOA.server.raft: entering leader state: leader="Node at 127.0.0.1:16474 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.930Z [INFO]  TestDNS_AltDomains_SOA.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.930Z [INFO]  TestDNS_AltDomains_SOA.server: New leader elected: payload=test-node
>     writer.go:29: 2020-02-23T02:46:40.936Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.944Z [INFO]  TestDNS_AltDomains_SOA.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.944Z [INFO]  TestDNS_AltDomains_SOA.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.944Z [DEBUG] TestDNS_AltDomains_SOA.server: Skipping self join check for node since the cluster is too small: node=test-node
>     writer.go:29: 2020-02-23T02:46:40.944Z [INFO]  TestDNS_AltDomains_SOA.server: member joined, marking health alive: member=test-node
>     writer.go:29: 2020-02-23T02:46:41.092Z [DEBUG] TestDNS_AltDomains_SOA: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:41.143Z [INFO]  TestDNS_AltDomains_SOA: Synced node info
>     writer.go:29: 2020-02-23T02:46:41.200Z [DEBUG] TestDNS_AltDomains_SOA.dns: request served from client: name=test-node.node.consul. type=SOA class=IN latency=99.404µs client=127.0.0.1:55000 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.200Z [DEBUG] TestDNS_AltDomains_SOA.dns: request served from client: name=test-node.node.test-domain. type=SOA class=IN latency=60.817µs client=127.0.0.1:40209 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.200Z [INFO]  TestDNS_AltDomains_SOA: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.200Z [INFO]  TestDNS_AltDomains_SOA.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.200Z [DEBUG] TestDNS_AltDomains_SOA.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.200Z [WARN]  TestDNS_AltDomains_SOA.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.200Z [DEBUG] TestDNS_AltDomains_SOA.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.202Z [WARN]  TestDNS_AltDomains_SOA.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: consul server down
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: Stopping server: protocol=DNS address=127.0.0.1:16469 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: Stopping server: protocol=DNS address=127.0.0.1:16469 network=udp
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: Stopping server: protocol=HTTP address=127.0.0.1:16470 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.204Z [INFO]  TestDNS_AltDomains_SOA: Endpoints down
> === CONT  TestDNS_NonExistingLookup
> --- PASS: TestDNS_PreparedQuery_AllowStale (0.54s)
>     writer.go:29: 2020-02-23T02:46:40.694Z [WARN]  TestDNS_PreparedQuery_AllowStale: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.694Z [DEBUG] TestDNS_PreparedQuery_AllowStale.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.695Z [DEBUG] TestDNS_PreparedQuery_AllowStale.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:40.789Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:703f9154-bcfb-b918-9980-bc400f756280 Address:127.0.0.1:16480}]"
>     writer.go:29: 2020-02-23T02:46:40.790Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.serf.wan: serf: EventMemberJoin: Node-703f9154-bcfb-b918-9980-bc400f756280.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.790Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.serf.lan: serf: EventMemberJoin: Node-703f9154-bcfb-b918-9980-bc400f756280 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:40.791Z [INFO]  TestDNS_PreparedQuery_AllowStale: Started DNS server: address=127.0.0.1:16475 network=udp
>     writer.go:29: 2020-02-23T02:46:40.791Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.raft: entering follower state: follower="Node at 127.0.0.1:16480 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:40.791Z [INFO]  TestDNS_PreparedQuery_AllowStale.server: Adding LAN server: server="Node-703f9154-bcfb-b918-9980-bc400f756280 (Addr: tcp/127.0.0.1:16480) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:40.791Z [INFO]  TestDNS_PreparedQuery_AllowStale.server: Handled event for server in area: event=member-join server=Node-703f9154-bcfb-b918-9980-bc400f756280.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:40.791Z [INFO]  TestDNS_PreparedQuery_AllowStale: Started DNS server: address=127.0.0.1:16475 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.792Z [INFO]  TestDNS_PreparedQuery_AllowStale: Started HTTP server: address=127.0.0.1:16476 network=tcp
>     writer.go:29: 2020-02-23T02:46:40.792Z [INFO]  TestDNS_PreparedQuery_AllowStale: started state syncer
>     writer.go:29: 2020-02-23T02:46:40.828Z [WARN]  TestDNS_PreparedQuery_AllowStale.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:40.828Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.raft: entering candidate state: node="Node at 127.0.0.1:16480 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:40.832Z [DEBUG] TestDNS_PreparedQuery_AllowStale.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:40.832Z [DEBUG] TestDNS_PreparedQuery_AllowStale.server.raft: vote granted: from=703f9154-bcfb-b918-9980-bc400f756280 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:40.832Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:40.832Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.raft: entering leader state: leader="Node at 127.0.0.1:16480 [Leader]"
>     writer.go:29: 2020-02-23T02:46:40.832Z [INFO]  TestDNS_PreparedQuery_AllowStale.server: New leader elected: payload=Node-703f9154-bcfb-b918-9980-bc400f756280
>     writer.go:29: 2020-02-23T02:46:40.832Z [INFO]  TestDNS_PreparedQuery_AllowStale.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:40.839Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:40.847Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:40.847Z [INFO]  TestDNS_PreparedQuery_AllowStale.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:40.847Z [DEBUG] TestDNS_PreparedQuery_AllowStale.server: Skipping self join check for node since the cluster is too small: node=Node-703f9154-bcfb-b918-9980-bc400f756280
>     writer.go:29: 2020-02-23T02:46:40.847Z [INFO]  TestDNS_PreparedQuery_AllowStale.server: member joined, marking health alive: member=Node-703f9154-bcfb-b918-9980-bc400f756280
>     writer.go:29: 2020-02-23T02:46:40.885Z [DEBUG] TestDNS_PreparedQuery_AllowStale: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:40.887Z [INFO]  TestDNS_PreparedQuery_AllowStale: Synced node info
>     writer.go:29: 2020-02-23T02:46:41.222Z [WARN]  TestDNS_PreparedQuery_AllowStale.server: endpoint injected; this should only be used for testing
>     writer.go:29: 2020-02-23T02:46:41.222Z [WARN]  TestDNS_PreparedQuery_AllowStale.dns: Query results too stale, re-requesting
>     writer.go:29: 2020-02-23T02:46:41.222Z [DEBUG] TestDNS_PreparedQuery_AllowStale.dns: request served from client: name=nope.query.consul. type=SRV class=IN latency=80.352µs client=127.0.0.1:41106 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.222Z [INFO]  TestDNS_PreparedQuery_AllowStale: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.222Z [INFO]  TestDNS_PreparedQuery_AllowStale.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.222Z [DEBUG] TestDNS_PreparedQuery_AllowStale.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.222Z [WARN]  TestDNS_PreparedQuery_AllowStale.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.222Z [DEBUG] TestDNS_PreparedQuery_AllowStale.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.225Z [WARN]  TestDNS_PreparedQuery_AllowStale.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: consul server down
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: Stopping server: protocol=DNS address=127.0.0.1:16475 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: Stopping server: protocol=DNS address=127.0.0.1:16475 network=udp
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: Stopping server: protocol=HTTP address=127.0.0.1:16476 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.227Z [INFO]  TestDNS_PreparedQuery_AllowStale: Endpoints down
> === CONT  TestDNS_AddressLookup
> --- PASS: TestDNS_AltDomains_Service (0.30s)
>     writer.go:29: 2020-02-23T02:46:40.998Z [WARN]  TestDNS_AltDomains_Service: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:40.998Z [DEBUG] TestDNS_AltDomains_Service.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:40.999Z [DEBUG] TestDNS_AltDomains_Service.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.147Z [INFO]  TestDNS_AltDomains_Service.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c315da92-854b-e973-ff8e-e1f9cd041338 Address:127.0.0.1:16486}]"
>     writer.go:29: 2020-02-23T02:46:41.148Z [INFO]  TestDNS_AltDomains_Service.server.raft: entering follower state: follower="Node at 127.0.0.1:16486 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.148Z [INFO]  TestDNS_AltDomains_Service.server.serf.wan: serf: EventMemberJoin: Node-c315da92-854b-e973-ff8e-e1f9cd041338.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service.server.serf.lan: serf: EventMemberJoin: Node-c315da92-854b-e973-ff8e-e1f9cd041338 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service: Started DNS server: address=127.0.0.1:16481 network=udp
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service.server: Adding LAN server: server="Node-c315da92-854b-e973-ff8e-e1f9cd041338 (Addr: tcp/127.0.0.1:16486) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service.server: Handled event for server in area: event=member-join server=Node-c315da92-854b-e973-ff8e-e1f9cd041338.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service: Started DNS server: address=127.0.0.1:16481 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service: Started HTTP server: address=127.0.0.1:16482 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.149Z [INFO]  TestDNS_AltDomains_Service: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.186Z [WARN]  TestDNS_AltDomains_Service.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.186Z [INFO]  TestDNS_AltDomains_Service.server.raft: entering candidate state: node="Node at 127.0.0.1:16486 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.190Z [DEBUG] TestDNS_AltDomains_Service.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.190Z [DEBUG] TestDNS_AltDomains_Service.server.raft: vote granted: from=c315da92-854b-e973-ff8e-e1f9cd041338 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.190Z [INFO]  TestDNS_AltDomains_Service.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.190Z [INFO]  TestDNS_AltDomains_Service.server.raft: entering leader state: leader="Node at 127.0.0.1:16486 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.190Z [INFO]  TestDNS_AltDomains_Service.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.190Z [INFO]  TestDNS_AltDomains_Service.server: New leader elected: payload=Node-c315da92-854b-e973-ff8e-e1f9cd041338
>     writer.go:29: 2020-02-23T02:46:41.197Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.211Z [INFO]  TestDNS_AltDomains_Service.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.211Z [INFO]  TestDNS_AltDomains_Service.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.211Z [DEBUG] TestDNS_AltDomains_Service.server: Skipping self join check for node since the cluster is too small: node=Node-c315da92-854b-e973-ff8e-e1f9cd041338
>     writer.go:29: 2020-02-23T02:46:41.211Z [INFO]  TestDNS_AltDomains_Service.server: member joined, marking health alive: member=Node-c315da92-854b-e973-ff8e-e1f9cd041338
>     writer.go:29: 2020-02-23T02:46:41.283Z [DEBUG] TestDNS_AltDomains_Service.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=94.654µs client=127.0.0.1:60024 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.284Z [DEBUG] TestDNS_AltDomains_Service.dns: request served from client: name=db.service.test-domain. type=SRV class=IN latency=55.421µs client=127.0.0.1:60268 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.284Z [DEBUG] TestDNS_AltDomains_Service.dns: request served from client: name=db.service.dc1.consul. type=SRV class=IN latency=51.11µs client=127.0.0.1:57502 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.284Z [DEBUG] TestDNS_AltDomains_Service.dns: request served from client: name=db.service.dc1.test-domain. type=SRV class=IN latency=47.085µs client=127.0.0.1:36641 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.284Z [INFO]  TestDNS_AltDomains_Service: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.284Z [INFO]  TestDNS_AltDomains_Service.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.284Z [DEBUG] TestDNS_AltDomains_Service.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.284Z [WARN]  TestDNS_AltDomains_Service.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.284Z [ERROR] TestDNS_AltDomains_Service.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:41.284Z [DEBUG] TestDNS_AltDomains_Service.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.286Z [WARN]  TestDNS_AltDomains_Service.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.287Z [INFO]  TestDNS_AltDomains_Service.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.287Z [INFO]  TestDNS_AltDomains_Service: consul server down
>     writer.go:29: 2020-02-23T02:46:41.287Z [INFO]  TestDNS_AltDomains_Service: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.288Z [INFO]  TestDNS_AltDomains_Service: Stopping server: protocol=DNS address=127.0.0.1:16481 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.288Z [INFO]  TestDNS_AltDomains_Service: Stopping server: protocol=DNS address=127.0.0.1:16481 network=udp
>     writer.go:29: 2020-02-23T02:46:41.288Z [INFO]  TestDNS_AltDomains_Service: Stopping server: protocol=HTTP address=127.0.0.1:16482 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.288Z [INFO]  TestDNS_AltDomains_Service: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.288Z [INFO]  TestDNS_AltDomains_Service: Endpoints down
> === CONT  TestDNS_ServiceLookup_FilterACL
> === RUN   TestDNS_ServiceLookup_FilterACL/ACLToken_==_root
> === RUN   TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous
> --- PASS: TestDNS_NonExistingLookup (0.39s)
>     writer.go:29: 2020-02-23T02:46:41.212Z [WARN]  TestDNS_NonExistingLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.213Z [DEBUG] TestDNS_NonExistingLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.213Z [DEBUG] TestDNS_NonExistingLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.223Z [INFO]  TestDNS_NonExistingLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a Address:127.0.0.1:16492}]"
>     writer.go:29: 2020-02-23T02:46:41.223Z [INFO]  TestDNS_NonExistingLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16492 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.223Z [INFO]  TestDNS_NonExistingLookup.server.serf.wan: serf: EventMemberJoin: Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.224Z [INFO]  TestDNS_NonExistingLookup.server.serf.lan: serf: EventMemberJoin: Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.224Z [INFO]  TestDNS_NonExistingLookup.server: Adding LAN server: server="Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a (Addr: tcp/127.0.0.1:16492) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.224Z [INFO]  TestDNS_NonExistingLookup: Started DNS server: address=127.0.0.1:16487 network=udp
>     writer.go:29: 2020-02-23T02:46:41.224Z [INFO]  TestDNS_NonExistingLookup.server: Handled event for server in area: event=member-join server=Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.224Z [INFO]  TestDNS_NonExistingLookup: Started DNS server: address=127.0.0.1:16487 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.225Z [INFO]  TestDNS_NonExistingLookup: Started HTTP server: address=127.0.0.1:16488 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.225Z [INFO]  TestDNS_NonExistingLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.262Z [WARN]  TestDNS_NonExistingLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.262Z [INFO]  TestDNS_NonExistingLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16492 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.267Z [DEBUG] TestDNS_NonExistingLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.267Z [DEBUG] TestDNS_NonExistingLookup.server.raft: vote granted: from=d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.267Z [INFO]  TestDNS_NonExistingLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.267Z [INFO]  TestDNS_NonExistingLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16492 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.267Z [INFO]  TestDNS_NonExistingLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.267Z [INFO]  TestDNS_NonExistingLookup.server: New leader elected: payload=Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a
>     writer.go:29: 2020-02-23T02:46:41.273Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.280Z [INFO]  TestDNS_NonExistingLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.280Z [INFO]  TestDNS_NonExistingLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.280Z [DEBUG] TestDNS_NonExistingLookup.server: Skipping self join check for node since the cluster is too small: node=Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a
>     writer.go:29: 2020-02-23T02:46:41.281Z [INFO]  TestDNS_NonExistingLookup.server: member joined, marking health alive: member=Node-d76ac2f7-8b51-bc4c-7889-9c23eea3ed9a
>     writer.go:29: 2020-02-23T02:46:41.471Z [DEBUG] TestDNS_NonExistingLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:41.473Z [INFO]  TestDNS_NonExistingLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:41.473Z [DEBUG] TestDNS_NonExistingLookup: Node info in sync
>     writer.go:29: 2020-02-23T02:46:41.591Z [WARN]  TestDNS_NonExistingLookup.dns: QName invalid: qname=nonexisting.
>     writer.go:29: 2020-02-23T02:46:41.591Z [DEBUG] TestDNS_NonExistingLookup.dns: request served from client: name=nonexisting.consul. type=ANY class=IN latency=81.076µs client=127.0.0.1:48105 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.592Z [INFO]  TestDNS_NonExistingLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.592Z [INFO]  TestDNS_NonExistingLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.592Z [DEBUG] TestDNS_NonExistingLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.592Z [WARN]  TestDNS_NonExistingLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.592Z [DEBUG] TestDNS_NonExistingLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.594Z [WARN]  TestDNS_NonExistingLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: Stopping server: protocol=DNS address=127.0.0.1:16487 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: Stopping server: protocol=DNS address=127.0.0.1:16487 network=udp
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: Stopping server: protocol=HTTP address=127.0.0.1:16488 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.595Z [INFO]  TestDNS_NonExistingLookup: Endpoints down
> === CONT  TestDNS_ServiceLookup_SRV_RFC_TCP_Default
> --- PASS: TestDNS_NonExistingLookupEmptyAorAAAA (0.46s)
>     writer.go:29: 2020-02-23T02:46:41.164Z [WARN]  TestDNS_NonExistingLookupEmptyAorAAAA: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.164Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.165Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.179Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b8d34ab5-9b36-ec07-0409-31dcf316e8c5 Address:127.0.0.1:16498}]"
>     writer.go:29: 2020-02-23T02:46:41.179Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: entering follower state: follower="Node at 127.0.0.1:16498 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.179Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.serf.wan: serf: EventMemberJoin: Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.180Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.serf.lan: serf: EventMemberJoin: Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.180Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Started DNS server: address=127.0.0.1:16493 network=udp
>     writer.go:29: 2020-02-23T02:46:41.180Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server: Adding LAN server: server="Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5 (Addr: tcp/127.0.0.1:16498) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.180Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server: Handled event for server in area: event=member-join server=Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.180Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Started DNS server: address=127.0.0.1:16493 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.181Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Started HTTP server: address=127.0.0.1:16494 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.181Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.244Z [WARN]  TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.244Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: entering candidate state: node="Node at 127.0.0.1:16498 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.248Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.248Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: vote granted: from=b8d34ab5-9b36-ec07-0409-31dcf316e8c5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.248Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.248Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.raft: entering leader state: leader="Node at 127.0.0.1:16498 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.248Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.248Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server: New leader elected: payload=Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5
>     writer.go:29: 2020-02-23T02:46:41.257Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.264Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.264Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.264Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.server: Skipping self join check for node since the cluster is too small: node=Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5
>     writer.go:29: 2020-02-23T02:46:41.264Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server: member joined, marking health alive: member=Node-b8d34ab5-9b36-ec07-0409-31dcf316e8c5
>     writer.go:29: 2020-02-23T02:46:41.394Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:41.397Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Synced node info
>     writer.go:29: 2020-02-23T02:46:41.397Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA: Node info in sync
>     writer.go:29: 2020-02-23T02:46:41.607Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.dns: request served from client: name=webv4.service.consul. type=AAAA class=IN latency=89.576µs client=127.0.0.1:40269 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.607Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.dns: request served from client: name=webv4.query.consul. type=AAAA class=IN latency=61.012µs client=127.0.0.1:44870 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.607Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.dns: request served from client: name=webv6.service.consul. type=A class=IN latency=51.129µs client=127.0.0.1:60743 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.607Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.dns: request served from client: name=webv6.query.consul. type=A class=IN latency=46.393µs client=127.0.0.1:56815 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.607Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.607Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.607Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.607Z [WARN]  TestDNS_NonExistingLookupEmptyAorAAAA.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.607Z [DEBUG] TestDNS_NonExistingLookupEmptyAorAAAA.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.611Z [WARN]  TestDNS_NonExistingLookupEmptyAorAAAA.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: consul server down
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Stopping server: protocol=DNS address=127.0.0.1:16493 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Stopping server: protocol=DNS address=127.0.0.1:16493 network=udp
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Stopping server: protocol=HTTP address=127.0.0.1:16494 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.613Z [INFO]  TestDNS_NonExistingLookupEmptyAorAAAA: Endpoints down
> === CONT  TestDNS_PreparedQuery_TTL
> --- PASS: TestDNS_AddressLookup (0.42s)
>     writer.go:29: 2020-02-23T02:46:41.234Z [WARN]  TestDNS_AddressLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.234Z [DEBUG] TestDNS_AddressLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.234Z [DEBUG] TestDNS_AddressLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.251Z [INFO]  TestDNS_AddressLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fb1a856e-6d59-ef5e-afd0-74556e969ea1 Address:127.0.0.1:16504}]"
>     writer.go:29: 2020-02-23T02:46:41.251Z [INFO]  TestDNS_AddressLookup.server.serf.wan: serf: EventMemberJoin: Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.252Z [INFO]  TestDNS_AddressLookup.server.serf.lan: serf: EventMemberJoin: Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.252Z [INFO]  TestDNS_AddressLookup: Started DNS server: address=127.0.0.1:16499 network=udp
>     writer.go:29: 2020-02-23T02:46:41.252Z [INFO]  TestDNS_AddressLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16504 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.252Z [INFO]  TestDNS_AddressLookup.server: Adding LAN server: server="Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1 (Addr: tcp/127.0.0.1:16504) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.252Z [INFO]  TestDNS_AddressLookup.server: Handled event for server in area: event=member-join server=Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.252Z [INFO]  TestDNS_AddressLookup: Started DNS server: address=127.0.0.1:16499 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.253Z [INFO]  TestDNS_AddressLookup: Started HTTP server: address=127.0.0.1:16500 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.253Z [INFO]  TestDNS_AddressLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.303Z [WARN]  TestDNS_AddressLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.303Z [INFO]  TestDNS_AddressLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16504 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.306Z [DEBUG] TestDNS_AddressLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.306Z [DEBUG] TestDNS_AddressLookup.server.raft: vote granted: from=fb1a856e-6d59-ef5e-afd0-74556e969ea1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.306Z [INFO]  TestDNS_AddressLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.306Z [INFO]  TestDNS_AddressLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16504 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.307Z [INFO]  TestDNS_AddressLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.309Z [INFO]  TestDNS_AddressLookup.server: New leader elected: payload=Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1
>     writer.go:29: 2020-02-23T02:46:41.313Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.321Z [INFO]  TestDNS_AddressLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.321Z [INFO]  TestDNS_AddressLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.321Z [DEBUG] TestDNS_AddressLookup.server: Skipping self join check for node since the cluster is too small: node=Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1
>     writer.go:29: 2020-02-23T02:46:41.321Z [INFO]  TestDNS_AddressLookup.server: member joined, marking health alive: member=Node-fb1a856e-6d59-ef5e-afd0-74556e969ea1
>     writer.go:29: 2020-02-23T02:46:41.625Z [DEBUG] TestDNS_AddressLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:41.629Z [INFO]  TestDNS_AddressLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:41.637Z [DEBUG] TestDNS_AddressLookup.dns: request served from client: name=7f000001.addr.dc1.consul. type=SRV class=IN latency=22.552µs client=127.0.0.1:54095 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.637Z [INFO]  TestDNS_AddressLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.637Z [INFO]  TestDNS_AddressLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.637Z [DEBUG] TestDNS_AddressLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.637Z [WARN]  TestDNS_AddressLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.637Z [DEBUG] TestDNS_AddressLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.639Z [WARN]  TestDNS_AddressLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.644Z [INFO]  TestDNS_AddressLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.644Z [INFO]  TestDNS_AddressLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:41.644Z [INFO]  TestDNS_AddressLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.644Z [INFO]  TestDNS_AddressLookup: Stopping server: protocol=DNS address=127.0.0.1:16499 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.644Z [INFO]  TestDNS_AddressLookup: Stopping server: protocol=DNS address=127.0.0.1:16499 network=udp
>     writer.go:29: 2020-02-23T02:46:41.644Z [INFO]  TestDNS_AddressLookup: Stopping server: protocol=HTTP address=127.0.0.1:16500 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.645Z [INFO]  TestDNS_AddressLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.645Z [INFO]  TestDNS_AddressLookup: Endpoints down
> === CONT  TestDNS_ServiceLookup_TTL
> --- PASS: TestDNS_ServiceLookup_FilterACL (0.56s)
>     --- PASS: TestDNS_ServiceLookup_FilterACL/ACLToken_==_root (0.24s)
>         writer.go:29: 2020-02-23T02:46:41.332Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:41.332Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:41.332Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:41.333Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:41.342Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d9cedc4c-9613-9236-0148-e2c3c583aa92 Address:127.0.0.1:16510}]"
>         writer.go:29: 2020-02-23T02:46:41.342Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: entering follower state: follower="Node at 127.0.0.1:16510 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:41.343Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.serf.wan: serf: EventMemberJoin: Node-d9cedc4c-9613-9236-0148-e2c3c583aa92.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:41.343Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.serf.lan: serf: EventMemberJoin: Node-d9cedc4c-9613-9236-0148-e2c3c583aa92 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:41.344Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Handled event for server in area: event=member-join server=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:41.344Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Adding LAN server: server="Node-d9cedc4c-9613-9236-0148-e2c3c583aa92 (Addr: tcp/127.0.0.1:16510) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:41.344Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Started DNS server: address=127.0.0.1:16505 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.344Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Started DNS server: address=127.0.0.1:16505 network=udp
>         writer.go:29: 2020-02-23T02:46:41.344Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Started HTTP server: address=127.0.0.1:16506 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.344Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: started state syncer
>         writer.go:29: 2020-02-23T02:46:41.408Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:41.408Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: entering candidate state: node="Node at 127.0.0.1:16510 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:41.412Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:41.412Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: vote granted: from=d9cedc4c-9613-9236-0148-e2c3c583aa92 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:41.412Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:41.412Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.raft: entering leader state: leader="Node at 127.0.0.1:16510 [Leader]"
>         writer.go:29: 2020-02-23T02:46:41.412Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:41.412Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: New leader elected: payload=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.414Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:41.415Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:41.415Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:41.418Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:41.421Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:41.421Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:41.421Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:41.421Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.serf.lan: serf: EventMemberUpdate: Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.421Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.serf.wan: serf: EventMemberUpdate: Node-d9cedc4c-9613-9236-0148-e2c3c583aa92.dc1
>         writer.go:29: 2020-02-23T02:46:41.421Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Handled event for server in area: event=member-update server=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:41.425Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:41.432Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:41.432Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:41.432Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Skipping self join check for node since the cluster is too small: node=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.432Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: member joined, marking health alive: member=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.435Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: Skipping self join check for node since the cluster is too small: node=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.511Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:41.514Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Synced node info
>         writer.go:29: 2020-02-23T02:46:41.514Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Node info in sync
>         writer.go:29: 2020-02-23T02:46:41.517Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.acl: dropping node from result due to ACLs: node=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.517Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.acl: dropping node from result due to ACLs: node=Node-d9cedc4c-9613-9236-0148-e2c3c583aa92
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.dns: request served from client: name=foo.service.consul. type=A class=IN latency=139.901µs client=127.0.0.1:40956 client_network=udp
>         writer.go:29: 2020-02-23T02:46:41.520Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:41.520Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:41.520Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:41.520Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:41.522Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: consul server down
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: shutdown complete
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Stopping server: protocol=DNS address=127.0.0.1:16505 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Stopping server: protocol=DNS address=127.0.0.1:16505 network=udp
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Stopping server: protocol=HTTP address=127.0.0.1:16506 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:41.524Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_root: Endpoints down
>     --- PASS: TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous (0.33s)
>         writer.go:29: 2020-02-23T02:46:41.531Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:41.531Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:41.532Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:41.532Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:41.541Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cfd356b2-e61d-18a2-4c53-e0bb28f42ea2 Address:127.0.0.1:16516}]"
>         writer.go:29: 2020-02-23T02:46:41.541Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: entering follower state: follower="Node at 127.0.0.1:16516 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:41.542Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.wan: serf: EventMemberJoin: Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:41.542Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.lan: serf: EventMemberJoin: Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:41.542Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Handled event for server in area: event=member-join server=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:41.542Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Adding LAN server: server="Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2 (Addr: tcp/127.0.0.1:16516) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:41.542Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Started DNS server: address=127.0.0.1:16511 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.543Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Started DNS server: address=127.0.0.1:16511 network=udp
>         writer.go:29: 2020-02-23T02:46:41.543Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Started HTTP server: address=127.0.0.1:16512 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.543Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: started state syncer
>         writer.go:29: 2020-02-23T02:46:41.588Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:41.588Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: entering candidate state: node="Node at 127.0.0.1:16516 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:41.591Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:41.591Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: vote granted: from=cfd356b2-e61d-18a2-4c53-e0bb28f42ea2 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:41.591Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:41.591Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.raft: entering leader state: leader="Node at 127.0.0.1:16516 [Leader]"
>         writer.go:29: 2020-02-23T02:46:41.591Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:41.591Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: New leader elected: payload=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.592Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:41.594Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:41.600Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:41.600Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:41.606Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:41.606Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:41.606Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.lan: serf: EventMemberUpdate: Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.wan: serf: EventMemberUpdate: Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2.dc1
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:41.612Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: transitioning out of legacy ACL mode
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Handled event for server in area: event=member-update server=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.lan: serf: EventMemberUpdate: Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.wan: serf: EventMemberUpdate: Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2.dc1
>         writer.go:29: 2020-02-23T02:46:41.612Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Handled event for server in area: event=member-update server=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:41.626Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:41.635Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:41.635Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:41.635Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Skipping self join check for node since the cluster is too small: node=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.635Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: member joined, marking health alive: member=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.637Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Skipping self join check for node since the cluster is too small: node=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.637Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: Skipping self join check for node since the cluster is too small: node=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.681Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.acl: dropping check from result due to ACLs: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:41.682Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Node info update blocked by ACLs: node=cfd356b2-e61d-18a2-4c53-e0bb28f42ea2 accessorID=00000000-0000-0000-0000-000000000002
>         writer.go:29: 2020-02-23T02:46:41.841Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.acl: dropping node from result due to ACLs: node=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.841Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.acl: dropping node from result due to ACLs: node=Node-cfd356b2-e61d-18a2-4c53-e0bb28f42ea2
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.acl: dropping node from result due to ACLs: node=foo
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.dns: request served from client: name=foo.service.consul. type=A class=IN latency=156.573µs client=127.0.0.1:34730 client_network=udp
>         writer.go:29: 2020-02-23T02:46:41.846Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:41.846Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:41.846Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:41.846Z [DEBUG] TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:41.847Z [WARN]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: consul server down
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: shutdown complete
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Stopping server: protocol=DNS address=127.0.0.1:16511 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Stopping server: protocol=DNS address=127.0.0.1:16511 network=udp
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Stopping server: protocol=HTTP address=127.0.0.1:16512 network=tcp
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:41.849Z [INFO]  TestDNS_ServiceLookup_FilterACL/ACLToken_==_anonymous: Endpoints down
> === CONT  TestDNS_NodeLookup_TTL
> === RUN   TestDNS_PreparedQuery_TTL/db.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/db-ttl.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/dblb.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/dblb-ttl.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/dk.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/dk-ttl.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/api.query.consul.
> === RUN   TestDNS_PreparedQuery_TTL/api-ttl.query.consul.
> --- PASS: TestDNS_PreparedQuery_TTL (0.28s)
>     writer.go:29: 2020-02-23T02:46:41.625Z [WARN]  TestDNS_PreparedQuery_TTL: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.625Z [DEBUG] TestDNS_PreparedQuery_TTL.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.625Z [DEBUG] TestDNS_PreparedQuery_TTL.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.639Z [INFO]  TestDNS_PreparedQuery_TTL.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c438bb92-9186-5d09-cf0f-bbc34dace536 Address:127.0.0.1:16528}]"
>     writer.go:29: 2020-02-23T02:46:41.640Z [INFO]  TestDNS_PreparedQuery_TTL.server.serf.wan: serf: EventMemberJoin: Node-c438bb92-9186-5d09-cf0f-bbc34dace536.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.640Z [INFO]  TestDNS_PreparedQuery_TTL.server.serf.lan: serf: EventMemberJoin: Node-c438bb92-9186-5d09-cf0f-bbc34dace536 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.641Z [INFO]  TestDNS_PreparedQuery_TTL: Started DNS server: address=127.0.0.1:16523 network=udp
>     writer.go:29: 2020-02-23T02:46:41.641Z [INFO]  TestDNS_PreparedQuery_TTL.server.raft: entering follower state: follower="Node at 127.0.0.1:16528 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.642Z [INFO]  TestDNS_PreparedQuery_TTL.server: Adding LAN server: server="Node-c438bb92-9186-5d09-cf0f-bbc34dace536 (Addr: tcp/127.0.0.1:16528) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.642Z [INFO]  TestDNS_PreparedQuery_TTL.server: Handled event for server in area: event=member-join server=Node-c438bb92-9186-5d09-cf0f-bbc34dace536.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.642Z [INFO]  TestDNS_PreparedQuery_TTL: Started DNS server: address=127.0.0.1:16523 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.643Z [INFO]  TestDNS_PreparedQuery_TTL: Started HTTP server: address=127.0.0.1:16524 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.643Z [INFO]  TestDNS_PreparedQuery_TTL: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.697Z [WARN]  TestDNS_PreparedQuery_TTL.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.697Z [INFO]  TestDNS_PreparedQuery_TTL.server.raft: entering candidate state: node="Node at 127.0.0.1:16528 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.701Z [DEBUG] TestDNS_PreparedQuery_TTL.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.701Z [DEBUG] TestDNS_PreparedQuery_TTL.server.raft: vote granted: from=c438bb92-9186-5d09-cf0f-bbc34dace536 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.701Z [INFO]  TestDNS_PreparedQuery_TTL.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.701Z [INFO]  TestDNS_PreparedQuery_TTL.server.raft: entering leader state: leader="Node at 127.0.0.1:16528 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.701Z [INFO]  TestDNS_PreparedQuery_TTL.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.701Z [INFO]  TestDNS_PreparedQuery_TTL.server: New leader elected: payload=Node-c438bb92-9186-5d09-cf0f-bbc34dace536
>     writer.go:29: 2020-02-23T02:46:41.708Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.716Z [INFO]  TestDNS_PreparedQuery_TTL.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.716Z [INFO]  TestDNS_PreparedQuery_TTL.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.716Z [DEBUG] TestDNS_PreparedQuery_TTL.server: Skipping self join check for node since the cluster is too small: node=Node-c438bb92-9186-5d09-cf0f-bbc34dace536
>     writer.go:29: 2020-02-23T02:46:41.716Z [INFO]  TestDNS_PreparedQuery_TTL.server: member joined, marking health alive: member=Node-c438bb92-9186-5d09-cf0f-bbc34dace536
>     writer.go:29: 2020-02-23T02:46:41.887Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=db.query.consul. type=SRV class=IN latency=97.217µs client=127.0.0.1:42806 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/db.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.887Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=db-ttl.query.consul. type=SRV class=IN latency=62.574µs client=127.0.0.1:50859 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/db-ttl.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.887Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=dblb.query.consul. type=SRV class=IN latency=52.967µs client=127.0.0.1:36975 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/dblb.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.887Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=dblb-ttl.query.consul. type=SRV class=IN latency=52.358µs client=127.0.0.1:58205 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/dblb-ttl.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.887Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=dk.query.consul. type=SRV class=IN latency=51.088µs client=127.0.0.1:41042 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/dk.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.887Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=dk-ttl.query.consul. type=SRV class=IN latency=51.331µs client=127.0.0.1:59030 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/dk-ttl.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.888Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=api.query.consul. type=SRV class=IN latency=48.059µs client=127.0.0.1:35479 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/api.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.888Z [DEBUG] TestDNS_PreparedQuery_TTL.dns: request served from client: name=api-ttl.query.consul. type=SRV class=IN latency=52.644µs client=127.0.0.1:52055 client_network=udp
>     --- PASS: TestDNS_PreparedQuery_TTL/api-ttl.query.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:41.888Z [INFO]  TestDNS_PreparedQuery_TTL: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.888Z [INFO]  TestDNS_PreparedQuery_TTL.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.888Z [DEBUG] TestDNS_PreparedQuery_TTL.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.888Z [WARN]  TestDNS_PreparedQuery_TTL.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.888Z [ERROR] TestDNS_PreparedQuery_TTL.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:41.888Z [DEBUG] TestDNS_PreparedQuery_TTL.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.890Z [WARN]  TestDNS_PreparedQuery_TTL.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: consul server down
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: Stopping server: protocol=DNS address=127.0.0.1:16523 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: Stopping server: protocol=DNS address=127.0.0.1:16523 network=udp
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: Stopping server: protocol=HTTP address=127.0.0.1:16524 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.891Z [INFO]  TestDNS_PreparedQuery_TTL: Endpoints down
> === CONT  TestDNS_ServiceLookup_ServiceAddress_CNAME
> --- PASS: TestDNS_NodeLookup_TTL (0.12s)
>     writer.go:29: 2020-02-23T02:46:41.858Z [WARN]  TestDNS_NodeLookup_TTL: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.858Z [DEBUG] TestDNS_NodeLookup_TTL.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.858Z [DEBUG] TestDNS_NodeLookup_TTL.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.872Z [INFO]  TestDNS_NodeLookup_TTL.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3335a968-afef-3b84-1cae-5fd3cb135a25 Address:127.0.0.1:16534}]"
>     writer.go:29: 2020-02-23T02:46:41.873Z [INFO]  TestDNS_NodeLookup_TTL.server.serf.wan: serf: EventMemberJoin: Node-3335a968-afef-3b84-1cae-5fd3cb135a25.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.873Z [INFO]  TestDNS_NodeLookup_TTL.server.serf.lan: serf: EventMemberJoin: Node-3335a968-afef-3b84-1cae-5fd3cb135a25 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.873Z [DEBUG] TestDNS_NodeLookup_TTL.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:41.873Z [INFO]  TestDNS_NodeLookup_TTL: Started DNS server: address=127.0.0.1:16529 network=udp
>     writer.go:29: 2020-02-23T02:46:41.873Z [INFO]  TestDNS_NodeLookup_TTL.server.raft: entering follower state: follower="Node at 127.0.0.1:16534 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.874Z [INFO]  TestDNS_NodeLookup_TTL.server: Adding LAN server: server="Node-3335a968-afef-3b84-1cae-5fd3cb135a25 (Addr: tcp/127.0.0.1:16534) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.874Z [INFO]  TestDNS_NodeLookup_TTL.server: Handled event for server in area: event=member-join server=Node-3335a968-afef-3b84-1cae-5fd3cb135a25.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.874Z [DEBUG] TestDNS_NodeLookup_TTL.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:41.874Z [INFO]  TestDNS_NodeLookup_TTL: Started DNS server: address=127.0.0.1:16529 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.874Z [INFO]  TestDNS_NodeLookup_TTL: Started HTTP server: address=127.0.0.1:16530 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.874Z [INFO]  TestDNS_NodeLookup_TTL: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.929Z [WARN]  TestDNS_NodeLookup_TTL.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.929Z [INFO]  TestDNS_NodeLookup_TTL.server.raft: entering candidate state: node="Node at 127.0.0.1:16534 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.932Z [DEBUG] TestDNS_NodeLookup_TTL.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.932Z [DEBUG] TestDNS_NodeLookup_TTL.server.raft: vote granted: from=3335a968-afef-3b84-1cae-5fd3cb135a25 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.932Z [INFO]  TestDNS_NodeLookup_TTL.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.932Z [INFO]  TestDNS_NodeLookup_TTL.server.raft: entering leader state: leader="Node at 127.0.0.1:16534 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.932Z [INFO]  TestDNS_NodeLookup_TTL.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.932Z [INFO]  TestDNS_NodeLookup_TTL.server: New leader elected: payload=Node-3335a968-afef-3b84-1cae-5fd3cb135a25
>     writer.go:29: 2020-02-23T02:46:41.939Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.946Z [INFO]  TestDNS_NodeLookup_TTL.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.946Z [INFO]  TestDNS_NodeLookup_TTL.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.946Z [DEBUG] TestDNS_NodeLookup_TTL.server: Skipping self join check for node since the cluster is too small: node=Node-3335a968-afef-3b84-1cae-5fd3cb135a25
>     writer.go:29: 2020-02-23T02:46:41.946Z [INFO]  TestDNS_NodeLookup_TTL.server: member joined, marking health alive: member=Node-3335a968-afef-3b84-1cae-5fd3cb135a25
>     writer.go:29: 2020-02-23T02:46:41.964Z [DEBUG] TestDNS_NodeLookup_TTL.dns: request served from client: name=foo.node.consul. type=ANY class=IN latency=78.806µs client=127.0.0.1:38159 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.966Z [DEBUG] TestDNS_NodeLookup_TTL.dns: request served from client: name=bar.node.consul. type=ANY class=IN latency=70.683µs client=127.0.0.1:42291 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.968Z [DEBUG] TestDNS_NodeLookup_TTL.dns: cname recurse RTT for name: name=www.google.com. rtt=49.179µs
>     writer.go:29: 2020-02-23T02:46:41.968Z [DEBUG] TestDNS_NodeLookup_TTL.dns: request served from client: name=google.node.consul. type=ANY class=IN latency=165.805µs client=127.0.0.1:55725 client_network=udp
>     writer.go:29: 2020-02-23T02:46:41.968Z [INFO]  TestDNS_NodeLookup_TTL: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:41.968Z [INFO]  TestDNS_NodeLookup_TTL.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:41.968Z [DEBUG] TestDNS_NodeLookup_TTL.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.968Z [WARN]  TestDNS_NodeLookup_TTL.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.968Z [ERROR] TestDNS_NodeLookup_TTL.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:41.968Z [DEBUG] TestDNS_NodeLookup_TTL.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.970Z [WARN]  TestDNS_NodeLookup_TTL.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: consul server down
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: shutdown complete
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: Stopping server: protocol=DNS address=127.0.0.1:16529 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: Stopping server: protocol=DNS address=127.0.0.1:16529 network=udp
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: Stopping server: protocol=HTTP address=127.0.0.1:16530 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:41.972Z [INFO]  TestDNS_NodeLookup_TTL: Endpoints down
> === CONT  TestDNS_ServiceLookup_AnswerLimits
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{0_0_0_0_0_0_0_0_0_0_0}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{0_0_0_0_0_0_0_0_0_0_0}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{0_0_0_0_0_0_0_0_0_0_0}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{0_0_0_0_0_0_0_0_0_0_0}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{0_0_0_0_0_0_0_0_0_0_0}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{0_0_0_0_0_0_0_0_0_0_0}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{1_1_1_1_1_1_1_1_1_1_1}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{1_1_1_1_1_1_1_1_1_1_1}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{1_1_1_1_1_1_1_1_1_1_1}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{1_1_1_1_1_1_1_1_1_1_1}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{1_1_1_1_1_1_1_1_1_1_1}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{1_1_1_1_1_1_1_1_1_1_1}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{2_2_2_2_2_2_2_2_2_2_2}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{2_2_2_2_2_2_2_2_2_2_2}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{2_2_2_2_2_2_2_2_2_2_2}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{2_2_2_2_2_2_2_2_2_2_2}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{2_2_2_2_2_2_2_2_2_2_2}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{2_2_2_2_2_2_2_2_2_2_2}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{3_3_3_3_3_3_3_3_3_3_3}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{3_3_3_3_3_3_3_3_3_3_3}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{3_3_3_3_3_3_3_3_3_3_3}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{3_3_3_3_3_3_3_3_3_3_3}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{3_3_3_3_3_3_3_3_3_3_3}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{3_3_3_3_3_3_3_3_3_3_3}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{4_4_4_4_4_4_4_4_4_4_4}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{4_4_4_4_4_4_4_4_4_4_4}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{4_4_4_4_4_4_4_4_4_4_4}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{4_4_4_4_4_4_4_4_4_4_4}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{4_4_4_4_4_4_4_4_4_4_4}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{4_4_4_4_4_4_4_4_4_4_4}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{5_5_5_5_5_5_5_5_5_5_5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{5_5_5_5_5_5_5_5_5_5_5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{5_5_5_5_5_5_5_5_5_5_5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{5_5_5_5_5_5_5_5_5_5_5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{5_5_5_5_5_5_5_5_5_5_5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{5_5_5_5_5_5_5_5_5_5_5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{6_6_6_6_6_6_6_5_6_6_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{6_6_6_6_6_6_6_5_6_6_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{6_6_6_6_6_6_6_5_6_6_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{6_6_6_6_6_6_6_5_6_6_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{6_6_6_6_6_6_6_5_6_6_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{6_6_6_6_6_6_6_5_6_6_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{7_7_7_7_6_7_7_5_7_7_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{7_7_7_7_6_7_7_5_7_7_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{7_7_7_7_6_7_7_5_7_7_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{7_7_7_7_6_7_7_5_7_7_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{7_7_7_7_6_7_7_5_7_7_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{7_7_7_7_6_7_7_5_7_7_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{8_8_8_8_6_8_8_5_8_8_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{8_8_8_8_6_8_8_5_8_8_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{8_8_8_8_6_8_8_5_8_8_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{8_8_8_8_6_8_8_5_8_8_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{8_8_8_8_6_8_8_5_8_8_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{8_8_8_8_6_8_8_5_8_8_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{9_9_8_8_6_8_8_5_8_8_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{9_9_8_8_6_8_8_5_8_8_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{9_9_8_8_6_8_8_5_8_8_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{9_9_8_8_6_8_8_5_8_8_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{9_9_8_8_6_8_8_5_8_8_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{9_9_8_8_6_8_8_5_8_8_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{20_20_8_8_6_8_8_5_8_-5_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{20_20_8_8_6_8_8_5_8_-5_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{20_20_8_8_6_8_8_5_8_-5_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{20_20_8_8_6_8_8_5_8_-5_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{20_20_8_8_6_8_8_5_8_-5_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{20_20_8_8_6_8_8_5_8_-5_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/A_lookup_{30_30_8_8_6_8_8_5_8_-5_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/A_lookup_{30_30_8_8_6_8_8_5_8_-5_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{30_30_8_8_6_8_8_5_8_-5_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/AAAA_lookup_{30_30_8_8_6_8_8_5_8_-5_-5}
> === RUN   TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{30_30_8_8_6_8_8_5_8_-5_-5}
> === PAUSE TestDNS_ServiceLookup_AnswerLimits/ANY_lookup_{30_30_8_8_6_8_8_5_8_-5_-5}
> === CONT  TestDNS_ServiceLookup_LargeResponses
> === RUN   TestDNS_ServiceLookup_TTL/db.service.consul.
> === RUN   TestDNS_ServiceLookup_TTL/dblb.service.consul.
> === RUN   TestDNS_ServiceLookup_TTL/dk.service.consul.
> === RUN   TestDNS_ServiceLookup_TTL/api.service.consul.
> --- PASS: TestDNS_ServiceLookup_TTL (0.37s)
>     writer.go:29: 2020-02-23T02:46:41.660Z [WARN]  TestDNS_ServiceLookup_TTL: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.660Z [DEBUG] TestDNS_ServiceLookup_TTL.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.661Z [DEBUG] TestDNS_ServiceLookup_TTL.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.669Z [INFO]  TestDNS_ServiceLookup_TTL.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:248e675e-5299-c877-9fe8-89692558cab0 Address:127.0.0.1:16540}]"
>     writer.go:29: 2020-02-23T02:46:41.670Z [INFO]  TestDNS_ServiceLookup_TTL.server.serf.wan: serf: EventMemberJoin: Node-248e675e-5299-c877-9fe8-89692558cab0.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.670Z [INFO]  TestDNS_ServiceLookup_TTL.server.serf.lan: serf: EventMemberJoin: Node-248e675e-5299-c877-9fe8-89692558cab0 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.670Z [INFO]  TestDNS_ServiceLookup_TTL: Started DNS server: address=127.0.0.1:16535 network=udp
>     writer.go:29: 2020-02-23T02:46:41.670Z [INFO]  TestDNS_ServiceLookup_TTL.server.raft: entering follower state: follower="Node at 127.0.0.1:16540 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.671Z [INFO]  TestDNS_ServiceLookup_TTL.server: Adding LAN server: server="Node-248e675e-5299-c877-9fe8-89692558cab0 (Addr: tcp/127.0.0.1:16540) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.671Z [INFO]  TestDNS_ServiceLookup_TTL.server: Handled event for server in area: event=member-join server=Node-248e675e-5299-c877-9fe8-89692558cab0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.671Z [INFO]  TestDNS_ServiceLookup_TTL: Started DNS server: address=127.0.0.1:16535 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.671Z [INFO]  TestDNS_ServiceLookup_TTL: Started HTTP server: address=127.0.0.1:16536 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.671Z [INFO]  TestDNS_ServiceLookup_TTL: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.708Z [WARN]  TestDNS_ServiceLookup_TTL.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.708Z [INFO]  TestDNS_ServiceLookup_TTL.server.raft: entering candidate state: node="Node at 127.0.0.1:16540 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.712Z [DEBUG] TestDNS_ServiceLookup_TTL.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.712Z [DEBUG] TestDNS_ServiceLookup_TTL.server.raft: vote granted: from=248e675e-5299-c877-9fe8-89692558cab0 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.712Z [INFO]  TestDNS_ServiceLookup_TTL.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.712Z [INFO]  TestDNS_ServiceLookup_TTL.server.raft: entering leader state: leader="Node at 127.0.0.1:16540 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.712Z [INFO]  TestDNS_ServiceLookup_TTL.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.712Z [INFO]  TestDNS_ServiceLookup_TTL.server: New leader elected: payload=Node-248e675e-5299-c877-9fe8-89692558cab0
>     writer.go:29: 2020-02-23T02:46:41.721Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.728Z [INFO]  TestDNS_ServiceLookup_TTL.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.728Z [INFO]  TestDNS_ServiceLookup_TTL.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.728Z [DEBUG] TestDNS_ServiceLookup_TTL.server: Skipping self join check for node since the cluster is too small: node=Node-248e675e-5299-c877-9fe8-89692558cab0
>     writer.go:29: 2020-02-23T02:46:41.728Z [INFO]  TestDNS_ServiceLookup_TTL.server: member joined, marking health alive: member=Node-248e675e-5299-c877-9fe8-89692558cab0
>     writer.go:29: 2020-02-23T02:46:42.007Z [DEBUG] TestDNS_ServiceLookup_TTL.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=108.032µs client=127.0.0.1:51607 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_TTL/db.service.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:42.007Z [DEBUG] TestDNS_ServiceLookup_TTL.dns: request served from client: name=dblb.service.consul. type=SRV class=IN latency=68.947µs client=127.0.0.1:41688 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_TTL/dblb.service.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:42.008Z [DEBUG] TestDNS_ServiceLookup_TTL.dns: request served from client: name=dk.service.consul. type=SRV class=IN latency=62.052µs client=127.0.0.1:43758 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_TTL/dk.service.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:42.008Z [DEBUG] TestDNS_ServiceLookup_TTL.dns: request served from client: name=api.service.consul. type=SRV class=IN latency=56.408µs client=127.0.0.1:40087 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_TTL/api.service.consul. (0.00s)
>     writer.go:29: 2020-02-23T02:46:42.008Z [INFO]  TestDNS_ServiceLookup_TTL: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:42.008Z [INFO]  TestDNS_ServiceLookup_TTL.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:42.008Z [DEBUG] TestDNS_ServiceLookup_TTL.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.008Z [WARN]  TestDNS_ServiceLookup_TTL.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.008Z [ERROR] TestDNS_ServiceLookup_TTL.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:42.008Z [DEBUG] TestDNS_ServiceLookup_TTL.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.009Z [WARN]  TestDNS_ServiceLookup_TTL.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: consul server down
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: shutdown complete
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: Stopping server: protocol=DNS address=127.0.0.1:16535 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: Stopping server: protocol=DNS address=127.0.0.1:16535 network=udp
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: Stopping server: protocol=HTTP address=127.0.0.1:16536 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:42.012Z [INFO]  TestDNS_ServiceLookup_TTL: Endpoints down
> === CONT  TestDNS_ServiceLookup_Truncate
> --- PASS: TestDNS_ServiceLookup_SRV_RFC_TCP_Default (0.47s)
>     writer.go:29: 2020-02-23T02:46:41.606Z [WARN]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.606Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.608Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.626Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:97c7f701-d165-b9dd-6318-0d81dcaf3419 Address:127.0.0.1:16522}]"
>     writer.go:29: 2020-02-23T02:46:41.626Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: entering follower state: follower="Node at 127.0.0.1:16522 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.627Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.serf.wan: serf: EventMemberJoin: Node-97c7f701-d165-b9dd-6318-0d81dcaf3419.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.serf.lan: serf: EventMemberJoin: Node-97c7f701-d165-b9dd-6318-0d81dcaf3419 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: Adding LAN server: server="Node-97c7f701-d165-b9dd-6318-0d81dcaf3419 (Addr: tcp/127.0.0.1:16522) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: Handled event for server in area: event=member-join server=Node-97c7f701-d165-b9dd-6318-0d81dcaf3419.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Started DNS server: address=127.0.0.1:16517 network=udp
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Started DNS server: address=127.0.0.1:16517 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Started HTTP server: address=127.0.0.1:16518 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.628Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.670Z [WARN]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.670Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: entering candidate state: node="Node at 127.0.0.1:16522 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.674Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.674Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: vote granted: from=97c7f701-d165-b9dd-6318-0d81dcaf3419 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.674Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.674Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.raft: entering leader state: leader="Node at 127.0.0.1:16522 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.674Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.674Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: New leader elected: payload=Node-97c7f701-d165-b9dd-6318-0d81dcaf3419
>     writer.go:29: 2020-02-23T02:46:41.681Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.689Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.689Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.689Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: Skipping self join check for node since the cluster is too small: node=Node-97c7f701-d165-b9dd-6318-0d81dcaf3419
>     writer.go:29: 2020-02-23T02:46:41.689Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: member joined, marking health alive: member=Node-97c7f701-d165-b9dd-6318-0d81dcaf3419
>     writer.go:29: 2020-02-23T02:46:42.053Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.057Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Synced node info
>     writer.go:29: 2020-02-23T02:46:42.057Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Node info in sync
>     writer.go:29: 2020-02-23T02:46:42.058Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.dns: request served from client: name=_db._tcp.service.dc1.consul. type=SRV class=IN latency=95.181µs client=127.0.0.1:52486 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.058Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.dns: request served from client: name=_db._tcp.service.consul. type=SRV class=IN latency=52.809µs client=127.0.0.1:55877 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.058Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.dns: request served from client: name=_db._tcp.dc1.consul. type=SRV class=IN latency=46.707µs client=127.0.0.1:50292 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.058Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.dns: request served from client: name=_db._tcp.consul. type=SRV class=IN latency=51.95µs client=127.0.0.1:41325 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.058Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:42.058Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:42.058Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.058Z [WARN]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.058Z [DEBUG] TestDNS_ServiceLookup_SRV_RFC_TCP_Default.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.060Z [WARN]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: consul server down
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: shutdown complete
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Stopping server: protocol=DNS address=127.0.0.1:16517 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Stopping server: protocol=DNS address=127.0.0.1:16517 network=udp
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Stopping server: protocol=HTTP address=127.0.0.1:16518 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:42.062Z [INFO]  TestDNS_ServiceLookup_SRV_RFC_TCP_Default: Endpoints down
> === CONT  TestBinarySearch
> === RUN   TestBinarySearch/binarySearch_12
> === RUN   TestBinarySearch/binarySearch_256
> === RUN   TestBinarySearch/binarySearch_512
> === RUN   TestBinarySearch/binarySearch_8192
> === RUN   TestBinarySearch/binarySearch_65535
> === RUN   TestBinarySearch/binarySearch_12#01
> === RUN   TestBinarySearch/binarySearch_256#01
> === RUN   TestBinarySearch/binarySearch_512#01
> === RUN   TestBinarySearch/binarySearch_8192#01
> === RUN   TestBinarySearch/binarySearch_65535#01
> --- PASS: TestBinarySearch (0.09s)
>     --- PASS: TestBinarySearch/binarySearch_12 (0.02s)
>     --- PASS: TestBinarySearch/binarySearch_256 (0.02s)
>     --- PASS: TestBinarySearch/binarySearch_512 (0.01s)
>     --- PASS: TestBinarySearch/binarySearch_8192 (0.01s)
>     --- PASS: TestBinarySearch/binarySearch_65535 (0.01s)
>     --- PASS: TestBinarySearch/binarySearch_12#01 (0.01s)
>     --- PASS: TestBinarySearch/binarySearch_256#01 (0.01s)
>     --- PASS: TestBinarySearch/binarySearch_512#01 (0.00s)
>     --- PASS: TestBinarySearch/binarySearch_8192#01 (0.00s)
>     --- PASS: TestBinarySearch/binarySearch_65535#01 (0.01s)
> === CONT  TestDNS_ServiceLookup_Randomize
> --- PASS: TestDNS_ServiceLookup_ServiceAddress_CNAME (0.33s)
>     writer.go:29: 2020-02-23T02:46:41.899Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_CNAME: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.899Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.899Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.918Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1c01625e-5cae-5209-d542-653644f507fd Address:127.0.0.1:16552}]"
>     writer.go:29: 2020-02-23T02:46:41.918Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: entering follower state: follower="Node at 127.0.0.1:16552 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.919Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.serf.wan: serf: EventMemberJoin: Node-1c01625e-5cae-5209-d542-653644f507fd.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.919Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.serf.lan: serf: EventMemberJoin: Node-1c01625e-5cae-5209-d542-653644f507fd 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.919Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server: Handled event for server in area: event=member-join server=Node-1c01625e-5cae-5209-d542-653644f507fd.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.919Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server: Adding LAN server: server="Node-1c01625e-5cae-5209-d542-653644f507fd (Addr: tcp/127.0.0.1:16552) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.920Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:41.920Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:41.920Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Started DNS server: address=127.0.0.1:16547 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.920Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Started DNS server: address=127.0.0.1:16547 network=udp
>     writer.go:29: 2020-02-23T02:46:41.920Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Started HTTP server: address=127.0.0.1:16548 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.920Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: started state syncer
>     writer.go:29: 2020-02-23T02:46:41.953Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:41.953Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: entering candidate state: node="Node at 127.0.0.1:16552 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:41.956Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:41.956Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: vote granted: from=1c01625e-5cae-5209-d542-653644f507fd term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:41.956Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:41.956Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.raft: entering leader state: leader="Node at 127.0.0.1:16552 [Leader]"
>     writer.go:29: 2020-02-23T02:46:41.957Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:41.957Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server: New leader elected: payload=Node-1c01625e-5cae-5209-d542-653644f507fd
>     writer.go:29: 2020-02-23T02:46:41.963Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:41.975Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:41.975Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:41.975Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.server: Skipping self join check for node since the cluster is too small: node=Node-1c01625e-5cae-5209-d542-653644f507fd
>     writer.go:29: 2020-02-23T02:46:41.975Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server: member joined, marking health alive: member=Node-1c01625e-5cae-5209-d542-653644f507fd
>     writer.go:29: 2020-02-23T02:46:42.213Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.dns: cname recurse RTT for name: name=www.google.com. rtt=69.082µs
>     writer.go:29: 2020-02-23T02:46:42.213Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.dns: request served from client: name=search.service.consul. type=ANY class=IN latency=260.65µs client=127.0.0.1:40641 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.214Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.dns: cname recurse RTT for name: name=www.google.com. rtt=38.442µs
>     writer.go:29: 2020-02-23T02:46:42.214Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.dns: request served from client: name=405b0582-b8f6-d75d-c6be-b56b2e8ce8f2.query.consul. type=ANY class=IN latency=161.547µs client=127.0.0.1:52942 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.214Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:42.214Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:42.214Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.214Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.214Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.214Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_CNAME.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.216Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Synced node info
>     writer.go:29: 2020-02-23T02:46:42.216Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: consul server down
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: shutdown complete
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Stopping server: protocol=DNS address=127.0.0.1:16547 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Stopping server: protocol=DNS address=127.0.0.1:16547 network=udp
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Stopping server: protocol=HTTP address=127.0.0.1:16548 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:42.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_CNAME: Endpoints down
> === CONT  TestDNS_ServiceLookup_OnlyPassing
> --- PASS: TestDNS_ServiceLookup_LargeResponses (0.43s)
>     writer.go:29: 2020-02-23T02:46:41.981Z [WARN]  TestDNS_ServiceLookup_LargeResponses: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:41.981Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:41.982Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:41.992Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:55546aad-d517-01cd-bcf0-e900384bca85 Address:127.0.0.1:16546}]"
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.serf.wan: serf: EventMemberJoin: Node-55546aad-d517-01cd-bcf0-e900384bca85.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.serf.lan: serf: EventMemberJoin: Node-55546aad-d517-01cd-bcf0-e900384bca85 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Started DNS server: address=127.0.0.1:16541 network=udp
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.raft: entering follower state: follower="Node at 127.0.0.1:16546 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server: Adding LAN server: server="Node-55546aad-d517-01cd-bcf0-e900384bca85 (Addr: tcp/127.0.0.1:16546) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server: Handled event for server in area: event=member-join server=Node-55546aad-d517-01cd-bcf0-e900384bca85.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:41.993Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Started DNS server: address=127.0.0.1:16541 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.994Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Started HTTP server: address=127.0.0.1:16542 network=tcp
>     writer.go:29: 2020-02-23T02:46:41.994Z [INFO]  TestDNS_ServiceLookup_LargeResponses: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.047Z [WARN]  TestDNS_ServiceLookup_LargeResponses.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.047Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.raft: entering candidate state: node="Node at 127.0.0.1:16546 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.052Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.052Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.server.raft: vote granted: from=55546aad-d517-01cd-bcf0-e900384bca85 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.052Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.052Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.raft: entering leader state: leader="Node at 127.0.0.1:16546 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.052Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.052Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server: New leader elected: payload=Node-55546aad-d517-01cd-bcf0-e900384bca85
>     writer.go:29: 2020-02-23T02:46:42.063Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:42.085Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:42.085Z [INFO]  TestDNS_ServiceLookup_LargeResponses.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.085Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.server: Skipping self join check for node since the cluster is too small: node=Node-55546aad-d517-01cd-bcf0-e900384bca85
>     writer.go:29: 2020-02-23T02:46:42.085Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server: member joined, marking health alive: member=Node-55546aad-d517-01cd-bcf0-e900384bca85
>     writer.go:29: 2020-02-23T02:46:42.376Z [DEBUG] TestDNS_ServiceLookup_LargeResponses: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.394Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Synced node info
>     writer.go:29: 2020-02-23T02:46:42.399Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.dns: request served from client: name=_this-is-a-very-very-very-very-very-long-name-for-a-service._master.service.consul. type=SRV class=IN latency=144.222µs client=127.0.0.1:54601 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.399Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.dns: request served from client: name=this-is-a-very-very-very-very-very-long-name-for-a-service.query.consul. type=SRV class=IN latency=105.227µs client=127.0.0.1:55813 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.399Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:42.399Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:42.399Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.399Z [WARN]  TestDNS_ServiceLookup_LargeResponses.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.399Z [DEBUG] TestDNS_ServiceLookup_LargeResponses.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.402Z [WARN]  TestDNS_ServiceLookup_LargeResponses.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: consul server down
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: shutdown complete
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Stopping server: protocol=DNS address=127.0.0.1:16541 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Stopping server: protocol=DNS address=127.0.0.1:16541 network=udp
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Stopping server: protocol=HTTP address=127.0.0.1:16542 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:42.404Z [INFO]  TestDNS_ServiceLookup_LargeResponses: Endpoints down
> === CONT  TestDNS_ServiceLookup_OnlyFailing
> --- PASS: TestDNS_ServiceLookup_OnlyPassing (0.61s)
>     writer.go:29: 2020-02-23T02:46:42.228Z [WARN]  TestDNS_ServiceLookup_OnlyPassing: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:42.228Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:42.229Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:42.251Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0877d572-5226-9c1a-03c4-6de1808ac7af Address:127.0.0.1:16570}]"
>     writer.go:29: 2020-02-23T02:46:42.251Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.raft: entering follower state: follower="Node at 127.0.0.1:16570 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:42.251Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.serf.wan: serf: EventMemberJoin: Node-0877d572-5226-9c1a-03c4-6de1808ac7af.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.252Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.serf.lan: serf: EventMemberJoin: Node-0877d572-5226-9c1a-03c4-6de1808ac7af 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.252Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server: Adding LAN server: server="Node-0877d572-5226-9c1a-03c4-6de1808ac7af (Addr: tcp/127.0.0.1:16570) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:42.252Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Started DNS server: address=127.0.0.1:16565 network=udp
>     writer.go:29: 2020-02-23T02:46:42.252Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server: Handled event for server in area: event=member-join server=Node-0877d572-5226-9c1a-03c4-6de1808ac7af.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:42.252Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Started DNS server: address=127.0.0.1:16565 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.253Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Started HTTP server: address=127.0.0.1:16566 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.253Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.316Z [WARN]  TestDNS_ServiceLookup_OnlyPassing.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.316Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.raft: entering candidate state: node="Node at 127.0.0.1:16570 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.387Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.387Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.server.raft: vote granted: from=0877d572-5226-9c1a-03c4-6de1808ac7af term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.387Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.387Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.raft: entering leader state: leader="Node at 127.0.0.1:16570 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.387Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.387Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server: New leader elected: payload=Node-0877d572-5226-9c1a-03c4-6de1808ac7af
>     writer.go:29: 2020-02-23T02:46:42.397Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:42.408Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:42.409Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.409Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.server: Skipping self join check for node since the cluster is too small: node=Node-0877d572-5226-9c1a-03c4-6de1808ac7af
>     writer.go:29: 2020-02-23T02:46:42.409Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server: member joined, marking health alive: member=Node-0877d572-5226-9c1a-03c4-6de1808ac7af
>     writer.go:29: 2020-02-23T02:46:42.609Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.693Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Synced node info
>     writer.go:29: 2020-02-23T02:46:42.693Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing: Node info in sync
>     writer.go:29: 2020-02-23T02:46:42.818Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.818Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing: Node info in sync
>     writer.go:29: 2020-02-23T02:46:42.822Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.dns: request served from client: name=db.service.consul. type=ANY class=IN latency=144.822µs client=127.0.0.1:46764 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.dns: request served from client: name=cf440fc6-d776-7ec6-3aa0-b690629cb4d9.query.consul. type=ANY class=IN latency=113.64µs client=127.0.0.1:58725 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.tlsutil: Update: version=2
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing: Node info in sync
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing: Node info in sync
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.dns: request served from client: name=db.service.consul. type=ANY class=IN latency=99.262µs client=127.0.0.1:36399 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.823Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:42.823Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.823Z [WARN]  TestDNS_ServiceLookup_OnlyPassing.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.823Z [DEBUG] TestDNS_ServiceLookup_OnlyPassing.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.826Z [WARN]  TestDNS_ServiceLookup_OnlyPassing.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.828Z [INFO]  TestDNS_ServiceLookup_OnlyPassing.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: consul server down
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: shutdown complete
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Stopping server: protocol=DNS address=127.0.0.1:16565 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Stopping server: protocol=DNS address=127.0.0.1:16565 network=udp
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Stopping server: protocol=HTTP address=127.0.0.1:16566 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:42.829Z [INFO]  TestDNS_ServiceLookup_OnlyPassing: Endpoints down
> === CONT  TestDNS_RecursorTimeout
> --- PASS: TestDNS_ServiceLookup_OnlyFailing (0.43s)
>     writer.go:29: 2020-02-23T02:46:42.426Z [WARN]  TestDNS_ServiceLookup_OnlyFailing: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:42.426Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:42.428Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:42.447Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d92c5704-d576-2c40-823b-8061af05afa7 Address:127.0.0.1:16576}]"
>     writer.go:29: 2020-02-23T02:46:42.447Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.raft: entering follower state: follower="Node at 127.0.0.1:16576 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:42.447Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.serf.wan: serf: EventMemberJoin: Node-d92c5704-d576-2c40-823b-8061af05afa7.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.serf.lan: serf: EventMemberJoin: Node-d92c5704-d576-2c40-823b-8061af05afa7 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server: Adding LAN server: server="Node-d92c5704-d576-2c40-823b-8061af05afa7 (Addr: tcp/127.0.0.1:16576) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server: Handled event for server in area: event=member-join server=Node-d92c5704-d576-2c40-823b-8061af05afa7.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Started DNS server: address=127.0.0.1:16571 network=udp
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Started DNS server: address=127.0.0.1:16571 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Started HTTP server: address=127.0.0.1:16572 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.448Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.512Z [WARN]  TestDNS_ServiceLookup_OnlyFailing.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.512Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.raft: entering candidate state: node="Node at 127.0.0.1:16576 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.517Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.517Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.server.raft: vote granted: from=d92c5704-d576-2c40-823b-8061af05afa7 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.517Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.517Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.raft: entering leader state: leader="Node at 127.0.0.1:16576 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.517Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.517Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server: New leader elected: payload=Node-d92c5704-d576-2c40-823b-8061af05afa7
>     writer.go:29: 2020-02-23T02:46:42.526Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:42.535Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:42.535Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.535Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.server: Skipping self join check for node since the cluster is too small: node=Node-d92c5704-d576-2c40-823b-8061af05afa7
>     writer.go:29: 2020-02-23T02:46:42.535Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server: member joined, marking health alive: member=Node-d92c5704-d576-2c40-823b-8061af05afa7
>     writer.go:29: 2020-02-23T02:46:42.799Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.826Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Synced node info
>     writer.go:29: 2020-02-23T02:46:42.826Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing: Node info in sync
>     writer.go:29: 2020-02-23T02:46:42.833Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.dns: request served from client: name=db.service.consul. type=ANY class=IN latency=98.597µs client=127.0.0.1:38245 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.834Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.dns: request served from client: name=232fe989-2fc2-fe70-72ab-7c2e1e99c820.query.consul. type=ANY class=IN latency=120.577µs client=127.0.0.1:50806 client_network=udp
>     writer.go:29: 2020-02-23T02:46:42.834Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:42.834Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:42.834Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.834Z [WARN]  TestDNS_ServiceLookup_OnlyFailing.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.834Z [DEBUG] TestDNS_ServiceLookup_OnlyFailing.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.835Z [WARN]  TestDNS_ServiceLookup_OnlyFailing.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:42.837Z [INFO]  TestDNS_ServiceLookup_OnlyFailing.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:42.837Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: consul server down
>     writer.go:29: 2020-02-23T02:46:42.837Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: shutdown complete
>     writer.go:29: 2020-02-23T02:46:42.837Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Stopping server: protocol=DNS address=127.0.0.1:16571 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.837Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Stopping server: protocol=DNS address=127.0.0.1:16571 network=udp
>     writer.go:29: 2020-02-23T02:46:42.838Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Stopping server: protocol=HTTP address=127.0.0.1:16572 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.838Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:42.838Z [INFO]  TestDNS_ServiceLookup_OnlyFailing: Endpoints down
> === CONT  TestDNS_Recurse_Truncation
> --- PASS: TestDNS_ServiceLookup_Truncate (1.03s)
>     writer.go:29: 2020-02-23T02:46:42.020Z [WARN]  TestDNS_ServiceLookup_Truncate: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:42.020Z [DEBUG] TestDNS_ServiceLookup_Truncate.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:42.020Z [DEBUG] TestDNS_ServiceLookup_Truncate.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:42.033Z [INFO]  TestDNS_ServiceLookup_Truncate.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:92b91da1-54a8-4430-d4fc-a42d587fb5c9 Address:127.0.0.1:16564}]"
>     writer.go:29: 2020-02-23T02:46:42.034Z [INFO]  TestDNS_ServiceLookup_Truncate.server.serf.wan: serf: EventMemberJoin: Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.034Z [INFO]  TestDNS_ServiceLookup_Truncate.server.serf.lan: serf: EventMemberJoin: Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.034Z [INFO]  TestDNS_ServiceLookup_Truncate: Started DNS server: address=127.0.0.1:16559 network=udp
>     writer.go:29: 2020-02-23T02:46:42.034Z [INFO]  TestDNS_ServiceLookup_Truncate.server.raft: entering follower state: follower="Node at 127.0.0.1:16564 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:42.034Z [INFO]  TestDNS_ServiceLookup_Truncate.server: Adding LAN server: server="Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9 (Addr: tcp/127.0.0.1:16564) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:42.034Z [INFO]  TestDNS_ServiceLookup_Truncate.server: Handled event for server in area: event=member-join server=Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:42.035Z [INFO]  TestDNS_ServiceLookup_Truncate: Started DNS server: address=127.0.0.1:16559 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.035Z [INFO]  TestDNS_ServiceLookup_Truncate: Started HTTP server: address=127.0.0.1:16560 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.035Z [INFO]  TestDNS_ServiceLookup_Truncate: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.083Z [WARN]  TestDNS_ServiceLookup_Truncate.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.083Z [INFO]  TestDNS_ServiceLookup_Truncate.server.raft: entering candidate state: node="Node at 127.0.0.1:16564 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.088Z [DEBUG] TestDNS_ServiceLookup_Truncate.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.088Z [DEBUG] TestDNS_ServiceLookup_Truncate.server.raft: vote granted: from=92b91da1-54a8-4430-d4fc-a42d587fb5c9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.088Z [INFO]  TestDNS_ServiceLookup_Truncate.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.088Z [INFO]  TestDNS_ServiceLookup_Truncate.server.raft: entering leader state: leader="Node at 127.0.0.1:16564 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.088Z [INFO]  TestDNS_ServiceLookup_Truncate.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.089Z [INFO]  TestDNS_ServiceLookup_Truncate.server: New leader elected: payload=Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9
>     writer.go:29: 2020-02-23T02:46:42.101Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:42.111Z [INFO]  TestDNS_ServiceLookup_Truncate.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:42.111Z [INFO]  TestDNS_ServiceLookup_Truncate.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.111Z [DEBUG] TestDNS_ServiceLookup_Truncate.server: Skipping self join check for node since the cluster is too small: node=Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9
>     writer.go:29: 2020-02-23T02:46:42.111Z [INFO]  TestDNS_ServiceLookup_Truncate.server: member joined, marking health alive: member=Node-92b91da1-54a8-4430-d4fc-a42d587fb5c9
>     writer.go:29: 2020-02-23T02:46:42.230Z [DEBUG] TestDNS_ServiceLookup_Truncate: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.232Z [INFO]  TestDNS_ServiceLookup_Truncate: Synced node info
>     writer.go:29: 2020-02-23T02:46:42.232Z [DEBUG] TestDNS_ServiceLookup_Truncate: Node info in sync
>     writer.go:29: 2020-02-23T02:46:43.029Z [DEBUG] TestDNS_ServiceLookup_Truncate.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=3.388481ms client=127.0.0.1:42255 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.033Z [DEBUG] TestDNS_ServiceLookup_Truncate.dns: request served from client: name=59251745-7fe2-2bc5-96fb-e586be51c003.query.consul. type=ANY class=IN latency=3.527194ms client=127.0.0.1:36682 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.033Z [INFO]  TestDNS_ServiceLookup_Truncate: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.033Z [INFO]  TestDNS_ServiceLookup_Truncate.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.033Z [DEBUG] TestDNS_ServiceLookup_Truncate.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.033Z [WARN]  TestDNS_ServiceLookup_Truncate.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.033Z [DEBUG] TestDNS_ServiceLookup_Truncate.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.035Z [WARN]  TestDNS_ServiceLookup_Truncate.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: consul server down
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: Stopping server: protocol=DNS address=127.0.0.1:16559 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: Stopping server: protocol=DNS address=127.0.0.1:16559 network=udp
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: Stopping server: protocol=HTTP address=127.0.0.1:16560 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.037Z [INFO]  TestDNS_ServiceLookup_Truncate: Endpoints down
> === CONT  TestDNS_Recurse
> --- PASS: TestDNS_ServiceLookup_Randomize (0.90s)
>     writer.go:29: 2020-02-23T02:46:42.159Z [WARN]  TestDNS_ServiceLookup_Randomize: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:42.159Z [DEBUG] TestDNS_ServiceLookup_Randomize.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:42.159Z [DEBUG] TestDNS_ServiceLookup_Randomize.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:42.224Z [INFO]  TestDNS_ServiceLookup_Randomize.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d8beab0d-bece-af31-797a-03bed81c2559 Address:127.0.0.1:16558}]"
>     writer.go:29: 2020-02-23T02:46:42.224Z [INFO]  TestDNS_ServiceLookup_Randomize.server.serf.wan: serf: EventMemberJoin: Node-d8beab0d-bece-af31-797a-03bed81c2559.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.225Z [INFO]  TestDNS_ServiceLookup_Randomize.server.serf.lan: serf: EventMemberJoin: Node-d8beab0d-bece-af31-797a-03bed81c2559 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.225Z [INFO]  TestDNS_ServiceLookup_Randomize: Started DNS server: address=127.0.0.1:16553 network=udp
>     writer.go:29: 2020-02-23T02:46:42.225Z [INFO]  TestDNS_ServiceLookup_Randomize.server.raft: entering follower state: follower="Node at 127.0.0.1:16558 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:42.225Z [INFO]  TestDNS_ServiceLookup_Randomize.server: Adding LAN server: server="Node-d8beab0d-bece-af31-797a-03bed81c2559 (Addr: tcp/127.0.0.1:16558) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:42.225Z [INFO]  TestDNS_ServiceLookup_Randomize.server: Handled event for server in area: event=member-join server=Node-d8beab0d-bece-af31-797a-03bed81c2559.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:42.225Z [INFO]  TestDNS_ServiceLookup_Randomize: Started DNS server: address=127.0.0.1:16553 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.226Z [INFO]  TestDNS_ServiceLookup_Randomize: Started HTTP server: address=127.0.0.1:16554 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.226Z [INFO]  TestDNS_ServiceLookup_Randomize: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.262Z [WARN]  TestDNS_ServiceLookup_Randomize.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.262Z [INFO]  TestDNS_ServiceLookup_Randomize.server.raft: entering candidate state: node="Node at 127.0.0.1:16558 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.266Z [DEBUG] TestDNS_ServiceLookup_Randomize.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.266Z [DEBUG] TestDNS_ServiceLookup_Randomize.server.raft: vote granted: from=d8beab0d-bece-af31-797a-03bed81c2559 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.266Z [INFO]  TestDNS_ServiceLookup_Randomize.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.266Z [INFO]  TestDNS_ServiceLookup_Randomize.server.raft: entering leader state: leader="Node at 127.0.0.1:16558 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.266Z [INFO]  TestDNS_ServiceLookup_Randomize.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.266Z [INFO]  TestDNS_ServiceLookup_Randomize.server: New leader elected: payload=Node-d8beab0d-bece-af31-797a-03bed81c2559
>     writer.go:29: 2020-02-23T02:46:42.273Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:42.281Z [INFO]  TestDNS_ServiceLookup_Randomize.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:42.281Z [INFO]  TestDNS_ServiceLookup_Randomize.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:42.281Z [DEBUG] TestDNS_ServiceLookup_Randomize.server: Skipping self join check for node since the cluster is too small: node=Node-d8beab0d-bece-af31-797a-03bed81c2559
>     writer.go:29: 2020-02-23T02:46:42.281Z [INFO]  TestDNS_ServiceLookup_Randomize.server: member joined, marking health alive: member=Node-d8beab0d-bece-af31-797a-03bed81c2559
>     writer.go:29: 2020-02-23T02:46:42.590Z [DEBUG] TestDNS_ServiceLookup_Randomize: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:42.594Z [INFO]  TestDNS_ServiceLookup_Randomize: Synced node info
>     writer.go:29: 2020-02-23T02:46:43.028Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=912.52µs client=127.0.0.1:52925 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.029Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=845.42µs client=127.0.0.1:38888 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.030Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=834.942µs client=127.0.0.1:57741 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.031Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=867.233µs client=127.0.0.1:44465 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.032Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=811.307µs client=127.0.0.1:42183 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.033Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=799.942µs client=127.0.0.1:45242 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.035Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=1.988404ms client=127.0.0.1:42058 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.035Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=1.056676ms client=127.0.0.1:59026 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.036Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=799.372µs client=127.0.0.1:40181 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.037Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=web.service.consul. type=ANY class=IN latency=796.58µs client=127.0.0.1:56756 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.038Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=909.067µs client=127.0.0.1:53889 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.039Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=958.408µs client=127.0.0.1:44822 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.042Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=843.283µs client=127.0.0.1:55161 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.043Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=861.566µs client=127.0.0.1:35039 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.045Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=876.033µs client=127.0.0.1:50690 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.049Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=1.931725ms client=127.0.0.1:36232 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.050Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=1.93788ms client=127.0.0.1:45771 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.051Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=2.013626ms client=127.0.0.1:42076 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.052Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=2.10596ms client=127.0.0.1:60527 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.052Z [INFO]  TestDNS_ServiceLookup_Randomize: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.052Z [INFO]  TestDNS_ServiceLookup_Randomize.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.052Z [DEBUG] TestDNS_ServiceLookup_Randomize.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.052Z [WARN]  TestDNS_ServiceLookup_Randomize.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.052Z [DEBUG] TestDNS_ServiceLookup_Randomize.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.052Z [DEBUG] TestDNS_ServiceLookup_Randomize.dns: request served from client: name=eab0aa87-d243-b638-8a62-29cbf0b80ab8.query.consul. type=ANY class=IN latency=1.32391ms client=127.0.0.1:41434 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.054Z [WARN]  TestDNS_ServiceLookup_Randomize.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: consul server down
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: Stopping server: protocol=DNS address=127.0.0.1:16553 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: Stopping server: protocol=DNS address=127.0.0.1:16553 network=udp
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: Stopping server: protocol=HTTP address=127.0.0.1:16554 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.055Z [INFO]  TestDNS_ServiceLookup_Randomize: Endpoints down
> === CONT  TestDNS_ServiceLookup_Dedup_SRV
> --- PASS: TestDNS_Recurse (0.30s)
>     writer.go:29: 2020-02-23T02:46:43.046Z [WARN]  TestDNS_Recurse: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.046Z [DEBUG] TestDNS_Recurse.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.046Z [DEBUG] TestDNS_Recurse.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.072Z [INFO]  TestDNS_Recurse.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bca3171f-7651-195f-9c75-3e6c3c2c9820 Address:127.0.0.1:16594}]"
>     writer.go:29: 2020-02-23T02:46:43.073Z [INFO]  TestDNS_Recurse.server.raft: entering follower state: follower="Node at 127.0.0.1:16594 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.073Z [INFO]  TestDNS_Recurse.server.serf.wan: serf: EventMemberJoin: Node-bca3171f-7651-195f-9c75-3e6c3c2c9820.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse.server.serf.lan: serf: EventMemberJoin: Node-bca3171f-7651-195f-9c75-3e6c3c2c9820 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.074Z [DEBUG] TestDNS_Recurse.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse: Started DNS server: address=127.0.0.1:16589 network=udp
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse.server: Handled event for server in area: event=member-join server=Node-bca3171f-7651-195f-9c75-3e6c3c2c9820.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.074Z [DEBUG] TestDNS_Recurse.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse: Started DNS server: address=127.0.0.1:16589 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse: Started HTTP server: address=127.0.0.1:16590 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.074Z [INFO]  TestDNS_Recurse.server: Adding LAN server: server="Node-bca3171f-7651-195f-9c75-3e6c3c2c9820 (Addr: tcp/127.0.0.1:16594) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.123Z [WARN]  TestDNS_Recurse.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.123Z [INFO]  TestDNS_Recurse.server.raft: entering candidate state: node="Node at 127.0.0.1:16594 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.126Z [DEBUG] TestDNS_Recurse.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.126Z [DEBUG] TestDNS_Recurse.server.raft: vote granted: from=bca3171f-7651-195f-9c75-3e6c3c2c9820 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.126Z [INFO]  TestDNS_Recurse.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.126Z [INFO]  TestDNS_Recurse.server.raft: entering leader state: leader="Node at 127.0.0.1:16594 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.126Z [INFO]  TestDNS_Recurse.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.127Z [INFO]  TestDNS_Recurse.server: New leader elected: payload=Node-bca3171f-7651-195f-9c75-3e6c3c2c9820
>     writer.go:29: 2020-02-23T02:46:43.134Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.142Z [INFO]  TestDNS_Recurse.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.142Z [INFO]  TestDNS_Recurse.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.142Z [DEBUG] TestDNS_Recurse.server: Skipping self join check for node since the cluster is too small: node=Node-bca3171f-7651-195f-9c75-3e6c3c2c9820
>     writer.go:29: 2020-02-23T02:46:43.142Z [INFO]  TestDNS_Recurse.server: member joined, marking health alive: member=Node-bca3171f-7651-195f-9c75-3e6c3c2c9820
>     writer.go:29: 2020-02-23T02:46:43.183Z [DEBUG] TestDNS_Recurse: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:43.186Z [INFO]  TestDNS_Recurse: Synced node info
>     writer.go:29: 2020-02-23T02:46:43.186Z [DEBUG] TestDNS_Recurse: Node info in sync
>     writer.go:29: 2020-02-23T02:46:43.315Z [DEBUG] TestDNS_Recurse.dns: recurse succeeded for question: question="{apple.com. 255 1}" rtt=53.02µs recursor=127.0.0.1:40143
>     writer.go:29: 2020-02-23T02:46:43.315Z [DEBUG] TestDNS_Recurse.dns: request served from client: question="{apple.com. 255 1}" network=udp latency=176.337µs client=127.0.0.1:45645 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.315Z [INFO]  TestDNS_Recurse: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.315Z [INFO]  TestDNS_Recurse.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.315Z [DEBUG] TestDNS_Recurse.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.315Z [WARN]  TestDNS_Recurse.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.315Z [DEBUG] TestDNS_Recurse.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.332Z [WARN]  TestDNS_Recurse.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: consul server down
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: Stopping server: protocol=DNS address=127.0.0.1:16589 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: Stopping server: protocol=DNS address=127.0.0.1:16589 network=udp
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: Stopping server: protocol=HTTP address=127.0.0.1:16590 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.334Z [INFO]  TestDNS_Recurse: Endpoints down
> === CONT  TestDNS_ServiceLookup_PreparedQueryNamePeriod
> --- PASS: TestDNS_Recurse_Truncation (0.52s)
>     writer.go:29: 2020-02-23T02:46:42.851Z [WARN]  TestDNS_Recurse_Truncation: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:42.851Z [DEBUG] TestDNS_Recurse_Truncation.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:42.852Z [DEBUG] TestDNS_Recurse_Truncation.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:42.873Z [INFO]  TestDNS_Recurse_Truncation.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1949aeed-65b7-92b4-f058-9f51c3ad86b9 Address:127.0.0.1:16588}]"
>     writer.go:29: 2020-02-23T02:46:42.873Z [INFO]  TestDNS_Recurse_Truncation.server.raft: entering follower state: follower="Node at 127.0.0.1:16588 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:42.885Z [INFO]  TestDNS_Recurse_Truncation.server.serf.wan: serf: EventMemberJoin: Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.886Z [INFO]  TestDNS_Recurse_Truncation.server.serf.lan: serf: EventMemberJoin: Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.886Z [INFO]  TestDNS_Recurse_Truncation.server: Handled event for server in area: event=member-join server=Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:42.886Z [INFO]  TestDNS_Recurse_Truncation.server: Adding LAN server: server="Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9 (Addr: tcp/127.0.0.1:16588) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:42.886Z [DEBUG] TestDNS_Recurse_Truncation.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:42.886Z [DEBUG] TestDNS_Recurse_Truncation.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:42.886Z [INFO]  TestDNS_Recurse_Truncation: Started DNS server: address=127.0.0.1:16583 network=udp
>     writer.go:29: 2020-02-23T02:46:42.886Z [INFO]  TestDNS_Recurse_Truncation: Started DNS server: address=127.0.0.1:16583 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.887Z [INFO]  TestDNS_Recurse_Truncation: Started HTTP server: address=127.0.0.1:16584 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.887Z [INFO]  TestDNS_Recurse_Truncation: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.909Z [WARN]  TestDNS_Recurse_Truncation.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.909Z [INFO]  TestDNS_Recurse_Truncation.server.raft: entering candidate state: node="Node at 127.0.0.1:16588 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.967Z [DEBUG] TestDNS_Recurse_Truncation.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.967Z [DEBUG] TestDNS_Recurse_Truncation.server.raft: vote granted: from=1949aeed-65b7-92b4-f058-9f51c3ad86b9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_Recurse_Truncation.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_Recurse_Truncation.server.raft: entering leader state: leader="Node at 127.0.0.1:16588 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_Recurse_Truncation.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_Recurse_Truncation.server: New leader elected: payload=Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9
>     writer.go:29: 2020-02-23T02:46:42.988Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.023Z [INFO]  TestDNS_Recurse_Truncation.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.024Z [INFO]  TestDNS_Recurse_Truncation.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.025Z [DEBUG] TestDNS_Recurse_Truncation.server: Skipping self join check for node since the cluster is too small: node=Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9
>     writer.go:29: 2020-02-23T02:46:43.025Z [INFO]  TestDNS_Recurse_Truncation.server: member joined, marking health alive: member=Node-1949aeed-65b7-92b4-f058-9f51c3ad86b9
>     writer.go:29: 2020-02-23T02:46:43.054Z [DEBUG] TestDNS_Recurse_Truncation: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:43.060Z [INFO]  TestDNS_Recurse_Truncation: Synced node info
>     writer.go:29: 2020-02-23T02:46:43.060Z [DEBUG] TestDNS_Recurse_Truncation: Node info in sync
>     writer.go:29: 2020-02-23T02:46:43.352Z [DEBUG] TestDNS_Recurse_Truncation.dns: recurse succeeded for question: question="{apple.com. 255 1}" rtt=66.947µs recursor=127.0.0.1:39736
>     writer.go:29: 2020-02-23T02:46:43.352Z [DEBUG] TestDNS_Recurse_Truncation.dns: request served from client: question="{apple.com. 255 1}" network=udp latency=184.107µs client=127.0.0.1:43635 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.352Z [INFO]  TestDNS_Recurse_Truncation: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.352Z [INFO]  TestDNS_Recurse_Truncation.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.352Z [DEBUG] TestDNS_Recurse_Truncation.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.352Z [WARN]  TestDNS_Recurse_Truncation.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.352Z [DEBUG] TestDNS_Recurse_Truncation.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.354Z [WARN]  TestDNS_Recurse_Truncation.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: consul server down
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: Stopping server: protocol=DNS address=127.0.0.1:16583 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: Stopping server: protocol=DNS address=127.0.0.1:16583 network=udp
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: Stopping server: protocol=HTTP address=127.0.0.1:16584 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_Recurse_Truncation: Endpoints down
> === CONT  TestDNS_PreparedQueryNearIP
> --- PASS: TestDNS_ServiceLookup_Dedup_SRV (0.37s)
>     writer.go:29: 2020-02-23T02:46:43.063Z [WARN]  TestDNS_ServiceLookup_Dedup_SRV: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.063Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.064Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.075Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:98c84fea-0f8a-5dfc-28e7-b22a21dfb85f Address:127.0.0.1:16600}]"
>     writer.go:29: 2020-02-23T02:46:43.075Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.raft: entering follower state: follower="Node at 127.0.0.1:16600 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.075Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.serf.wan: serf: EventMemberJoin: Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.serf.lan: serf: EventMemberJoin: Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server: Adding LAN server: server="Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f (Addr: tcp/127.0.0.1:16600) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server: Handled event for server in area: event=member-join server=Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Started DNS server: address=127.0.0.1:16595 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Started DNS server: address=127.0.0.1:16595 network=udp
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Started HTTP server: address=127.0.0.1:16596 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.076Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.135Z [WARN]  TestDNS_ServiceLookup_Dedup_SRV.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.135Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.raft: entering candidate state: node="Node at 127.0.0.1:16600 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.139Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.139Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.server.raft: vote granted: from=98c84fea-0f8a-5dfc-28e7-b22a21dfb85f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.139Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.139Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.raft: entering leader state: leader="Node at 127.0.0.1:16600 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.139Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.139Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server: New leader elected: payload=Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f
>     writer.go:29: 2020-02-23T02:46:43.147Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.156Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.156Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.156Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.server: Skipping self join check for node since the cluster is too small: node=Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f
>     writer.go:29: 2020-02-23T02:46:43.156Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server: member joined, marking health alive: member=Node-98c84fea-0f8a-5dfc-28e7-b22a21dfb85f
>     writer.go:29: 2020-02-23T02:46:43.423Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=122.363µs client=127.0.0.1:40807 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.423Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.dns: request served from client: name=ac3659ef-fff6-51d5-1dea-feb5e42a56cb.query.consul. type=SRV class=IN latency=83.82µs client=127.0.0.1:59776 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.423Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.423Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.423Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.423Z [WARN]  TestDNS_ServiceLookup_Dedup_SRV.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.423Z [ERROR] TestDNS_ServiceLookup_Dedup_SRV.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:43.423Z [DEBUG] TestDNS_ServiceLookup_Dedup_SRV.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.424Z [WARN]  TestDNS_ServiceLookup_Dedup_SRV.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.426Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: consul server down
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Stopping server: protocol=DNS address=127.0.0.1:16595 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Stopping server: protocol=DNS address=127.0.0.1:16595 network=udp
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Stopping server: protocol=HTTP address=127.0.0.1:16596 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.427Z [INFO]  TestDNS_ServiceLookup_Dedup_SRV: Endpoints down
> === CONT  TestDNS_PreparedQueryNearIPEDNS
> Added 3 service nodes
> Added 3 service nodes
> --- PASS: TestDNS_PreparedQueryNearIPEDNS (0.24s)
>     writer.go:29: 2020-02-23T02:46:43.434Z [WARN]  TestDNS_PreparedQueryNearIPEDNS: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.434Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.434Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.445Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ed89c62b-97a5-cdcf-3a00-d27a6fd67b39 Address:127.0.0.1:16624}]"
>     writer.go:29: 2020-02-23T02:46:43.445Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.raft: entering follower state: follower="Node at 127.0.0.1:16624 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.446Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.serf.wan: serf: EventMemberJoin: Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.446Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.serf.lan: serf: EventMemberJoin: Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.446Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server: Handled event for server in area: event=member-join server=Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.446Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server: Adding LAN server: server="Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39 (Addr: tcp/127.0.0.1:16624) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.447Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Started DNS server: address=127.0.0.1:16619 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.447Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Started DNS server: address=127.0.0.1:16619 network=udp
>     writer.go:29: 2020-02-23T02:46:43.447Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Started HTTP server: address=127.0.0.1:16620 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.447Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.496Z [WARN]  TestDNS_PreparedQueryNearIPEDNS.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.496Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.raft: entering candidate state: node="Node at 127.0.0.1:16624 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.499Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.499Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.server.raft: vote granted: from=ed89c62b-97a5-cdcf-3a00-d27a6fd67b39 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.499Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.499Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.raft: entering leader state: leader="Node at 127.0.0.1:16624 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.499Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.500Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server: New leader elected: payload=Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39
>     writer.go:29: 2020-02-23T02:46:43.507Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.514Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.514Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.514Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.server: Skipping self join check for node since the cluster is too small: node=Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39
>     writer.go:29: 2020-02-23T02:46:43.514Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server: member joined, marking health alive: member=Node-ed89c62b-97a5-cdcf-3a00-d27a6fd67b39
>     writer.go:29: 2020-02-23T02:46:43.632Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=156.586µs client=127.0.0.1:58496 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.658Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=210.083µs client=127.0.0.1:49826 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.658Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.658Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.658Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.658Z [WARN]  TestDNS_PreparedQueryNearIPEDNS.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.658Z [ERROR] TestDNS_PreparedQueryNearIPEDNS.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:43.658Z [DEBUG] TestDNS_PreparedQueryNearIPEDNS.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.660Z [WARN]  TestDNS_PreparedQueryNearIPEDNS.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: consul server down
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Stopping server: protocol=DNS address=127.0.0.1:16619 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Stopping server: protocol=DNS address=127.0.0.1:16619 network=udp
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Stopping server: protocol=HTTP address=127.0.0.1:16620 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.663Z [INFO]  TestDNS_PreparedQueryNearIPEDNS: Endpoints down
> === CONT  TestDNS_ServiceLookup_TagPeriod
> --- PASS: TestDNS_ServiceLookup_PreparedQueryNamePeriod (0.34s)
>     writer.go:29: 2020-02-23T02:46:43.343Z [WARN]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.343Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.343Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.354Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b25ea08d-fd83-1e8a-4688-472e272daab1 Address:127.0.0.1:16612}]"
>     writer.go:29: 2020-02-23T02:46:43.354Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: entering follower state: follower="Node at 127.0.0.1:16612 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.354Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.serf.wan: serf: EventMemberJoin: Node-b25ea08d-fd83-1e8a-4688-472e272daab1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.355Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.serf.lan: serf: EventMemberJoin: Node-b25ea08d-fd83-1e8a-4688-472e272daab1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.355Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: Adding LAN server: server="Node-b25ea08d-fd83-1e8a-4688-472e272daab1 (Addr: tcp/127.0.0.1:16612) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.355Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: Handled event for server in area: event=member-join server=Node-b25ea08d-fd83-1e8a-4688-472e272daab1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Started DNS server: address=127.0.0.1:16607 network=udp
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Started DNS server: address=127.0.0.1:16607 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Started HTTP server: address=127.0.0.1:16608 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.356Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.402Z [WARN]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.402Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: entering candidate state: node="Node at 127.0.0.1:16612 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.406Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.406Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: vote granted: from=b25ea08d-fd83-1e8a-4688-472e272daab1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.406Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.406Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.raft: entering leader state: leader="Node at 127.0.0.1:16612 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.407Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.407Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: New leader elected: payload=Node-b25ea08d-fd83-1e8a-4688-472e272daab1
>     writer.go:29: 2020-02-23T02:46:43.414Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.425Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.425Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.425Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: Skipping self join check for node since the cluster is too small: node=Node-b25ea08d-fd83-1e8a-4688-472e272daab1
>     writer.go:29: 2020-02-23T02:46:43.425Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: member joined, marking health alive: member=Node-b25ea08d-fd83-1e8a-4688-472e272daab1
>     writer.go:29: 2020-02-23T02:46:43.662Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.dns: request served from client: name=some.query.we.like.query.consul. type=SRV class=IN latency=68.667µs client=127.0.0.1:59018 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.662Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.662Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.662Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.662Z [WARN]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.662Z [ERROR] TestDNS_ServiceLookup_PreparedQueryNamePeriod.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:43.662Z [DEBUG] TestDNS_ServiceLookup_PreparedQueryNamePeriod.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.664Z [WARN]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: consul server down
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Stopping server: protocol=DNS address=127.0.0.1:16607 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Stopping server: protocol=DNS address=127.0.0.1:16607 network=udp
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Stopping server: protocol=HTTP address=127.0.0.1:16608 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.672Z [INFO]  TestDNS_ServiceLookup_PreparedQueryNamePeriod: Endpoints down
> === CONT  TestDNS_CaseInsensitiveServiceLookup
> --- PASS: TestDNS_ServiceLookup_TagPeriod (0.12s)
>     writer.go:29: 2020-02-23T02:46:43.670Z [WARN]  TestDNS_ServiceLookup_TagPeriod: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.670Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.671Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.681Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7cdf975f-4f79-0dc1-16d4-0fa711d3ee84 Address:127.0.0.1:16618}]"
>     writer.go:29: 2020-02-23T02:46:43.681Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.raft: entering follower state: follower="Node at 127.0.0.1:16618 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.681Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.serf.wan: serf: EventMemberJoin: Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.682Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.serf.lan: serf: EventMemberJoin: Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.682Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server: Adding LAN server: server="Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84 (Addr: tcp/127.0.0.1:16618) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.682Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server: Handled event for server in area: event=member-join server=Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.682Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Started DNS server: address=127.0.0.1:16613 network=udp
>     writer.go:29: 2020-02-23T02:46:43.683Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Started DNS server: address=127.0.0.1:16613 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.683Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Started HTTP server: address=127.0.0.1:16614 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.683Z [INFO]  TestDNS_ServiceLookup_TagPeriod: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.726Z [WARN]  TestDNS_ServiceLookup_TagPeriod.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.726Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.raft: entering candidate state: node="Node at 127.0.0.1:16618 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.732Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.732Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.server.raft: vote granted: from=7cdf975f-4f79-0dc1-16d4-0fa711d3ee84 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.732Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.732Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.raft: entering leader state: leader="Node at 127.0.0.1:16618 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.732Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.732Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server: New leader elected: payload=Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84
>     writer.go:29: 2020-02-23T02:46:43.742Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.749Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.749Z [INFO]  TestDNS_ServiceLookup_TagPeriod.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.749Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.server: Skipping self join check for node since the cluster is too small: node=Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84
>     writer.go:29: 2020-02-23T02:46:43.749Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server: member joined, marking health alive: member=Node-7cdf975f-4f79-0dc1-16d4-0fa711d3ee84
>     writer.go:29: 2020-02-23T02:46:43.780Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.dns: request served from client: name=v1.master2.db.service.consul. type=SRV class=IN latency=100.149µs client=127.0.0.1:44224 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.780Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.dns: request served from client: name=v1.master.db.service.consul. type=SRV class=IN latency=86.285µs client=127.0.0.1:46359 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.780Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.780Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.780Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.780Z [WARN]  TestDNS_ServiceLookup_TagPeriod.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.780Z [ERROR] TestDNS_ServiceLookup_TagPeriod.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:43.780Z [DEBUG] TestDNS_ServiceLookup_TagPeriod.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.782Z [WARN]  TestDNS_ServiceLookup_TagPeriod.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: consul server down
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Stopping server: protocol=DNS address=127.0.0.1:16613 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Stopping server: protocol=DNS address=127.0.0.1:16613 network=udp
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Stopping server: protocol=HTTP address=127.0.0.1:16614 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.785Z [INFO]  TestDNS_ServiceLookup_TagPeriod: Endpoints down
> === CONT  TestDNS_Lookup_TaggedIPAddresses
> --- PASS: TestDNS_CaseInsensitiveServiceLookup (0.20s)
>     writer.go:29: 2020-02-23T02:46:43.680Z [WARN]  TestDNS_CaseInsensitiveServiceLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.680Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.680Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.691Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a6d40026-4607-8cbf-2420-6dd86fe13b09 Address:127.0.0.1:16630}]"
>     writer.go:29: 2020-02-23T02:46:43.691Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16630 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.691Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.serf.wan: serf: EventMemberJoin: Node-a6d40026-4607-8cbf-2420-6dd86fe13b09.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.serf.lan: serf: EventMemberJoin: Node-a6d40026-4607-8cbf-2420-6dd86fe13b09 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server: Adding LAN server: server="Node-a6d40026-4607-8cbf-2420-6dd86fe13b09 (Addr: tcp/127.0.0.1:16630) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server: Handled event for server in area: event=member-join server=Node-a6d40026-4607-8cbf-2420-6dd86fe13b09.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Started DNS server: address=127.0.0.1:16625 network=udp
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Started DNS server: address=127.0.0.1:16625 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Started HTTP server: address=127.0.0.1:16626 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.692Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.761Z [WARN]  TestDNS_CaseInsensitiveServiceLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.761Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16630 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.764Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.764Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.server.raft: vote granted: from=a6d40026-4607-8cbf-2420-6dd86fe13b09 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.764Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.764Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16630 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.764Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.764Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server: New leader elected: payload=Node-a6d40026-4607-8cbf-2420-6dd86fe13b09
>     writer.go:29: 2020-02-23T02:46:43.771Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.780Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.781Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.781Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.server: Skipping self join check for node since the cluster is too small: node=Node-a6d40026-4607-8cbf-2420-6dd86fe13b09
>     writer.go:29: 2020-02-23T02:46:43.781Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server: member joined, marking health alive: member=Node-a6d40026-4607-8cbf-2420-6dd86fe13b09
>     writer.go:29: 2020-02-23T02:46:43.814Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:43.820Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:43.820Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup: Node info in sync
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=master.db.service.consul. type=SRV class=IN latency=134.06µs client=127.0.0.1:37223 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=mASTER.dB.service.consul. type=SRV class=IN latency=76.894µs client=127.0.0.1:48613 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=MASTER.dB.service.consul. type=SRV class=IN latency=72.963µs client=127.0.0.1:38929 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=64.289µs client=127.0.0.1:55960 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=DB.service.consul. type=SRV class=IN latency=63.107µs client=127.0.0.1:36689 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=Db.service.consul. type=SRV class=IN latency=71.151µs client=127.0.0.1:41520 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.866Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=somequery.query.consul. type=SRV class=IN latency=61.558µs client=127.0.0.1:58155 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.866Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=SomeQuery.query.consul. type=SRV class=IN latency=59.243µs client=127.0.0.1:35941 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.866Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.dns: request served from client: name=SOMEQUERY.query.consul. type=SRV class=IN latency=48.273µs client=127.0.0.1:57924 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.866Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.866Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.866Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.866Z [WARN]  TestDNS_CaseInsensitiveServiceLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.866Z [DEBUG] TestDNS_CaseInsensitiveServiceLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.869Z [WARN]  TestDNS_CaseInsensitiveServiceLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.870Z [INFO]  TestDNS_CaseInsensitiveServiceLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16625 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16625 network=udp
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Stopping server: protocol=HTTP address=127.0.0.1:16626 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.871Z [INFO]  TestDNS_CaseInsensitiveServiceLookup: Endpoints down
> === CONT  TestDNS_ServiceLookup_WanTranslation
> --- PASS: TestDNS_PreparedQueryNearIP (0.64s)
>     writer.go:29: 2020-02-23T02:46:43.363Z [WARN]  TestDNS_PreparedQueryNearIP: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.363Z [DEBUG] TestDNS_PreparedQueryNearIP.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.363Z [DEBUG] TestDNS_PreparedQueryNearIP.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.372Z [INFO]  TestDNS_PreparedQueryNearIP.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:549a90af-24f9-bebe-2ba7-af35d9493ef9 Address:127.0.0.1:16606}]"
>     writer.go:29: 2020-02-23T02:46:43.372Z [INFO]  TestDNS_PreparedQueryNearIP.server.raft: entering follower state: follower="Node at 127.0.0.1:16606 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.372Z [INFO]  TestDNS_PreparedQueryNearIP.server.serf.wan: serf: EventMemberJoin: Node-549a90af-24f9-bebe-2ba7-af35d9493ef9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP.server.serf.lan: serf: EventMemberJoin: Node-549a90af-24f9-bebe-2ba7-af35d9493ef9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP.server: Handled event for server in area: event=member-join server=Node-549a90af-24f9-bebe-2ba7-af35d9493ef9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP.server: Adding LAN server: server="Node-549a90af-24f9-bebe-2ba7-af35d9493ef9 (Addr: tcp/127.0.0.1:16606) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP: Started DNS server: address=127.0.0.1:16601 network=udp
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP: Started DNS server: address=127.0.0.1:16601 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP: Started HTTP server: address=127.0.0.1:16602 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.373Z [INFO]  TestDNS_PreparedQueryNearIP: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.439Z [WARN]  TestDNS_PreparedQueryNearIP.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.439Z [INFO]  TestDNS_PreparedQueryNearIP.server.raft: entering candidate state: node="Node at 127.0.0.1:16606 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.444Z [DEBUG] TestDNS_PreparedQueryNearIP.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.444Z [DEBUG] TestDNS_PreparedQueryNearIP.server.raft: vote granted: from=549a90af-24f9-bebe-2ba7-af35d9493ef9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.444Z [INFO]  TestDNS_PreparedQueryNearIP.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.444Z [INFO]  TestDNS_PreparedQueryNearIP.server.raft: entering leader state: leader="Node at 127.0.0.1:16606 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.444Z [INFO]  TestDNS_PreparedQueryNearIP.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.444Z [INFO]  TestDNS_PreparedQueryNearIP.server: New leader elected: payload=Node-549a90af-24f9-bebe-2ba7-af35d9493ef9
>     writer.go:29: 2020-02-23T02:46:43.450Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.458Z [INFO]  TestDNS_PreparedQueryNearIP.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.458Z [INFO]  TestDNS_PreparedQueryNearIP.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.458Z [DEBUG] TestDNS_PreparedQueryNearIP.server: Skipping self join check for node since the cluster is too small: node=Node-549a90af-24f9-bebe-2ba7-af35d9493ef9
>     writer.go:29: 2020-02-23T02:46:43.458Z [INFO]  TestDNS_PreparedQueryNearIP.server: member joined, marking health alive: member=Node-549a90af-24f9-bebe-2ba7-af35d9493ef9
>     writer.go:29: 2020-02-23T02:46:43.636Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=117.696µs client=127.0.0.1:40051 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.661Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=115.81µs client=127.0.0.1:39488 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.679Z [DEBUG] TestDNS_PreparedQueryNearIP: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:43.680Z [INFO]  TestDNS_PreparedQueryNearIP: Synced node info
>     writer.go:29: 2020-02-23T02:46:43.680Z [DEBUG] TestDNS_PreparedQueryNearIP: Node info in sync
>     writer.go:29: 2020-02-23T02:46:43.686Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=104.885µs client=127.0.0.1:37217 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.712Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=142.949µs client=127.0.0.1:55933 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.737Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=157.046µs client=127.0.0.1:32980 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.763Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=166.201µs client=127.0.0.1:46293 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.788Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=132.938µs client=127.0.0.1:60929 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.814Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=177.396µs client=127.0.0.1:55760 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.840Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=185.611µs client=127.0.0.1:37114 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.865Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=126.612µs client=127.0.0.1:52574 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.893Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=177.101µs client=127.0.0.1:38248 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.918Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=184.323µs client=127.0.0.1:59678 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.944Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=146.788µs client=127.0.0.1:59187 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.969Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=144.145µs client=127.0.0.1:56442 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.995Z [DEBUG] TestDNS_PreparedQueryNearIP.dns: request served from client: name=some.query.we.like.query.consul. type=A class=IN latency=176.534µs client=127.0.0.1:40320 client_network=udp
>     writer.go:29: 2020-02-23T02:46:43.995Z [INFO]  TestDNS_PreparedQueryNearIP: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:43.995Z [INFO]  TestDNS_PreparedQueryNearIP.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:43.995Z [DEBUG] TestDNS_PreparedQueryNearIP.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.995Z [WARN]  TestDNS_PreparedQueryNearIP.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.995Z [DEBUG] TestDNS_PreparedQueryNearIP.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.997Z [WARN]  TestDNS_PreparedQueryNearIP.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:43.998Z [INFO]  TestDNS_PreparedQueryNearIP.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:43.998Z [INFO]  TestDNS_PreparedQueryNearIP: consul server down
>     writer.go:29: 2020-02-23T02:46:43.998Z [INFO]  TestDNS_PreparedQueryNearIP: shutdown complete
>     writer.go:29: 2020-02-23T02:46:43.998Z [INFO]  TestDNS_PreparedQueryNearIP: Stopping server: protocol=DNS address=127.0.0.1:16601 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.998Z [INFO]  TestDNS_PreparedQueryNearIP: Stopping server: protocol=DNS address=127.0.0.1:16601 network=udp
>     writer.go:29: 2020-02-23T02:46:43.998Z [INFO]  TestDNS_PreparedQueryNearIP: Stopping server: protocol=HTTP address=127.0.0.1:16602 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.999Z [INFO]  TestDNS_PreparedQueryNearIP: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:43.999Z [INFO]  TestDNS_PreparedQueryNearIP: Endpoints down
> === CONT  TestDNS_ServiceLookup_ServiceAddressIPV6
> === RUN   TestDNS_Lookup_TaggedIPAddresses/simple-ipv4
> === RUN   TestDNS_Lookup_TaggedIPAddresses/simple-ipv6
> === RUN   TestDNS_Lookup_TaggedIPAddresses/ipv4-with-tagged-ipv6
> === RUN   TestDNS_Lookup_TaggedIPAddresses/ipv6-with-tagged-ipv4
> --- PASS: TestDNS_Lookup_TaggedIPAddresses (0.36s)
>     writer.go:29: 2020-02-23T02:46:43.790Z [WARN]  TestDNS_Lookup_TaggedIPAddresses: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.790Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.791Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.811Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:94305821-f4be-955a-b777-edb1b91b0472 Address:127.0.0.1:16636}]"
>     writer.go:29: 2020-02-23T02:46:43.811Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.raft: entering follower state: follower="Node at 127.0.0.1:16636 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.814Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.serf.wan: serf: EventMemberJoin: Node-94305821-f4be-955a-b777-edb1b91b0472.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.828Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.serf.lan: serf: EventMemberJoin: Node-94305821-f4be-955a-b777-edb1b91b0472 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.828Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Started DNS server: address=127.0.0.1:16631 network=udp
>     writer.go:29: 2020-02-23T02:46:43.828Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server: Adding LAN server: server="Node-94305821-f4be-955a-b777-edb1b91b0472 (Addr: tcp/127.0.0.1:16636) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.828Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server: Handled event for server in area: event=member-join server=Node-94305821-f4be-955a-b777-edb1b91b0472.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.829Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Started DNS server: address=127.0.0.1:16631 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.829Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Started HTTP server: address=127.0.0.1:16632 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.829Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.876Z [WARN]  TestDNS_Lookup_TaggedIPAddresses.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.876Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.raft: entering candidate state: node="Node at 127.0.0.1:16636 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.879Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.879Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.server.raft: vote granted: from=94305821-f4be-955a-b777-edb1b91b0472 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.879Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.879Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.raft: entering leader state: leader="Node at 127.0.0.1:16636 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.879Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.879Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server: New leader elected: payload=Node-94305821-f4be-955a-b777-edb1b91b0472
>     writer.go:29: 2020-02-23T02:46:43.886Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.894Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.server: Skipping self join check for node since the cluster is too small: node=Node-94305821-f4be-955a-b777-edb1b91b0472
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server: member joined, marking health alive: member=Node-94305821-f4be-955a-b777-edb1b91b0472
>     writer.go:29: 2020-02-23T02:46:44.068Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.071Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.133Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=A class=IN latency=103.343µs client=127.0.0.1:42582 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.133Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=AAAA class=IN latency=63.417µs client=127.0.0.1:55252 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.133Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=A class=IN latency=68.006µs client=127.0.0.1:42601 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.134Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=AAAA class=IN latency=53.586µs client=127.0.0.1:52220 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.134Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=A class=IN latency=79.624µs client=127.0.0.1:59996 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.134Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=AAAA class=IN latency=46.055µs client=127.0.0.1:48769 client_network=udp
>     --- PASS: TestDNS_Lookup_TaggedIPAddresses/simple-ipv4 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.137Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=A class=IN latency=73.901µs client=127.0.0.1:44288 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.137Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=AAAA class=IN latency=54.731µs client=127.0.0.1:37268 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.137Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=A class=IN latency=64.045µs client=127.0.0.1:44921 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.137Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=AAAA class=IN latency=55.374µs client=127.0.0.1:60755 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.137Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=A class=IN latency=39.841µs client=127.0.0.1:53245 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.138Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=AAAA class=IN latency=37.33µs client=127.0.0.1:52511 client_network=udp
>     --- PASS: TestDNS_Lookup_TaggedIPAddresses/simple-ipv6 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.139Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=A class=IN latency=68.882µs client=127.0.0.1:54380 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.140Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=AAAA class=IN latency=55.034µs client=127.0.0.1:35954 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.140Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=A class=IN latency=58.182µs client=127.0.0.1:44737 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.140Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=AAAA class=IN latency=55.162µs client=127.0.0.1:37277 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.140Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=A class=IN latency=43.264µs client=127.0.0.1:36545 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.140Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=AAAA class=IN latency=35.849µs client=127.0.0.1:35838 client_network=udp
>     --- PASS: TestDNS_Lookup_TaggedIPAddresses/ipv4-with-tagged-ipv6 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.142Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=A class=IN latency=67.471µs client=127.0.0.1:46029 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.142Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=db.service.consul. type=AAAA class=IN latency=50.368µs client=127.0.0.1:35149 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.142Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=A class=IN latency=57.857µs client=127.0.0.1:47752 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.142Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=3548c38d-27ed-811f-6c4c-55f87e38212a.query.consul. type=AAAA class=IN latency=52.575µs client=127.0.0.1:60864 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.143Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=A class=IN latency=40.22µs client=127.0.0.1:48991 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.143Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.dns: request served from client: name=foo.node.consul. type=AAAA class=IN latency=35.385µs client=127.0.0.1:54851 client_network=udp
>     --- PASS: TestDNS_Lookup_TaggedIPAddresses/ipv6-with-tagged-ipv4 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.143Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.143Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.143Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.143Z [WARN]  TestDNS_Lookup_TaggedIPAddresses.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.143Z [DEBUG] TestDNS_Lookup_TaggedIPAddresses.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.144Z [WARN]  TestDNS_Lookup_TaggedIPAddresses.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: consul server down
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Stopping server: protocol=DNS address=127.0.0.1:16631 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Stopping server: protocol=DNS address=127.0.0.1:16631 network=udp
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Stopping server: protocol=HTTP address=127.0.0.1:16632 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.146Z [INFO]  TestDNS_Lookup_TaggedIPAddresses: Endpoints down
> === CONT  TestDNS_ServiceLookup_ServiceAddress_SRV
> === RUN   TestDNS_ServiceLookup_WanTranslation/service-wan-from-dc2
> === RUN   TestDNS_ServiceLookup_WanTranslation/node-addr-from-dc1
> === RUN   TestDNS_ServiceLookup_WanTranslation/node-wan-from-dc1
> === RUN   TestDNS_ServiceLookup_WanTranslation/service-addr-from-dc1
> === RUN   TestDNS_ServiceLookup_WanTranslation/service-wan-from-dc1
> === RUN   TestDNS_ServiceLookup_WanTranslation/node-addr-from-dc2
> === RUN   TestDNS_ServiceLookup_WanTranslation/node-wan-from-dc2
> === RUN   TestDNS_ServiceLookup_WanTranslation/service-addr-from-dc2
> --- PASS: TestDNS_ServiceLookup_WanTranslation (0.32s)
>     writer.go:29: 2020-02-23T02:46:43.879Z [WARN]  TestDNS_ServiceLookup_WanTranslation: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:43.879Z [WARN]  TestDNS_ServiceLookup_WanTranslation: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:43.879Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:43.880Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:43.891Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1fc8cc38-c383-cffd-ebe0-8ad688dc2b61 Address:127.0.0.1:16648}]"
>     writer.go:29: 2020-02-23T02:46:43.891Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: entering follower state: follower="Node at 127.0.0.1:16648 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:43.892Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: EventMemberJoin: Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.lan: serf: EventMemberJoin: Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Adding LAN server: server="Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61 (Addr: tcp/127.0.0.1:16648) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Handled event for server in area: event=member-join server=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Started DNS server: address=127.0.0.1:16643 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.894Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Started DNS server: address=127.0.0.1:16643 network=udp
>     writer.go:29: 2020-02-23T02:46:43.895Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Started HTTP server: address=127.0.0.1:16644 network=tcp
>     writer.go:29: 2020-02-23T02:46:43.895Z [INFO]  TestDNS_ServiceLookup_WanTranslation: started state syncer
>     writer.go:29: 2020-02-23T02:46:43.947Z [WARN]  TestDNS_ServiceLookup_WanTranslation.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:43.947Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: entering candidate state: node="Node at 127.0.0.1:16648 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:43.955Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:43.955Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server.raft: vote granted: from=1fc8cc38-c383-cffd-ebe0-8ad688dc2b61 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:43.955Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:43.955Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: entering leader state: leader="Node at 127.0.0.1:16648 [Leader]"
>     writer.go:29: 2020-02-23T02:46:43.955Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:43.955Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: New leader elected: payload=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61
>     writer.go:29: 2020-02-23T02:46:43.957Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:43.958Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:43.963Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:43.963Z [INFO]  TestDNS_ServiceLookup_WanTranslation.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:43.963Z [INFO]  TestDNS_ServiceLookup_WanTranslation.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:43.963Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.lan: serf: EventMemberUpdate: Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61
>     writer.go:29: 2020-02-23T02:46:43.964Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: EventMemberUpdate: Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61.dc1
>     writer.go:29: 2020-02-23T02:46:43.964Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Handled event for server in area: event=member-update server=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:43.971Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.977Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.977Z [INFO]  TestDNS_ServiceLookup_WanTranslation.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.977Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server: Skipping self join check for node since the cluster is too small: node=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61
>     writer.go:29: 2020-02-23T02:46:43.977Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: member joined, marking health alive: member=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61
>     writer.go:29: 2020-02-23T02:46:43.979Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server: Skipping self join check for node since the cluster is too small: node=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61
>     writer.go:29: 2020-02-23T02:46:44.052Z [WARN]  TestDNS_ServiceLookup_WanTranslation: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:44.052Z [WARN]  TestDNS_ServiceLookup_WanTranslation: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.052Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.052Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.061Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:906d7d1b-f435-72c4-5a2b-49e25cf300af Address:127.0.0.1:16660}]"
>     writer.go:29: 2020-02-23T02:46:44.061Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: entering follower state: follower="Node at 127.0.0.1:16660 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.061Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: EventMemberJoin: Node-906d7d1b-f435-72c4-5a2b-49e25cf300af.dc2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.062Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.lan: serf: EventMemberJoin: Node-906d7d1b-f435-72c4-5a2b-49e25cf300af 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.062Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Handled event for server in area: event=member-join server=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af.dc2 area=wan
>     writer.go:29: 2020-02-23T02:46:44.062Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Adding LAN server: server="Node-906d7d1b-f435-72c4-5a2b-49e25cf300af (Addr: tcp/127.0.0.1:16660) (DC: dc2)"
>     writer.go:29: 2020-02-23T02:46:44.063Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Started DNS server: address=127.0.0.1:16655 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.063Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Started DNS server: address=127.0.0.1:16655 network=udp
>     writer.go:29: 2020-02-23T02:46:44.063Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Started HTTP server: address=127.0.0.1:16656 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.063Z [INFO]  TestDNS_ServiceLookup_WanTranslation: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.119Z [WARN]  TestDNS_ServiceLookup_WanTranslation.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.119Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: entering candidate state: node="Node at 127.0.0.1:16660 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.122Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.122Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server.raft: vote granted: from=906d7d1b-f435-72c4-5a2b-49e25cf300af term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.122Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.122Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.raft: entering leader state: leader="Node at 127.0.0.1:16660 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.122Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.122Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: New leader elected: payload=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af
>     writer.go:29: 2020-02-23T02:46:44.124Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:44.125Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:44.128Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:44.128Z [INFO]  TestDNS_ServiceLookup_WanTranslation.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:44.128Z [INFO]  TestDNS_ServiceLookup_WanTranslation.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:44.128Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.lan: serf: EventMemberUpdate: Node-906d7d1b-f435-72c4-5a2b-49e25cf300af
>     writer.go:29: 2020-02-23T02:46:44.128Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: EventMemberUpdate: Node-906d7d1b-f435-72c4-5a2b-49e25cf300af.dc2
>     writer.go:29: 2020-02-23T02:46:44.130Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Handled event for server in area: event=member-update server=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af.dc2 area=wan
>     writer.go:29: 2020-02-23T02:46:44.134Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.141Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.141Z [INFO]  TestDNS_ServiceLookup_WanTranslation.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.141Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server: Skipping self join check for node since the cluster is too small: node=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af
>     writer.go:29: 2020-02-23T02:46:44.141Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: member joined, marking health alive: member=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af
>     writer.go:29: 2020-02-23T02:46:44.145Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server: Skipping self join check for node since the cluster is too small: node=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af
>     writer.go:29: 2020-02-23T02:46:44.151Z [INFO]  TestDNS_ServiceLookup_WanTranslation: (WAN) joining: wan_addresses=[127.0.0.1:16647]
>     writer.go:29: 2020-02-23T02:46:44.151Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16647
>     writer.go:29: 2020-02-23T02:46:44.151Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:39034
>     writer.go:29: 2020-02-23T02:46:44.151Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: EventMemberJoin: Node-906d7d1b-f435-72c4-5a2b-49e25cf300af.dc2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.151Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: EventMemberJoin: Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.152Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Handled event for server in area: event=member-join server=Node-906d7d1b-f435-72c4-5a2b-49e25cf300af.dc2 area=wan
>     writer.go:29: 2020-02-23T02:46:44.152Z [INFO]  TestDNS_ServiceLookup_WanTranslation: (WAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:44.152Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: Handled event for server in area: event=member-join server=Node-1fc8cc38-c383-cffd-ebe0-8ad688dc2b61.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.157Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=117.395µs client=127.0.0.1:47558 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.157Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=88.66µs client=127.0.0.1:34989 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.157Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=81.004µs client=127.0.0.1:41197 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.158Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=84.969µs client=127.0.0.1:60046 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/service-wan-from-dc2 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.160Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.160Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=556.068µs client=127.0.0.1:56174 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.161Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=323.187µs client=127.0.0.1:53584 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.161Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=264.23µs client=127.0.0.1:48767 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.161Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=239.528µs client=127.0.0.1:51080 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/node-addr-from-dc1 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.164Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=267.428µs client=127.0.0.1:49044 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.164Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=251.371µs client=127.0.0.1:34970 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.165Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=263.613µs client=127.0.0.1:60234 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.165Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=252.381µs client=127.0.0.1:58458 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/node-wan-from-dc1 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.167Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=276.558µs client=127.0.0.1:41762 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.168Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=270.635µs client=127.0.0.1:48875 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.168Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=239.126µs client=127.0.0.1:45293 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.168Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=234.491µs client=127.0.0.1:54141 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/service-addr-from-dc1 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.170Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=285.908µs client=127.0.0.1:53064 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.171Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=251.727µs client=127.0.0.1:58910 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.171Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=271.941µs client=127.0.0.1:39063 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.171Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=273.544µs client=127.0.0.1:48394 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/service-wan-from-dc1 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.174Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=95.291µs client=127.0.0.1:35753 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.174Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=68.14µs client=127.0.0.1:34127 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.174Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=94.142µs client=127.0.0.1:51381 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.174Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=84.177µs client=127.0.0.1:52989 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/node-addr-from-dc2 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.176Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=83.412µs client=127.0.0.1:53878 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.176Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=67.09µs client=127.0.0.1:53103 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.177Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=63.94µs client=127.0.0.1:47162 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.177Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=60.366µs client=127.0.0.1:38340 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/node-wan-from-dc2 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.181Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=SRV class=IN latency=101.175µs client=127.0.0.1:49857 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.181Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=SRV class=IN latency=68.061µs client=127.0.0.1:58069 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=db.service.dc2.consul. type=A class=IN latency=85.109µs client=127.0.0.1:50410 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.dns: request served from client: name=80d26762-eaa8-bffd-7cbf-a0ba69472635.query.dc2.consul. type=A class=IN latency=67.331µs client=127.0.0.1:39579 client_network=udp
>     --- PASS: TestDNS_ServiceLookup_WanTranslation/service-addr-from-dc2 (0.00s)
>     writer.go:29: 2020-02-23T02:46:44.182Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.182Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.182Z [WARN]  TestDNS_ServiceLookup_WanTranslation.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.182Z [ERROR] TestDNS_ServiceLookup_WanTranslation.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:44.182Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.184Z [WARN]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: consul server down
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Stopping server: protocol=DNS address=127.0.0.1:16655 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Stopping server: protocol=DNS address=127.0.0.1:16655 network=udp
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Stopping server: protocol=HTTP address=127.0.0.1:16656 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Endpoints down
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.185Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.185Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.185Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:44.185Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:44.185Z [WARN]  TestDNS_ServiceLookup_WanTranslation.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.186Z [ERROR] TestDNS_ServiceLookup_WanTranslation.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.186Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.186Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:44.186Z [DEBUG] TestDNS_ServiceLookup_WanTranslation.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:44.187Z [WARN]  TestDNS_ServiceLookup_WanTranslation.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: consul server down
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Stopping server: protocol=DNS address=127.0.0.1:16643 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Stopping server: protocol=DNS address=127.0.0.1:16643 network=udp
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Stopping server: protocol=HTTP address=127.0.0.1:16644 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.190Z [INFO]  TestDNS_ServiceLookup_WanTranslation: Endpoints down
> === CONT  TestDNS_ServiceLookup_ServiceAddress_A
> --- PASS: TestDNS_ServiceLookup_ServiceAddressIPV6 (0.22s)
>     writer.go:29: 2020-02-23T02:46:44.005Z [WARN]  TestDNS_ServiceLookup_ServiceAddressIPV6: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.005Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.006Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.017Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:21122491-30b2-e6ce-244a-17002ab3f3ef Address:127.0.0.1:16642}]"
>     writer.go:29: 2020-02-23T02:46:44.018Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: entering follower state: follower="Node at 127.0.0.1:16642 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.018Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.serf.wan: serf: EventMemberJoin: Node-21122491-30b2-e6ce-244a-17002ab3f3ef.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.019Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.serf.lan: serf: EventMemberJoin: Node-21122491-30b2-e6ce-244a-17002ab3f3ef 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.019Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server: Handled event for server in area: event=member-join server=Node-21122491-30b2-e6ce-244a-17002ab3f3ef.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.019Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server: Adding LAN server: server="Node-21122491-30b2-e6ce-244a-17002ab3f3ef (Addr: tcp/127.0.0.1:16642) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.019Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Started DNS server: address=127.0.0.1:16637 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.019Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Started DNS server: address=127.0.0.1:16637 network=udp
>     writer.go:29: 2020-02-23T02:46:44.020Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Started HTTP server: address=127.0.0.1:16638 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.020Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.082Z [WARN]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.082Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: entering candidate state: node="Node at 127.0.0.1:16642 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.085Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.085Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: vote granted: from=21122491-30b2-e6ce-244a-17002ab3f3ef term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.085Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.085Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.raft: entering leader state: leader="Node at 127.0.0.1:16642 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.086Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.086Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server: New leader elected: payload=Node-21122491-30b2-e6ce-244a-17002ab3f3ef
>     writer.go:29: 2020-02-23T02:46:44.097Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.104Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.104Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.104Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.server: Skipping self join check for node since the cluster is too small: node=Node-21122491-30b2-e6ce-244a-17002ab3f3ef
>     writer.go:29: 2020-02-23T02:46:44.104Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server: member joined, marking health alive: member=Node-21122491-30b2-e6ce-244a-17002ab3f3ef
>     writer.go:29: 2020-02-23T02:46:44.214Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=98.746µs client=127.0.0.1:40249 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.215Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.dns: request served from client: name=8d212b24-4a8e-4f2f-0f5f-e42c7d7274be.query.consul. type=SRV class=IN latency=65.658µs client=127.0.0.1:42289 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.215Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.215Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.215Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.215Z [WARN]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.215Z [ERROR] TestDNS_ServiceLookup_ServiceAddressIPV6.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.215Z [DEBUG] TestDNS_ServiceLookup_ServiceAddressIPV6.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.216Z [WARN]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: consul server down
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Stopping server: protocol=DNS address=127.0.0.1:16637 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Stopping server: protocol=DNS address=127.0.0.1:16637 network=udp
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Stopping server: protocol=HTTP address=127.0.0.1:16638 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.220Z [INFO]  TestDNS_ServiceLookup_ServiceAddressIPV6: Endpoints down
> === CONT  TestDNS_ExternalServiceToConsulCNAMENestedLookup
> --- PASS: TestDNS_ServiceLookup_ServiceAddress_SRV (0.12s)
>     writer.go:29: 2020-02-23T02:46:44.155Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_SRV: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.155Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.155Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.174Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e25d67f1-6e50-36ba-1fb0-32cb59281c4d Address:127.0.0.1:16654}]"
>     writer.go:29: 2020-02-23T02:46:44.175Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: entering follower state: follower="Node at 127.0.0.1:16654 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.175Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.serf.wan: serf: EventMemberJoin: Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.176Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.serf.lan: serf: EventMemberJoin: Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.176Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:44.176Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server: Adding LAN server: server="Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d (Addr: tcp/127.0.0.1:16654) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.176Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server: Handled event for server in area: event=member-join server=Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.176Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:44.177Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Started DNS server: address=127.0.0.1:16649 network=udp
>     writer.go:29: 2020-02-23T02:46:44.177Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Started DNS server: address=127.0.0.1:16649 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.178Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Started HTTP server: address=127.0.0.1:16650 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.178Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.229Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.229Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: entering candidate state: node="Node at 127.0.0.1:16654 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.233Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.233Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: vote granted: from=e25d67f1-6e50-36ba-1fb0-32cb59281c4d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.233Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.233Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.raft: entering leader state: leader="Node at 127.0.0.1:16654 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.233Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.233Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server: New leader elected: payload=Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d
>     writer.go:29: 2020-02-23T02:46:44.241Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.250Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.250Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.250Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.server: Skipping self join check for node since the cluster is too small: node=Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d
>     writer.go:29: 2020-02-23T02:46:44.250Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server: member joined, marking health alive: member=Node-e25d67f1-6e50-36ba-1fb0-32cb59281c4d
>     writer.go:29: 2020-02-23T02:46:44.266Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.dns: cname recurse RTT for name: name=www.google.com. rtt=54.371µs
>     writer.go:29: 2020-02-23T02:46:44.266Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=220.261µs client=127.0.0.1:49032 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.266Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.dns: cname recurse RTT for name: name=www.google.com. rtt=41.763µs
>     writer.go:29: 2020-02-23T02:46:44.266Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.dns: request served from client: name=59a22421-da8b-63c8-def4-6a426af051f7.query.consul. type=SRV class=IN latency=166.424µs client=127.0.0.1:36255 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.266Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.267Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.267Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.267Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.267Z [ERROR] TestDNS_ServiceLookup_ServiceAddress_SRV.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.267Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_SRV.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.268Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: consul server down
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Stopping server: protocol=DNS address=127.0.0.1:16649 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Stopping server: protocol=DNS address=127.0.0.1:16649 network=udp
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Stopping server: protocol=HTTP address=127.0.0.1:16650 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.270Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_SRV: Endpoints down
> === CONT  TestDNS_NSRecords_IPV6
> --- PASS: TestDNS_ServiceLookup_ServiceAddress_A (0.20s)
>     writer.go:29: 2020-02-23T02:46:44.198Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_A: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.198Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.199Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.217Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b4350ceb-b6ce-fb9e-8c75-e90391056d57 Address:127.0.0.1:16672}]"
>     writer.go:29: 2020-02-23T02:46:44.217Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.serf.wan: serf: EventMemberJoin: Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.217Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.serf.lan: serf: EventMemberJoin: Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Started DNS server: address=127.0.0.1:16667 network=udp
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.raft: entering follower state: follower="Node at 127.0.0.1:16672 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server: Adding LAN server: server="Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57 (Addr: tcp/127.0.0.1:16672) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server: Handled event for server in area: event=member-join server=Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Started DNS server: address=127.0.0.1:16667 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Started HTTP server: address=127.0.0.1:16668 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.218Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.285Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_A.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.285Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.raft: entering candidate state: node="Node at 127.0.0.1:16672 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.293Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.293Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.server.raft: vote granted: from=b4350ceb-b6ce-fb9e-8c75-e90391056d57 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.293Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.293Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.raft: entering leader state: leader="Node at 127.0.0.1:16672 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.294Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.294Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server: New leader elected: payload=Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57
>     writer.go:29: 2020-02-23T02:46:44.303Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.310Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.310Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.310Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.server: Skipping self join check for node since the cluster is too small: node=Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57
>     writer.go:29: 2020-02-23T02:46:44.310Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server: member joined, marking health alive: member=Node-b4350ceb-b6ce-fb9e-8c75-e90391056d57
>     writer.go:29: 2020-02-23T02:46:44.389Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=131.052µs client=127.0.0.1:43138 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.389Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.dns: request served from client: name=0707f6c2-d36b-358a-7a3d-f2946b2a79b9.query.consul. type=SRV class=IN latency=75.03µs client=127.0.0.1:51465 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.389Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.389Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.389Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.389Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_A.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.389Z [ERROR] TestDNS_ServiceLookup_ServiceAddress_A.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.389Z [DEBUG] TestDNS_ServiceLookup_ServiceAddress_A.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.391Z [WARN]  TestDNS_ServiceLookup_ServiceAddress_A.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: consul server down
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Stopping server: protocol=DNS address=127.0.0.1:16667 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Stopping server: protocol=DNS address=127.0.0.1:16667 network=udp
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Stopping server: protocol=HTTP address=127.0.0.1:16668 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.393Z [INFO]  TestDNS_ServiceLookup_ServiceAddress_A: Endpoints down
> === CONT  TestDNS_ExternalServiceToConsulCNAMELookup
> --- PASS: TestDNS_ExternalServiceToConsulCNAMENestedLookup (0.25s)
>     writer.go:29: 2020-02-23T02:46:44.228Z [WARN]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.228Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.229Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.242Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f2b994e2-a3cc-56ec-2d74-0b05e6cd82ff Address:127.0.0.1:16666}]"
>     writer.go:29: 2020-02-23T02:46:44.242Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.serf.wan: serf: EventMemberJoin: test-node.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.243Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.serf.lan: serf: EventMemberJoin: test-node 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.243Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Started DNS server: address=127.0.0.1:16661 network=udp
>     writer.go:29: 2020-02-23T02:46:44.243Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16666 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.243Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: Adding LAN server: server="test-node (Addr: tcp/127.0.0.1:16666) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.243Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: Handled event for server in area: event=member-join server=test-node.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.243Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Started DNS server: address=127.0.0.1:16661 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.244Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Started HTTP server: address=127.0.0.1:16662 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.244Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.306Z [WARN]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.306Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16666 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.311Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.311Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: vote granted: from=f2b994e2-a3cc-56ec-2d74-0b05e6cd82ff term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.311Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.311Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16666 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.311Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.311Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: New leader elected: payload=test-node
>     writer.go:29: 2020-02-23T02:46:44.320Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.334Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.334Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.334Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: Skipping self join check for node since the cluster is too small: node=test-node
>     writer.go:29: 2020-02-23T02:46:44.334Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: member joined, marking health alive: member=test-node
>     writer.go:29: 2020-02-23T02:46:44.426Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.430Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.466Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.dns: request served from client: name=alias2.service.consul. type=SRV class=IN latency=196.28µs client=127.0.0.1:49484 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.466Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.466Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.466Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.467Z [WARN]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.467Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMENestedLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.468Z [WARN]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Stopping server: protocol=DNS address=127.0.0.1:16661 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Stopping server: protocol=DNS address=127.0.0.1:16661 network=udp
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Stopping server: protocol=HTTP address=127.0.0.1:16662 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.471Z [INFO]  TestDNS_ExternalServiceToConsulCNAMENestedLookup: Endpoints down
> === CONT  TestDNS_InifiniteRecursion
> --- PASS: TestDNS_ExternalServiceToConsulCNAMELookup (0.15s)
>     writer.go:29: 2020-02-23T02:46:44.402Z [WARN]  TestDNS_ExternalServiceToConsulCNAMELookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.402Z [WARN]  TestDNS_ExternalServiceToConsulCNAMELookup: Node name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: node_name="test node"
>     writer.go:29: 2020-02-23T02:46:44.402Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.402Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.412Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:afc7137e-8a7e-8586-c8d0-bdc8c80c70a4 Address:127.0.0.1:16684}]"
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.serf.wan: serf: EventMemberJoin: test node.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.serf.lan: serf: EventMemberJoin: test node 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Started DNS server: address=127.0.0.1:16679 network=udp
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16684 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server: Adding LAN server: server="test node (Addr: tcp/127.0.0.1:16684) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server: Handled event for server in area: event=member-join server="test node.dc1" area=wan
>     writer.go:29: 2020-02-23T02:46:44.413Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Started DNS server: address=127.0.0.1:16679 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.414Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Started HTTP server: address=127.0.0.1:16680 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.414Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.473Z [WARN]  TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.473Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16684 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.476Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.476Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: vote granted: from=afc7137e-8a7e-8586-c8d0-bdc8c80c70a4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.476Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.476Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16684 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.476Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.476Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server: New leader elected: payload="test node"
>     writer.go:29: 2020-02-23T02:46:44.484Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.513Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.513Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.513Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.server: Skipping self join check for node since the cluster is too small: node="test node"
>     writer.go:29: 2020-02-23T02:46:44.513Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server: member joined, marking health alive: member="test node"
>     writer.go:29: 2020-02-23T02:46:44.535Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.dns: request served from client: name=alias.service.consul. type=SRV class=IN latency=166.691µs client=127.0.0.1:60339 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.535Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.dns: request served from client: name=alias.service.CoNsUl. type=SRV class=IN latency=114.445µs client=127.0.0.1:54173 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.535Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.535Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.535Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.535Z [WARN]  TestDNS_ExternalServiceToConsulCNAMELookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.535Z [ERROR] TestDNS_ExternalServiceToConsulCNAMELookup.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.535Z [DEBUG] TestDNS_ExternalServiceToConsulCNAMELookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.537Z [WARN]  TestDNS_ExternalServiceToConsulCNAMELookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.538Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.538Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: consul server down
>     writer.go:29: 2020-02-23T02:46:44.538Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.539Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Stopping server: protocol=DNS address=127.0.0.1:16679 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.539Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Stopping server: protocol=DNS address=127.0.0.1:16679 network=udp
>     writer.go:29: 2020-02-23T02:46:44.539Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Stopping server: protocol=HTTP address=127.0.0.1:16680 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.539Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.539Z [INFO]  TestDNS_ExternalServiceToConsulCNAMELookup: Endpoints down
> === CONT  TestDNS_ExternalServiceLookup
> --- PASS: TestDNS_NSRecords_IPV6 (0.30s)
>     writer.go:29: 2020-02-23T02:46:44.278Z [WARN]  TestDNS_NSRecords_IPV6: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.278Z [DEBUG] TestDNS_NSRecords_IPV6.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.278Z [DEBUG] TestDNS_NSRecords_IPV6.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.297Z [INFO]  TestDNS_NSRecords_IPV6.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:da19f8fa-1dde-8e55-7dc6-d53e153bd7b1 Address:[::1]:16678}]"
>     writer.go:29: 2020-02-23T02:46:44.297Z [INFO]  TestDNS_NSRecords_IPV6.server.serf.wan: serf: EventMemberJoin: server1.dc1 ::1
>     writer.go:29: 2020-02-23T02:46:44.298Z [INFO]  TestDNS_NSRecords_IPV6.server.serf.lan: serf: EventMemberJoin: server1 ::1
>     writer.go:29: 2020-02-23T02:46:44.298Z [INFO]  TestDNS_NSRecords_IPV6: Started DNS server: address=127.0.0.1:16673 network=udp
>     writer.go:29: 2020-02-23T02:46:44.298Z [INFO]  TestDNS_NSRecords_IPV6.server.raft: entering follower state: follower="Node at [::1]:16678 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.298Z [INFO]  TestDNS_NSRecords_IPV6.server: Adding LAN server: server="server1 (Addr: tcp/[::1]:16678) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.298Z [INFO]  TestDNS_NSRecords_IPV6.server: Handled event for server in area: event=member-join server=server1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.298Z [INFO]  TestDNS_NSRecords_IPV6: Started DNS server: address=127.0.0.1:16673 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.299Z [INFO]  TestDNS_NSRecords_IPV6: Started HTTP server: address=127.0.0.1:16674 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.299Z [INFO]  TestDNS_NSRecords_IPV6: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.336Z [WARN]  TestDNS_NSRecords_IPV6.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.336Z [INFO]  TestDNS_NSRecords_IPV6.server.raft: entering candidate state: node="Node at [::1]:16678 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.340Z [DEBUG] TestDNS_NSRecords_IPV6.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.340Z [DEBUG] TestDNS_NSRecords_IPV6.server.raft: vote granted: from=da19f8fa-1dde-8e55-7dc6-d53e153bd7b1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.340Z [INFO]  TestDNS_NSRecords_IPV6.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.340Z [INFO]  TestDNS_NSRecords_IPV6.server.raft: entering leader state: leader="Node at [::1]:16678 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.340Z [INFO]  TestDNS_NSRecords_IPV6.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.340Z [INFO]  TestDNS_NSRecords_IPV6.server: New leader elected: payload=server1
>     writer.go:29: 2020-02-23T02:46:44.347Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.354Z [INFO]  TestDNS_NSRecords_IPV6.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.354Z [INFO]  TestDNS_NSRecords_IPV6.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.354Z [DEBUG] TestDNS_NSRecords_IPV6.server: Skipping self join check for node since the cluster is too small: node=server1
>     writer.go:29: 2020-02-23T02:46:44.354Z [INFO]  TestDNS_NSRecords_IPV6.server: member joined, marking health alive: member=server1
>     writer.go:29: 2020-02-23T02:46:44.514Z [DEBUG] TestDNS_NSRecords_IPV6: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.519Z [INFO]  TestDNS_NSRecords_IPV6: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.519Z [DEBUG] TestDNS_NSRecords_IPV6: Node info in sync
>     writer.go:29: 2020-02-23T02:46:44.563Z [DEBUG] TestDNS_NSRecords_IPV6.dns: request served from client: name=server1.node.dc1.consul. type=NS class=IN latency=99.899µs client=127.0.0.1:42502 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.563Z [INFO]  TestDNS_NSRecords_IPV6: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.563Z [INFO]  TestDNS_NSRecords_IPV6.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.563Z [DEBUG] TestDNS_NSRecords_IPV6.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.563Z [WARN]  TestDNS_NSRecords_IPV6.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.563Z [DEBUG] TestDNS_NSRecords_IPV6.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.565Z [WARN]  TestDNS_NSRecords_IPV6.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: consul server down
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: Stopping server: protocol=DNS address=127.0.0.1:16673 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: Stopping server: protocol=DNS address=127.0.0.1:16673 network=udp
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: Stopping server: protocol=HTTP address=127.0.0.1:16674 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.568Z [INFO]  TestDNS_NSRecords_IPV6: Endpoints down
> === CONT  TestDNS_ConnectServiceLookup
> --- PASS: TestDNS_InifiniteRecursion (0.31s)
>     writer.go:29: 2020-02-23T02:46:44.479Z [WARN]  TestDNS_InifiniteRecursion: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.479Z [WARN]  TestDNS_InifiniteRecursion: Node name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: node_name="test node"
>     writer.go:29: 2020-02-23T02:46:44.479Z [DEBUG] TestDNS_InifiniteRecursion.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.480Z [DEBUG] TestDNS_InifiniteRecursion.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.517Z [INFO]  TestDNS_InifiniteRecursion.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:40e2784c-d378-244e-75e1-df8972f7fdb3 Address:127.0.0.1:16690}]"
>     writer.go:29: 2020-02-23T02:46:44.517Z [INFO]  TestDNS_InifiniteRecursion.server.serf.wan: serf: EventMemberJoin: test node.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.518Z [INFO]  TestDNS_InifiniteRecursion.server.serf.lan: serf: EventMemberJoin: test node 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.518Z [INFO]  TestDNS_InifiniteRecursion: Started DNS server: address=127.0.0.1:16685 network=udp
>     writer.go:29: 2020-02-23T02:46:44.518Z [INFO]  TestDNS_InifiniteRecursion.server.raft: entering follower state: follower="Node at 127.0.0.1:16690 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.518Z [INFO]  TestDNS_InifiniteRecursion.server: Adding LAN server: server="test node (Addr: tcp/127.0.0.1:16690) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.518Z [INFO]  TestDNS_InifiniteRecursion.server: Handled event for server in area: event=member-join server="test node.dc1" area=wan
>     writer.go:29: 2020-02-23T02:46:44.518Z [INFO]  TestDNS_InifiniteRecursion: Started DNS server: address=127.0.0.1:16685 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.519Z [INFO]  TestDNS_InifiniteRecursion: Started HTTP server: address=127.0.0.1:16686 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.519Z [INFO]  TestDNS_InifiniteRecursion: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.578Z [WARN]  TestDNS_InifiniteRecursion.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.578Z [INFO]  TestDNS_InifiniteRecursion.server.raft: entering candidate state: node="Node at 127.0.0.1:16690 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.582Z [DEBUG] TestDNS_InifiniteRecursion.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.582Z [DEBUG] TestDNS_InifiniteRecursion.server.raft: vote granted: from=40e2784c-d378-244e-75e1-df8972f7fdb3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.582Z [INFO]  TestDNS_InifiniteRecursion.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.582Z [INFO]  TestDNS_InifiniteRecursion.server.raft: entering leader state: leader="Node at 127.0.0.1:16690 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.582Z [INFO]  TestDNS_InifiniteRecursion.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.582Z [INFO]  TestDNS_InifiniteRecursion.server: New leader elected: payload="test node"
>     writer.go:29: 2020-02-23T02:46:44.593Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.601Z [INFO]  TestDNS_InifiniteRecursion.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.601Z [INFO]  TestDNS_InifiniteRecursion.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.601Z [DEBUG] TestDNS_InifiniteRecursion.server: Skipping self join check for node since the cluster is too small: node="test node"
>     writer.go:29: 2020-02-23T02:46:44.601Z [INFO]  TestDNS_InifiniteRecursion.server: member joined, marking health alive: member="test node"
>     writer.go:29: 2020-02-23T02:46:44.627Z [DEBUG] TestDNS_InifiniteRecursion: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.630Z [INFO]  TestDNS_InifiniteRecursion: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.630Z [DEBUG] TestDNS_InifiniteRecursion: Node info in sync
>     writer.go:29: 2020-02-23T02:46:44.662Z [DEBUG] TestDNS_InifiniteRecursion: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.662Z [DEBUG] TestDNS_InifiniteRecursion: Node info in sync
>     writer.go:29: 2020-02-23T02:46:44.777Z [ERROR] TestDNS_InifiniteRecursion.dns: Infinite recursion detected for name, won't perform any CNAME resolution.: name=web.service.consul.
>     writer.go:29: 2020-02-23T02:46:44.777Z [DEBUG] TestDNS_InifiniteRecursion.dns: request served from client: name=web.service.consul. type=A class=IN latency=276.944µs client=127.0.0.1:54375 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.777Z [INFO]  TestDNS_InifiniteRecursion: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.777Z [INFO]  TestDNS_InifiniteRecursion.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.777Z [DEBUG] TestDNS_InifiniteRecursion.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.777Z [WARN]  TestDNS_InifiniteRecursion.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.777Z [DEBUG] TestDNS_InifiniteRecursion.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.779Z [WARN]  TestDNS_InifiniteRecursion.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.780Z [INFO]  TestDNS_InifiniteRecursion.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: consul server down
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: Stopping server: protocol=DNS address=127.0.0.1:16685 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: Stopping server: protocol=DNS address=127.0.0.1:16685 network=udp
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: Stopping server: protocol=HTTP address=127.0.0.1:16686 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.781Z [INFO]  TestDNS_InifiniteRecursion: Endpoints down
> === CONT  TestDNS_ServiceLookupWithInternalServiceAddress
> --- PASS: TestDNS_ExternalServiceLookup (0.34s)
>     writer.go:29: 2020-02-23T02:46:44.545Z [WARN]  TestDNS_ExternalServiceLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.546Z [DEBUG] TestDNS_ExternalServiceLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.546Z [DEBUG] TestDNS_ExternalServiceLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.559Z [INFO]  TestDNS_ExternalServiceLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7be06d77-6f6d-ae95-62ca-c6b9e9266c3f Address:127.0.0.1:16696}]"
>     writer.go:29: 2020-02-23T02:46:44.559Z [INFO]  TestDNS_ExternalServiceLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16696 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.559Z [INFO]  TestDNS_ExternalServiceLookup.server.serf.wan: serf: EventMemberJoin: Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.559Z [INFO]  TestDNS_ExternalServiceLookup.server.serf.lan: serf: EventMemberJoin: Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.560Z [INFO]  TestDNS_ExternalServiceLookup: Started DNS server: address=127.0.0.1:16691 network=udp
>     writer.go:29: 2020-02-23T02:46:44.560Z [INFO]  TestDNS_ExternalServiceLookup.server: Adding LAN server: server="Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.560Z [INFO]  TestDNS_ExternalServiceLookup.server: Handled event for server in area: event=member-join server=Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.560Z [INFO]  TestDNS_ExternalServiceLookup: Started DNS server: address=127.0.0.1:16691 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.560Z [INFO]  TestDNS_ExternalServiceLookup: Started HTTP server: address=127.0.0.1:16692 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.560Z [INFO]  TestDNS_ExternalServiceLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.620Z [WARN]  TestDNS_ExternalServiceLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.621Z [INFO]  TestDNS_ExternalServiceLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16696 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.624Z [DEBUG] TestDNS_ExternalServiceLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.624Z [DEBUG] TestDNS_ExternalServiceLookup.server.raft: vote granted: from=7be06d77-6f6d-ae95-62ca-c6b9e9266c3f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.624Z [INFO]  TestDNS_ExternalServiceLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.624Z [INFO]  TestDNS_ExternalServiceLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16696 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.624Z [INFO]  TestDNS_ExternalServiceLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.624Z [INFO]  TestDNS_ExternalServiceLookup.server: New leader elected: payload=Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f
>     writer.go:29: 2020-02-23T02:46:44.631Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.639Z [INFO]  TestDNS_ExternalServiceLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.639Z [INFO]  TestDNS_ExternalServiceLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.639Z [DEBUG] TestDNS_ExternalServiceLookup.server: Skipping self join check for node since the cluster is too small: node=Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f
>     writer.go:29: 2020-02-23T02:46:44.639Z [INFO]  TestDNS_ExternalServiceLookup.server: member joined, marking health alive: member=Node-7be06d77-6f6d-ae95-62ca-c6b9e9266c3f
>     writer.go:29: 2020-02-23T02:46:44.849Z [DEBUG] TestDNS_ExternalServiceLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.853Z [INFO]  TestDNS_ExternalServiceLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.872Z [DEBUG] TestDNS_ExternalServiceLookup.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=100.881µs client=127.0.0.1:37030 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.873Z [INFO]  TestDNS_ExternalServiceLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.873Z [INFO]  TestDNS_ExternalServiceLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.873Z [DEBUG] TestDNS_ExternalServiceLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.873Z [WARN]  TestDNS_ExternalServiceLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.873Z [DEBUG] TestDNS_ExternalServiceLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.874Z [WARN]  TestDNS_ExternalServiceLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16691 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16691 network=udp
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: Stopping server: protocol=HTTP address=127.0.0.1:16692 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.876Z [INFO]  TestDNS_ExternalServiceLookup: Endpoints down
> === CONT  TestDNS_ServiceLookup
> --- PASS: TestDNS_ServiceLookupWithInternalServiceAddress (0.12s)
>     writer.go:29: 2020-02-23T02:46:44.789Z [WARN]  TestDNS_ServiceLookupWithInternalServiceAddress: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.789Z [WARN]  TestDNS_ServiceLookupWithInternalServiceAddress: Node name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: node_name=my.test-node
>     writer.go:29: 2020-02-23T02:46:44.789Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.789Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.801Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e446f232-918d-e1ff-b153-db278a746b1c Address:127.0.0.1:16702}]"
>     writer.go:29: 2020-02-23T02:46:44.801Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: entering follower state: follower="Node at 127.0.0.1:16702 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.801Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.serf.wan: serf: EventMemberJoin: my.test-node.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.802Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.serf.lan: serf: EventMemberJoin: my.test-node 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.802Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server: Handled event for server in area: event=member-join server=my.test-node.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.802Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server: Adding LAN server: server="my.test-node (Addr: tcp/127.0.0.1:16702) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.807Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Started DNS server: address=127.0.0.1:16697 network=udp
>     writer.go:29: 2020-02-23T02:46:44.807Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Started DNS server: address=127.0.0.1:16697 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.808Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Started HTTP server: address=127.0.0.1:16698 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.808Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.869Z [WARN]  TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.869Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: entering candidate state: node="Node at 127.0.0.1:16702 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.873Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.873Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: vote granted: from=e446f232-918d-e1ff-b153-db278a746b1c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.873Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.873Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.raft: entering leader state: leader="Node at 127.0.0.1:16702 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.873Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.873Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server: New leader elected: payload=my.test-node
>     writer.go:29: 2020-02-23T02:46:44.882Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.890Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.890Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.890Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.server: Skipping self join check for node since the cluster is too small: node=my.test-node
>     writer.go:29: 2020-02-23T02:46:44.890Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server: member joined, marking health alive: member=my.test-node
>     writer.go:29: 2020-02-23T02:46:44.900Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=101.434µs client=127.0.0.1:56385 client_network=udp
>     writer.go:29: 2020-02-23T02:46:44.900Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:44.900Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:44.900Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.900Z [WARN]  TestDNS_ServiceLookupWithInternalServiceAddress.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.900Z [ERROR] TestDNS_ServiceLookupWithInternalServiceAddress.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:44.900Z [DEBUG] TestDNS_ServiceLookupWithInternalServiceAddress.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.902Z [WARN]  TestDNS_ServiceLookupWithInternalServiceAddress.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: consul server down
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: shutdown complete
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Stopping server: protocol=DNS address=127.0.0.1:16697 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Stopping server: protocol=DNS address=127.0.0.1:16697 network=udp
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Stopping server: protocol=HTTP address=127.0.0.1:16698 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:44.904Z [INFO]  TestDNS_ServiceLookupWithInternalServiceAddress: Endpoints down
> === CONT  TestDNS_ServiceLookupMultiAddrNoCNAME
> --- PASS: TestDNS_ConnectServiceLookup (0.45s)
>     writer.go:29: 2020-02-23T02:46:44.576Z [WARN]  TestDNS_ConnectServiceLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.576Z [DEBUG] TestDNS_ConnectServiceLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.577Z [DEBUG] TestDNS_ConnectServiceLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.589Z [INFO]  TestDNS_ConnectServiceLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:dab320e4-bfa1-d40a-4210-c71e996f078c Address:127.0.0.1:16708}]"
>     writer.go:29: 2020-02-23T02:46:44.589Z [INFO]  TestDNS_ConnectServiceLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16708 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.589Z [INFO]  TestDNS_ConnectServiceLookup.server.serf.wan: serf: EventMemberJoin: Node-dab320e4-bfa1-d40a-4210-c71e996f078c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup.server.serf.lan: serf: EventMemberJoin: Node-dab320e4-bfa1-d40a-4210-c71e996f078c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup.server: Adding LAN server: server="Node-dab320e4-bfa1-d40a-4210-c71e996f078c (Addr: tcp/127.0.0.1:16708) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup: Started DNS server: address=127.0.0.1:16703 network=udp
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup.server: Handled event for server in area: event=member-join server=Node-dab320e4-bfa1-d40a-4210-c71e996f078c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup: Started DNS server: address=127.0.0.1:16703 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup: Started HTTP server: address=127.0.0.1:16704 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.590Z [INFO]  TestDNS_ConnectServiceLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.631Z [WARN]  TestDNS_ConnectServiceLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.631Z [INFO]  TestDNS_ConnectServiceLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16708 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.636Z [DEBUG] TestDNS_ConnectServiceLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.636Z [DEBUG] TestDNS_ConnectServiceLookup.server.raft: vote granted: from=dab320e4-bfa1-d40a-4210-c71e996f078c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.636Z [INFO]  TestDNS_ConnectServiceLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.636Z [INFO]  TestDNS_ConnectServiceLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16708 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.636Z [INFO]  TestDNS_ConnectServiceLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.636Z [INFO]  TestDNS_ConnectServiceLookup.server: New leader elected: payload=Node-dab320e4-bfa1-d40a-4210-c71e996f078c
>     writer.go:29: 2020-02-23T02:46:44.644Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.653Z [INFO]  TestDNS_ConnectServiceLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.653Z [INFO]  TestDNS_ConnectServiceLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.653Z [DEBUG] TestDNS_ConnectServiceLookup.server: Skipping self join check for node since the cluster is too small: node=Node-dab320e4-bfa1-d40a-4210-c71e996f078c
>     writer.go:29: 2020-02-23T02:46:44.653Z [INFO]  TestDNS_ConnectServiceLookup.server: member joined, marking health alive: member=Node-dab320e4-bfa1-d40a-4210-c71e996f078c
>     writer.go:29: 2020-02-23T02:46:44.693Z [DEBUG] TestDNS_ConnectServiceLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.697Z [INFO]  TestDNS_ConnectServiceLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.697Z [DEBUG] TestDNS_ConnectServiceLookup: Node info in sync
>     writer.go:29: 2020-02-23T02:46:45.013Z [DEBUG] TestDNS_ConnectServiceLookup.dns: request served from client: name=db.connect.consul. type=SRV class=IN latency=111.05µs client=127.0.0.1:41318 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.013Z [INFO]  TestDNS_ConnectServiceLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.013Z [INFO]  TestDNS_ConnectServiceLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.013Z [DEBUG] TestDNS_ConnectServiceLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.013Z [WARN]  TestDNS_ConnectServiceLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.013Z [DEBUG] TestDNS_ConnectServiceLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.015Z [WARN]  TestDNS_ConnectServiceLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16703 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16703 network=udp
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: Stopping server: protocol=HTTP address=127.0.0.1:16704 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.017Z [INFO]  TestDNS_ConnectServiceLookup: Endpoints down
> === CONT  TestDNS_ServiceLookupPreferNoCNAME
> --- PASS: TestDNS_ServiceLookupMultiAddrNoCNAME (0.23s)
>     writer.go:29: 2020-02-23T02:46:44.912Z [WARN]  TestDNS_ServiceLookupMultiAddrNoCNAME: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.912Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.912Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.922Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4e18185e-2aac-a2c6-af42-847b39b6f6fd Address:127.0.0.1:16720}]"
>     writer.go:29: 2020-02-23T02:46:44.922Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.serf.wan: serf: EventMemberJoin: Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.923Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.serf.lan: serf: EventMemberJoin: Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.923Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Started DNS server: address=127.0.0.1:16715 network=udp
>     writer.go:29: 2020-02-23T02:46:44.923Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: entering follower state: follower="Node at 127.0.0.1:16720 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.923Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server: Adding LAN server: server="Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd (Addr: tcp/127.0.0.1:16720) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.923Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server: Handled event for server in area: event=member-join server=Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.923Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Started DNS server: address=127.0.0.1:16715 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.924Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Started HTTP server: address=127.0.0.1:16716 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.924Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.963Z [WARN]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.963Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: entering candidate state: node="Node at 127.0.0.1:16720 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.966Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.966Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: vote granted: from=4e18185e-2aac-a2c6-af42-847b39b6f6fd term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.966Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.966Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.raft: entering leader state: leader="Node at 127.0.0.1:16720 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.966Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.966Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server: New leader elected: payload=Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd
>     writer.go:29: 2020-02-23T02:46:44.975Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.000Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.000Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.000Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.server: Skipping self join check for node since the cluster is too small: node=Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd
>     writer.go:29: 2020-02-23T02:46:45.000Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server: member joined, marking health alive: member=Node-4e18185e-2aac-a2c6-af42-847b39b6f6fd
>     writer.go:29: 2020-02-23T02:46:45.026Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:45.030Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Synced node info
>     writer.go:29: 2020-02-23T02:46:45.030Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME: Node info in sync
>     writer.go:29: 2020-02-23T02:46:45.129Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.dns: request served from client: name=db.service.consul. type=ANY class=IN latency=189.056µs client=127.0.0.1:49765 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.129Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.129Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.129Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.129Z [WARN]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.129Z [DEBUG] TestDNS_ServiceLookupMultiAddrNoCNAME.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.131Z [WARN]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.133Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: consul server down
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Stopping server: protocol=DNS address=127.0.0.1:16715 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Stopping server: protocol=DNS address=127.0.0.1:16715 network=udp
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Stopping server: protocol=HTTP address=127.0.0.1:16716 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.134Z [INFO]  TestDNS_ServiceLookupMultiAddrNoCNAME: Endpoints down
> === CONT  TestDNS_ServiceReverseLookupNodeAddress
> --- PASS: TestDNS_ServiceLookupPreferNoCNAME (0.16s)
>     writer.go:29: 2020-02-23T02:46:45.024Z [WARN]  TestDNS_ServiceLookupPreferNoCNAME: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.025Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.025Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.036Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:02867a4e-5360-3d5e-54c9-cc74560e3528 Address:127.0.0.1:16726}]"
>     writer.go:29: 2020-02-23T02:46:45.036Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.serf.wan: serf: EventMemberJoin: Node-02867a4e-5360-3d5e-54c9-cc74560e3528.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.037Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.serf.lan: serf: EventMemberJoin: Node-02867a4e-5360-3d5e-54c9-cc74560e3528 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.037Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Started DNS server: address=127.0.0.1:16721 network=udp
>     writer.go:29: 2020-02-23T02:46:45.037Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.raft: entering follower state: follower="Node at 127.0.0.1:16726 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.037Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server: Adding LAN server: server="Node-02867a4e-5360-3d5e-54c9-cc74560e3528 (Addr: tcp/127.0.0.1:16726) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.037Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server: Handled event for server in area: event=member-join server=Node-02867a4e-5360-3d5e-54c9-cc74560e3528.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.037Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Started DNS server: address=127.0.0.1:16721 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.038Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Started HTTP server: address=127.0.0.1:16722 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.038Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.079Z [WARN]  TestDNS_ServiceLookupPreferNoCNAME.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.079Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.raft: entering candidate state: node="Node at 127.0.0.1:16726 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.082Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.082Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.server.raft: vote granted: from=02867a4e-5360-3d5e-54c9-cc74560e3528 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.082Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.082Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.raft: entering leader state: leader="Node at 127.0.0.1:16726 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.082Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.082Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server: New leader elected: payload=Node-02867a4e-5360-3d5e-54c9-cc74560e3528
>     writer.go:29: 2020-02-23T02:46:45.089Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.097Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.097Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.097Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.server: Skipping self join check for node since the cluster is too small: node=Node-02867a4e-5360-3d5e-54c9-cc74560e3528
>     writer.go:29: 2020-02-23T02:46:45.097Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server: member joined, marking health alive: member=Node-02867a4e-5360-3d5e-54c9-cc74560e3528
>     writer.go:29: 2020-02-23T02:46:45.145Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:45.148Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Synced node info
>     writer.go:29: 2020-02-23T02:46:45.171Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.dns: request served from client: name=db.service.consul. type=ANY class=IN latency=197.953µs client=127.0.0.1:58554 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.171Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.171Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.171Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.171Z [WARN]  TestDNS_ServiceLookupPreferNoCNAME.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.171Z [DEBUG] TestDNS_ServiceLookupPreferNoCNAME.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.173Z [WARN]  TestDNS_ServiceLookupPreferNoCNAME.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: consul server down
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Stopping server: protocol=DNS address=127.0.0.1:16721 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Stopping server: protocol=DNS address=127.0.0.1:16721 network=udp
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Stopping server: protocol=HTTP address=127.0.0.1:16722 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.175Z [INFO]  TestDNS_ServiceLookupPreferNoCNAME: Endpoints down
> === CONT  TestDNS_SOA_Settings
> --- PASS: TestDNS_ServiceLookup (0.40s)
>     writer.go:29: 2020-02-23T02:46:44.887Z [WARN]  TestDNS_ServiceLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:44.887Z [DEBUG] TestDNS_ServiceLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:44.888Z [DEBUG] TestDNS_ServiceLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:44.913Z [INFO]  TestDNS_ServiceLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:99a48ee8-ee5e-d25b-dd96-350542718160 Address:127.0.0.1:16714}]"
>     writer.go:29: 2020-02-23T02:46:44.913Z [INFO]  TestDNS_ServiceLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16714 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:44.914Z [INFO]  TestDNS_ServiceLookup.server.serf.wan: serf: EventMemberJoin: Node-99a48ee8-ee5e-d25b-dd96-350542718160.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.915Z [INFO]  TestDNS_ServiceLookup.server.serf.lan: serf: EventMemberJoin: Node-99a48ee8-ee5e-d25b-dd96-350542718160 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:44.915Z [INFO]  TestDNS_ServiceLookup.server: Adding LAN server: server="Node-99a48ee8-ee5e-d25b-dd96-350542718160 (Addr: tcp/127.0.0.1:16714) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:44.915Z [INFO]  TestDNS_ServiceLookup.server: Handled event for server in area: event=member-join server=Node-99a48ee8-ee5e-d25b-dd96-350542718160.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:44.916Z [INFO]  TestDNS_ServiceLookup: Started DNS server: address=127.0.0.1:16709 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.916Z [INFO]  TestDNS_ServiceLookup: Started DNS server: address=127.0.0.1:16709 network=udp
>     writer.go:29: 2020-02-23T02:46:44.916Z [INFO]  TestDNS_ServiceLookup: Started HTTP server: address=127.0.0.1:16710 network=tcp
>     writer.go:29: 2020-02-23T02:46:44.916Z [INFO]  TestDNS_ServiceLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:44.965Z [WARN]  TestDNS_ServiceLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:44.966Z [INFO]  TestDNS_ServiceLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16714 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:44.969Z [DEBUG] TestDNS_ServiceLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:44.969Z [DEBUG] TestDNS_ServiceLookup.server.raft: vote granted: from=99a48ee8-ee5e-d25b-dd96-350542718160 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:44.969Z [INFO]  TestDNS_ServiceLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:44.969Z [INFO]  TestDNS_ServiceLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16714 [Leader]"
>     writer.go:29: 2020-02-23T02:46:44.969Z [INFO]  TestDNS_ServiceLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:44.969Z [INFO]  TestDNS_ServiceLookup.server: New leader elected: payload=Node-99a48ee8-ee5e-d25b-dd96-350542718160
>     writer.go:29: 2020-02-23T02:46:44.977Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:44.991Z [INFO]  TestDNS_ServiceLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:44.991Z [INFO]  TestDNS_ServiceLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:44.991Z [DEBUG] TestDNS_ServiceLookup.server: Skipping self join check for node since the cluster is too small: node=Node-99a48ee8-ee5e-d25b-dd96-350542718160
>     writer.go:29: 2020-02-23T02:46:44.991Z [INFO]  TestDNS_ServiceLookup.server: member joined, marking health alive: member=Node-99a48ee8-ee5e-d25b-dd96-350542718160
>     writer.go:29: 2020-02-23T02:46:45.011Z [DEBUG] TestDNS_ServiceLookup: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:45.014Z [INFO]  TestDNS_ServiceLookup: Synced node info
>     writer.go:29: 2020-02-23T02:46:45.269Z [DEBUG] TestDNS_ServiceLookup.dns: request served from client: name=db.service.consul. type=SRV class=IN latency=119.228µs client=127.0.0.1:38498 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.270Z [DEBUG] TestDNS_ServiceLookup.dns: request served from client: name=338d873c-24d4-a0dd-e70f-971dd0ffc213.query.consul. type=SRV class=IN latency=72.76µs client=127.0.0.1:39500 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.270Z [DEBUG] TestDNS_ServiceLookup.dns: request served from client: name=nodb.service.consul. type=SRV class=IN latency=52.53µs client=127.0.0.1:39733 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.270Z [DEBUG] TestDNS_ServiceLookup.dns: request served from client: name=nope.query.consul. type=SRV class=IN latency=52.034µs client=127.0.0.1:34026 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.270Z [INFO]  TestDNS_ServiceLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.270Z [INFO]  TestDNS_ServiceLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.270Z [DEBUG] TestDNS_ServiceLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.270Z [WARN]  TestDNS_ServiceLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.270Z [DEBUG] TestDNS_ServiceLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.272Z [WARN]  TestDNS_ServiceLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16709 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: Stopping server: protocol=DNS address=127.0.0.1:16709 network=udp
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: Stopping server: protocol=HTTP address=127.0.0.1:16710 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.273Z [INFO]  TestDNS_ServiceLookup: Endpoints down
> === CONT  TestDNS_ServiceReverseLookup_CustomDomain
> --- PASS: TestDNS_ServiceReverseLookup_CustomDomain (0.28s)
>     writer.go:29: 2020-02-23T02:46:45.283Z [WARN]  TestDNS_ServiceReverseLookup_CustomDomain: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.283Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.285Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.298Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:aa40b53a-2a56-8a6e-a84c-685b43d613f6 Address:127.0.0.1:16744}]"
>     writer.go:29: 2020-02-23T02:46:45.299Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.raft: entering follower state: follower="Node at 127.0.0.1:16744 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.299Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.serf.wan: serf: EventMemberJoin: Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.300Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.serf.lan: serf: EventMemberJoin: Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.300Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server: Handled event for server in area: event=member-join server=Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.300Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server: Adding LAN server: server="Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6 (Addr: tcp/127.0.0.1:16744) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.300Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Started DNS server: address=127.0.0.1:16739 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.300Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Started DNS server: address=127.0.0.1:16739 network=udp
>     writer.go:29: 2020-02-23T02:46:45.301Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Started HTTP server: address=127.0.0.1:16740 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.301Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.360Z [WARN]  TestDNS_ServiceReverseLookup_CustomDomain.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.360Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.raft: entering candidate state: node="Node at 127.0.0.1:16744 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.364Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.364Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.server.raft: vote granted: from=aa40b53a-2a56-8a6e-a84c-685b43d613f6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.364Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.364Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.raft: entering leader state: leader="Node at 127.0.0.1:16744 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.364Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.364Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server: New leader elected: payload=Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6
>     writer.go:29: 2020-02-23T02:46:45.371Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.379Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.379Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.379Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.server: Skipping self join check for node since the cluster is too small: node=Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6
>     writer.go:29: 2020-02-23T02:46:45.379Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server: member joined, marking health alive: member=Node-aa40b53a-2a56-8a6e-a84c-685b43d613f6
>     writer.go:29: 2020-02-23T02:46:45.538Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.dns: request served from client: question="{2.0.0.127.in-addr.arpa. 255 1}" latency=2.792835ms client=127.0.0.1:58040 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.543Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.543Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.543Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.543Z [WARN]  TestDNS_ServiceReverseLookup_CustomDomain.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.543Z [ERROR] TestDNS_ServiceReverseLookup_CustomDomain.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:45.543Z [DEBUG] TestDNS_ServiceReverseLookup_CustomDomain.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.548Z [WARN]  TestDNS_ServiceReverseLookup_CustomDomain.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: consul server down
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Stopping server: protocol=DNS address=127.0.0.1:16739 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Stopping server: protocol=DNS address=127.0.0.1:16739 network=udp
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Stopping server: protocol=HTTP address=127.0.0.1:16740 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.549Z [INFO]  TestDNS_ServiceReverseLookup_CustomDomain: Endpoints down
> === CONT  TestDNS_ServiceReverseLookup_IPV6
> --- PASS: TestDNS_ServiceReverseLookupNodeAddress (0.43s)
>     writer.go:29: 2020-02-23T02:46:45.141Z [WARN]  TestDNS_ServiceReverseLookupNodeAddress: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.141Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.142Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.151Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0de52b69-648b-c96b-f746-092f1666f41c Address:127.0.0.1:16732}]"
>     writer.go:29: 2020-02-23T02:46:45.151Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.raft: entering follower state: follower="Node at 127.0.0.1:16732 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.152Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.serf.wan: serf: EventMemberJoin: Node-0de52b69-648b-c96b-f746-092f1666f41c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.serf.lan: serf: EventMemberJoin: Node-0de52b69-648b-c96b-f746-092f1666f41c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server: Handled event for server in area: event=member-join server=Node-0de52b69-648b-c96b-f746-092f1666f41c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server: Adding LAN server: server="Node-0de52b69-648b-c96b-f746-092f1666f41c (Addr: tcp/127.0.0.1:16732) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Started DNS server: address=127.0.0.1:16727 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Started DNS server: address=127.0.0.1:16727 network=udp
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Started HTTP server: address=127.0.0.1:16728 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.153Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.216Z [WARN]  TestDNS_ServiceReverseLookupNodeAddress.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.216Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.raft: entering candidate state: node="Node at 127.0.0.1:16732 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.219Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.219Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.server.raft: vote granted: from=0de52b69-648b-c96b-f746-092f1666f41c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.219Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.219Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.raft: entering leader state: leader="Node at 127.0.0.1:16732 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.219Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.219Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server: New leader elected: payload=Node-0de52b69-648b-c96b-f746-092f1666f41c
>     writer.go:29: 2020-02-23T02:46:45.226Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.235Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.235Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.235Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.server: Skipping self join check for node since the cluster is too small: node=Node-0de52b69-648b-c96b-f746-092f1666f41c
>     writer.go:29: 2020-02-23T02:46:45.235Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server: member joined, marking health alive: member=Node-0de52b69-648b-c96b-f746-092f1666f41c
>     writer.go:29: 2020-02-23T02:46:45.349Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:45.352Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Synced node info
>     writer.go:29: 2020-02-23T02:46:45.353Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress: Node info in sync
>     writer.go:29: 2020-02-23T02:46:45.561Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.dns: request served from client: question="{1.0.0.127.in-addr.arpa. 255 1}" latency=62.145µs client=127.0.0.1:49593 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.561Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.561Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.561Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.561Z [WARN]  TestDNS_ServiceReverseLookupNodeAddress.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.561Z [DEBUG] TestDNS_ServiceReverseLookupNodeAddress.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.563Z [WARN]  TestDNS_ServiceReverseLookupNodeAddress.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.565Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.565Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: consul server down
>     writer.go:29: 2020-02-23T02:46:45.565Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.565Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Stopping server: protocol=DNS address=127.0.0.1:16727 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.566Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Stopping server: protocol=DNS address=127.0.0.1:16727 network=udp
>     writer.go:29: 2020-02-23T02:46:45.566Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Stopping server: protocol=HTTP address=127.0.0.1:16728 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.566Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.566Z [INFO]  TestDNS_ServiceReverseLookupNodeAddress: Endpoints down
> === CONT  TestDNS_ReverseLookup_IPV6
> --- PASS: TestDNS_ReverseLookup_IPV6 (0.43s)
>     writer.go:29: 2020-02-23T02:46:45.573Z [WARN]  TestDNS_ReverseLookup_IPV6: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.574Z [DEBUG] TestDNS_ReverseLookup_IPV6.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.574Z [DEBUG] TestDNS_ReverseLookup_IPV6.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.585Z [INFO]  TestDNS_ReverseLookup_IPV6.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:be53274b-cef8-ec7b-8032-126f6f2a3c34 Address:127.0.0.1:16762}]"
>     writer.go:29: 2020-02-23T02:46:45.586Z [INFO]  TestDNS_ReverseLookup_IPV6.server.raft: entering follower state: follower="Node at 127.0.0.1:16762 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.586Z [INFO]  TestDNS_ReverseLookup_IPV6.server.serf.wan: serf: EventMemberJoin: Node-be53274b-cef8-ec7b-8032-126f6f2a3c34.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6.server.serf.lan: serf: EventMemberJoin: Node-be53274b-cef8-ec7b-8032-126f6f2a3c34 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6.server: Handled event for server in area: event=member-join server=Node-be53274b-cef8-ec7b-8032-126f6f2a3c34.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6: Started DNS server: address=127.0.0.1:16757 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6.server: Adding LAN server: server="Node-be53274b-cef8-ec7b-8032-126f6f2a3c34 (Addr: tcp/127.0.0.1:16762) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6: Started DNS server: address=127.0.0.1:16757 network=udp
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6: Started HTTP server: address=127.0.0.1:16758 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.587Z [INFO]  TestDNS_ReverseLookup_IPV6: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.640Z [WARN]  TestDNS_ReverseLookup_IPV6.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.641Z [INFO]  TestDNS_ReverseLookup_IPV6.server.raft: entering candidate state: node="Node at 127.0.0.1:16762 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.694Z [DEBUG] TestDNS_ReverseLookup_IPV6.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.694Z [DEBUG] TestDNS_ReverseLookup_IPV6.server.raft: vote granted: from=be53274b-cef8-ec7b-8032-126f6f2a3c34 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.694Z [INFO]  TestDNS_ReverseLookup_IPV6.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.694Z [INFO]  TestDNS_ReverseLookup_IPV6.server.raft: entering leader state: leader="Node at 127.0.0.1:16762 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.694Z [INFO]  TestDNS_ReverseLookup_IPV6.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.694Z [INFO]  TestDNS_ReverseLookup_IPV6.server: New leader elected: payload=Node-be53274b-cef8-ec7b-8032-126f6f2a3c34
>     writer.go:29: 2020-02-23T02:46:45.701Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.717Z [INFO]  TestDNS_ReverseLookup_IPV6.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.718Z [INFO]  TestDNS_ReverseLookup_IPV6.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.718Z [DEBUG] TestDNS_ReverseLookup_IPV6.server: Skipping self join check for node since the cluster is too small: node=Node-be53274b-cef8-ec7b-8032-126f6f2a3c34
>     writer.go:29: 2020-02-23T02:46:45.718Z [INFO]  TestDNS_ReverseLookup_IPV6.server: member joined, marking health alive: member=Node-be53274b-cef8-ec7b-8032-126f6f2a3c34
>     writer.go:29: 2020-02-23T02:46:45.856Z [DEBUG] TestDNS_ReverseLookup_IPV6: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:45.859Z [INFO]  TestDNS_ReverseLookup_IPV6: Synced node info
>     writer.go:29: 2020-02-23T02:46:45.968Z [DEBUG] TestDNS_ReverseLookup_IPV6.dns: request served from client: question="{2.4.2.4.2.4.2.4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa. 255 1}" latency=103.545µs client=127.0.0.1:58452 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.968Z [INFO]  TestDNS_ReverseLookup_IPV6: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.968Z [INFO]  TestDNS_ReverseLookup_IPV6.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.968Z [DEBUG] TestDNS_ReverseLookup_IPV6.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.968Z [WARN]  TestDNS_ReverseLookup_IPV6.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.968Z [DEBUG] TestDNS_ReverseLookup_IPV6.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.991Z [WARN]  TestDNS_ReverseLookup_IPV6.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.000Z [INFO]  TestDNS_ReverseLookup_IPV6.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.000Z [INFO]  TestDNS_ReverseLookup_IPV6: consul server down
>     writer.go:29: 2020-02-23T02:46:46.000Z [INFO]  TestDNS_ReverseLookup_IPV6: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.000Z [INFO]  TestDNS_ReverseLookup_IPV6: Stopping server: protocol=DNS address=127.0.0.1:16757 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.001Z [INFO]  TestDNS_ReverseLookup_IPV6: Stopping server: protocol=DNS address=127.0.0.1:16757 network=udp
>     writer.go:29: 2020-02-23T02:46:46.001Z [INFO]  TestDNS_ReverseLookup_IPV6: Stopping server: protocol=HTTP address=127.0.0.1:16758 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.001Z [INFO]  TestDNS_ReverseLookup_IPV6: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.001Z [INFO]  TestDNS_ReverseLookup_IPV6: Endpoints down
> === CONT  TestDNS_ReverseLookup_CustomDomain
> --- PASS: TestDNS_ServiceReverseLookup_IPV6 (0.46s)
>     writer.go:29: 2020-02-23T02:46:45.557Z [WARN]  TestDNS_ServiceReverseLookup_IPV6: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.557Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.557Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.578Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d004771a-29f6-84d1-8472-27f4f14ac0a0 Address:127.0.0.1:16756}]"
>     writer.go:29: 2020-02-23T02:46:45.578Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.raft: entering follower state: follower="Node at 127.0.0.1:16756 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.579Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.serf.wan: serf: EventMemberJoin: Node-d004771a-29f6-84d1-8472-27f4f14ac0a0.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.580Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.serf.lan: serf: EventMemberJoin: Node-d004771a-29f6-84d1-8472-27f4f14ac0a0 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.580Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server: Handled event for server in area: event=member-join server=Node-d004771a-29f6-84d1-8472-27f4f14ac0a0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.580Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server: Adding LAN server: server="Node-d004771a-29f6-84d1-8472-27f4f14ac0a0 (Addr: tcp/127.0.0.1:16756) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.580Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Started DNS server: address=127.0.0.1:16751 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.580Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Started DNS server: address=127.0.0.1:16751 network=udp
>     writer.go:29: 2020-02-23T02:46:45.581Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Started HTTP server: address=127.0.0.1:16752 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.581Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.633Z [WARN]  TestDNS_ServiceReverseLookup_IPV6.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.633Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.raft: entering candidate state: node="Node at 127.0.0.1:16756 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.692Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.692Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.server.raft: vote granted: from=d004771a-29f6-84d1-8472-27f4f14ac0a0 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.692Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.692Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.raft: entering leader state: leader="Node at 127.0.0.1:16756 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.692Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.693Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server: New leader elected: payload=Node-d004771a-29f6-84d1-8472-27f4f14ac0a0
>     writer.go:29: 2020-02-23T02:46:45.702Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.718Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.718Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.718Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.server: Skipping self join check for node since the cluster is too small: node=Node-d004771a-29f6-84d1-8472-27f4f14ac0a0
>     writer.go:29: 2020-02-23T02:46:45.718Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server: member joined, marking health alive: member=Node-d004771a-29f6-84d1-8472-27f4f14ac0a0
>     writer.go:29: 2020-02-23T02:46:45.943Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:45.976Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.dns: request served from client: question="{9.2.3.8.2.4.0.0.0.0.f.f.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 255 1}" latency=105.866µs client=127.0.0.1:55402 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.976Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.976Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.976Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.976Z [WARN]  TestDNS_ServiceReverseLookup_IPV6.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.976Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.003Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.003Z [DEBUG] TestDNS_ServiceReverseLookup_IPV6: Node info in sync
>     writer.go:29: 2020-02-23T02:46:46.003Z [WARN]  TestDNS_ServiceReverseLookup_IPV6.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: consul server down
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Stopping server: protocol=DNS address=127.0.0.1:16751 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Stopping server: protocol=DNS address=127.0.0.1:16751 network=udp
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Stopping server: protocol=HTTP address=127.0.0.1:16752 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.006Z [INFO]  TestDNS_ServiceReverseLookup_IPV6: Endpoints down
> === CONT  TestDNS_ReverseLookup
> --- PASS: TestDNS_ReverseLookup_CustomDomain (0.20s)
>     writer.go:29: 2020-02-23T02:46:46.017Z [WARN]  TestDNS_ReverseLookup_CustomDomain: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.023Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.023Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.057Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d73c080c-84a5-c695-55bb-84be6a80d79b Address:127.0.0.1:16774}]"
>     writer.go:29: 2020-02-23T02:46:46.057Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.serf.wan: serf: EventMemberJoin: Node-d73c080c-84a5-c695-55bb-84be6a80d79b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.058Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.serf.lan: serf: EventMemberJoin: Node-d73c080c-84a5-c695-55bb-84be6a80d79b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.058Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Started DNS server: address=127.0.0.1:16769 network=udp
>     writer.go:29: 2020-02-23T02:46:46.058Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.raft: entering follower state: follower="Node at 127.0.0.1:16774 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.058Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server: Adding LAN server: server="Node-d73c080c-84a5-c695-55bb-84be6a80d79b (Addr: tcp/127.0.0.1:16774) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.058Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server: Handled event for server in area: event=member-join server=Node-d73c080c-84a5-c695-55bb-84be6a80d79b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.058Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Started DNS server: address=127.0.0.1:16769 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.059Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Started HTTP server: address=127.0.0.1:16770 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.059Z [INFO]  TestDNS_ReverseLookup_CustomDomain: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.121Z [WARN]  TestDNS_ReverseLookup_CustomDomain.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.121Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.raft: entering candidate state: node="Node at 127.0.0.1:16774 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.124Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.124Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.server.raft: vote granted: from=d73c080c-84a5-c695-55bb-84be6a80d79b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.124Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.124Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.raft: entering leader state: leader="Node at 127.0.0.1:16774 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.124Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.124Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server: New leader elected: payload=Node-d73c080c-84a5-c695-55bb-84be6a80d79b
>     writer.go:29: 2020-02-23T02:46:46.131Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.141Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.141Z [INFO]  TestDNS_ReverseLookup_CustomDomain.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.141Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.server: Skipping self join check for node since the cluster is too small: node=Node-d73c080c-84a5-c695-55bb-84be6a80d79b
>     writer.go:29: 2020-02-23T02:46:46.141Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server: member joined, marking health alive: member=Node-d73c080c-84a5-c695-55bb-84be6a80d79b
>     writer.go:29: 2020-02-23T02:46:46.195Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.dns: request served from client: question="{2.0.0.127.in-addr.arpa. 255 1}" latency=62.448µs client=127.0.0.1:48571 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.195Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.195Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.195Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.195Z [WARN]  TestDNS_ReverseLookup_CustomDomain.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.195Z [ERROR] TestDNS_ReverseLookup_CustomDomain.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:46.195Z [DEBUG] TestDNS_ReverseLookup_CustomDomain.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.197Z [WARN]  TestDNS_ReverseLookup_CustomDomain.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: consul server down
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Stopping server: protocol=DNS address=127.0.0.1:16769 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Stopping server: protocol=DNS address=127.0.0.1:16769 network=udp
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Stopping server: protocol=HTTP address=127.0.0.1:16770 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.199Z [INFO]  TestDNS_ReverseLookup_CustomDomain: Endpoints down
> === CONT  TestDNS_EDNS0
> --- PASS: TestDNS_RecursorTimeout (3.47s)
>     writer.go:29: 2020-02-23T02:46:42.849Z [WARN]  TestDNS_RecursorTimeout: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:42.850Z [DEBUG] TestDNS_RecursorTimeout.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:42.852Z [DEBUG] TestDNS_RecursorTimeout.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:42.869Z [INFO]  TestDNS_RecursorTimeout.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:496af5d8-a32e-e0f5-5caa-b9fe2373d0dd Address:127.0.0.1:16582}]"
>     writer.go:29: 2020-02-23T02:46:42.869Z [INFO]  TestDNS_RecursorTimeout.server.raft: entering follower state: follower="Node at 127.0.0.1:16582 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:42.870Z [INFO]  TestDNS_RecursorTimeout.server.serf.wan: serf: EventMemberJoin: Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.871Z [INFO]  TestDNS_RecursorTimeout.server.serf.lan: serf: EventMemberJoin: Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:42.872Z [DEBUG] TestDNS_RecursorTimeout.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:42.872Z [INFO]  TestDNS_RecursorTimeout: Started DNS server: address=127.0.0.1:16577 network=udp
>     writer.go:29: 2020-02-23T02:46:42.872Z [INFO]  TestDNS_RecursorTimeout.server: Adding LAN server: server="Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd (Addr: tcp/127.0.0.1:16582) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:42.872Z [INFO]  TestDNS_RecursorTimeout.server: Handled event for server in area: event=member-join server=Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:42.872Z [DEBUG] TestDNS_RecursorTimeout.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:42.872Z [INFO]  TestDNS_RecursorTimeout: Started DNS server: address=127.0.0.1:16577 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.873Z [INFO]  TestDNS_RecursorTimeout: Started HTTP server: address=127.0.0.1:16578 network=tcp
>     writer.go:29: 2020-02-23T02:46:42.873Z [INFO]  TestDNS_RecursorTimeout: started state syncer
>     writer.go:29: 2020-02-23T02:46:42.911Z [WARN]  TestDNS_RecursorTimeout.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:42.911Z [INFO]  TestDNS_RecursorTimeout.server.raft: entering candidate state: node="Node at 127.0.0.1:16582 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:42.967Z [DEBUG] TestDNS_RecursorTimeout.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:42.967Z [DEBUG] TestDNS_RecursorTimeout.server.raft: vote granted: from=496af5d8-a32e-e0f5-5caa-b9fe2373d0dd term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_RecursorTimeout.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_RecursorTimeout.server.raft: entering leader state: leader="Node at 127.0.0.1:16582 [Leader]"
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_RecursorTimeout.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:42.967Z [INFO]  TestDNS_RecursorTimeout.server: New leader elected: payload=Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd
>     writer.go:29: 2020-02-23T02:46:42.978Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:43.017Z [INFO]  TestDNS_RecursorTimeout.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:43.017Z [INFO]  TestDNS_RecursorTimeout.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:43.017Z [DEBUG] TestDNS_RecursorTimeout.server: Skipping self join check for node since the cluster is too small: node=Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd
>     writer.go:29: 2020-02-23T02:46:43.017Z [INFO]  TestDNS_RecursorTimeout.server: member joined, marking health alive: member=Node-496af5d8-a32e-e0f5-5caa-b9fe2373d0dd
>     writer.go:29: 2020-02-23T02:46:43.300Z [DEBUG] TestDNS_RecursorTimeout: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:43.332Z [INFO]  TestDNS_RecursorTimeout: Synced node info
>     writer.go:29: 2020-02-23T02:46:44.227Z [DEBUG] TestDNS_RecursorTimeout: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:44.227Z [DEBUG] TestDNS_RecursorTimeout: Node info in sync
>     writer.go:29: 2020-02-23T02:46:44.227Z [DEBUG] TestDNS_RecursorTimeout: Node info in sync
>     writer.go:29: 2020-02-23T02:46:44.972Z [DEBUG] TestDNS_RecursorTimeout.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.292Z [ERROR] TestDNS_RecursorTimeout.dns: recurse failed: error="read udp 127.0.0.1:38881->127.0.0.1:42602: i/o timeout"
>     writer.go:29: 2020-02-23T02:46:46.292Z [ERROR] TestDNS_RecursorTimeout.dns: all resolvers failed for question from client: question="{apple.com. 255 1}" client=127.0.0.1:48092 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.292Z [DEBUG] TestDNS_RecursorTimeout.dns: request served from client: question="{apple.com. 255 1}" network=udp latency=3.000251014s client=127.0.0.1:48092 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.292Z [INFO]  TestDNS_RecursorTimeout: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.292Z [INFO]  TestDNS_RecursorTimeout.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.292Z [DEBUG] TestDNS_RecursorTimeout.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.292Z [WARN]  TestDNS_RecursorTimeout.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.292Z [DEBUG] TestDNS_RecursorTimeout.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.294Z [WARN]  TestDNS_RecursorTimeout.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: consul server down
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: Stopping server: protocol=DNS address=127.0.0.1:16577 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: Stopping server: protocol=DNS address=127.0.0.1:16577 network=udp
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: Stopping server: protocol=HTTP address=127.0.0.1:16578 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_RecursorTimeout: Endpoints down
> === CONT  TestDNS_NodeLookup_CNAME
> --- PASS: TestDNS_ReverseLookup (0.30s)
>     writer.go:29: 2020-02-23T02:46:46.022Z [WARN]  TestDNS_ReverseLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.022Z [DEBUG] TestDNS_ReverseLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.022Z [DEBUG] TestDNS_ReverseLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.059Z [INFO]  TestDNS_ReverseLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b32e94eb-c715-9cce-a752-f5fa3ee3f6da Address:127.0.0.1:16780}]"
>     writer.go:29: 2020-02-23T02:46:46.059Z [INFO]  TestDNS_ReverseLookup.server.serf.wan: serf: EventMemberJoin: Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_ReverseLookup.server.serf.lan: serf: EventMemberJoin: Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_ReverseLookup: Started DNS server: address=127.0.0.1:16775 network=udp
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_ReverseLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16780 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_ReverseLookup.server: Adding LAN server: server="Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da (Addr: tcp/127.0.0.1:16780) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_ReverseLookup.server: Handled event for server in area: event=member-join server=Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_ReverseLookup: Started DNS server: address=127.0.0.1:16775 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.061Z [INFO]  TestDNS_ReverseLookup: Started HTTP server: address=127.0.0.1:16776 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.061Z [INFO]  TestDNS_ReverseLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.096Z [WARN]  TestDNS_ReverseLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.096Z [INFO]  TestDNS_ReverseLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16780 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.100Z [DEBUG] TestDNS_ReverseLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.100Z [DEBUG] TestDNS_ReverseLookup.server.raft: vote granted: from=b32e94eb-c715-9cce-a752-f5fa3ee3f6da term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.100Z [INFO]  TestDNS_ReverseLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.100Z [INFO]  TestDNS_ReverseLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16780 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.100Z [INFO]  TestDNS_ReverseLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.100Z [INFO]  TestDNS_ReverseLookup.server: New leader elected: payload=Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da
>     writer.go:29: 2020-02-23T02:46:46.108Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.115Z [INFO]  TestDNS_ReverseLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.115Z [INFO]  TestDNS_ReverseLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.115Z [DEBUG] TestDNS_ReverseLookup.server: Skipping self join check for node since the cluster is too small: node=Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da
>     writer.go:29: 2020-02-23T02:46:46.115Z [INFO]  TestDNS_ReverseLookup.server: member joined, marking health alive: member=Node-b32e94eb-c715-9cce-a752-f5fa3ee3f6da
>     writer.go:29: 2020-02-23T02:46:46.300Z [DEBUG] TestDNS_ReverseLookup.dns: request served from client: question="{2.0.0.127.in-addr.arpa. 255 1}" latency=69.898µs client=127.0.0.1:40224 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.301Z [INFO]  TestDNS_ReverseLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.301Z [INFO]  TestDNS_ReverseLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.301Z [DEBUG] TestDNS_ReverseLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.301Z [WARN]  TestDNS_ReverseLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.301Z [ERROR] TestDNS_ReverseLookup.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:46.301Z [DEBUG] TestDNS_ReverseLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.303Z [WARN]  TestDNS_ReverseLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: Stopping server: protocol=DNS address=127.0.0.1:16775 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: Stopping server: protocol=DNS address=127.0.0.1:16775 network=udp
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: Stopping server: protocol=HTTP address=127.0.0.1:16776 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.305Z [INFO]  TestDNS_ReverseLookup: Endpoints down
> === CONT  TestDNSCycleRecursorCheck
> --- PASS: TestDNS_SOA_Settings (1.32s)
>     writer.go:29: 2020-02-23T02:46:45.202Z [WARN]  TestDNS_SOA_Settings: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.202Z [DEBUG] TestDNS_SOA_Settings.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.202Z [DEBUG] TestDNS_SOA_Settings.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.218Z [INFO]  TestDNS_SOA_Settings.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e1f1e791-30d0-4157-9fcc-462a73505846 Address:127.0.0.1:16738}]"
>     writer.go:29: 2020-02-23T02:46:45.218Z [INFO]  TestDNS_SOA_Settings.server.raft: entering follower state: follower="Node at 127.0.0.1:16738 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.219Z [INFO]  TestDNS_SOA_Settings.server.serf.wan: serf: EventMemberJoin: Node-e1f1e791-30d0-4157-9fcc-462a73505846.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.220Z [INFO]  TestDNS_SOA_Settings.server.serf.lan: serf: EventMemberJoin: Node-e1f1e791-30d0-4157-9fcc-462a73505846 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.220Z [INFO]  TestDNS_SOA_Settings.server: Handled event for server in area: event=member-join server=Node-e1f1e791-30d0-4157-9fcc-462a73505846.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.220Z [INFO]  TestDNS_SOA_Settings.server: Adding LAN server: server="Node-e1f1e791-30d0-4157-9fcc-462a73505846 (Addr: tcp/127.0.0.1:16738) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.221Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16733 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.221Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16733 network=udp
>     writer.go:29: 2020-02-23T02:46:45.221Z [INFO]  TestDNS_SOA_Settings: Started HTTP server: address=127.0.0.1:16734 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.221Z [INFO]  TestDNS_SOA_Settings: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.267Z [WARN]  TestDNS_SOA_Settings.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.267Z [INFO]  TestDNS_SOA_Settings.server.raft: entering candidate state: node="Node at 127.0.0.1:16738 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.272Z [DEBUG] TestDNS_SOA_Settings.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.272Z [DEBUG] TestDNS_SOA_Settings.server.raft: vote granted: from=e1f1e791-30d0-4157-9fcc-462a73505846 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.272Z [INFO]  TestDNS_SOA_Settings.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.272Z [INFO]  TestDNS_SOA_Settings.server.raft: entering leader state: leader="Node at 127.0.0.1:16738 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.272Z [INFO]  TestDNS_SOA_Settings.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.272Z [INFO]  TestDNS_SOA_Settings.server: New leader elected: payload=Node-e1f1e791-30d0-4157-9fcc-462a73505846
>     writer.go:29: 2020-02-23T02:46:45.285Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.293Z [INFO]  TestDNS_SOA_Settings.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.293Z [INFO]  TestDNS_SOA_Settings.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.293Z [DEBUG] TestDNS_SOA_Settings.server: Skipping self join check for node since the cluster is too small: node=Node-e1f1e791-30d0-4157-9fcc-462a73505846
>     writer.go:29: 2020-02-23T02:46:45.293Z [INFO]  TestDNS_SOA_Settings.server: member joined, marking health alive: member=Node-e1f1e791-30d0-4157-9fcc-462a73505846
>     writer.go:29: 2020-02-23T02:46:45.474Z [DEBUG] TestDNS_SOA_Settings.dns: request served from client: name=nofoo.node.dc1.consul. type=ANY class=IN latency=109.409µs client=127.0.0.1:39243 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.474Z [INFO]  TestDNS_SOA_Settings: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.474Z [INFO]  TestDNS_SOA_Settings.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.474Z [DEBUG] TestDNS_SOA_Settings.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.475Z [WARN]  TestDNS_SOA_Settings.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.475Z [ERROR] TestDNS_SOA_Settings.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:45.475Z [DEBUG] TestDNS_SOA_Settings.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.497Z [WARN]  TestDNS_SOA_Settings.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: consul server down
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16733 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16733 network=udp
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=HTTP address=127.0.0.1:16734 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.499Z [INFO]  TestDNS_SOA_Settings: Endpoints down
>     writer.go:29: 2020-02-23T02:46:45.543Z [WARN]  TestDNS_SOA_Settings: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.543Z [DEBUG] TestDNS_SOA_Settings.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.544Z [DEBUG] TestDNS_SOA_Settings.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.553Z [INFO]  TestDNS_SOA_Settings.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d4920065-b895-e536-d635-7ef863f59a2d Address:127.0.0.1:16750}]"
>     writer.go:29: 2020-02-23T02:46:45.553Z [INFO]  TestDNS_SOA_Settings.server.raft: entering follower state: follower="Node at 127.0.0.1:16750 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.553Z [INFO]  TestDNS_SOA_Settings.server.serf.wan: serf: EventMemberJoin: Node-d4920065-b895-e536-d635-7ef863f59a2d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings.server.serf.lan: serf: EventMemberJoin: Node-d4920065-b895-e536-d635-7ef863f59a2d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings.server: Handled event for server in area: event=member-join server=Node-d4920065-b895-e536-d635-7ef863f59a2d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings.server: Adding LAN server: server="Node-d4920065-b895-e536-d635-7ef863f59a2d (Addr: tcp/127.0.0.1:16750) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16745 network=udp
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16745 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings: Started HTTP server: address=127.0.0.1:16746 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.554Z [INFO]  TestDNS_SOA_Settings: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.589Z [WARN]  TestDNS_SOA_Settings.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.589Z [INFO]  TestDNS_SOA_Settings.server.raft: entering candidate state: node="Node at 127.0.0.1:16750 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.592Z [DEBUG] TestDNS_SOA_Settings.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.592Z [DEBUG] TestDNS_SOA_Settings.server.raft: vote granted: from=d4920065-b895-e536-d635-7ef863f59a2d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.592Z [INFO]  TestDNS_SOA_Settings.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.592Z [INFO]  TestDNS_SOA_Settings.server.raft: entering leader state: leader="Node at 127.0.0.1:16750 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.593Z [INFO]  TestDNS_SOA_Settings.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.593Z [INFO]  TestDNS_SOA_Settings.server: New leader elected: payload=Node-d4920065-b895-e536-d635-7ef863f59a2d
>     writer.go:29: 2020-02-23T02:46:45.601Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:45.609Z [INFO]  TestDNS_SOA_Settings.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:45.609Z [INFO]  TestDNS_SOA_Settings.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.609Z [DEBUG] TestDNS_SOA_Settings.server: Skipping self join check for node since the cluster is too small: node=Node-d4920065-b895-e536-d635-7ef863f59a2d
>     writer.go:29: 2020-02-23T02:46:45.609Z [INFO]  TestDNS_SOA_Settings.server: member joined, marking health alive: member=Node-d4920065-b895-e536-d635-7ef863f59a2d
>     writer.go:29: 2020-02-23T02:46:45.652Z [INFO]  TestDNS_SOA_Settings: Synced node info
>     writer.go:29: 2020-02-23T02:46:45.652Z [DEBUG] TestDNS_SOA_Settings: Node info in sync
>     writer.go:29: 2020-02-23T02:46:45.844Z [DEBUG] TestDNS_SOA_Settings.dns: request served from client: name=nofoo.node.dc1.consul. type=ANY class=IN latency=86.001µs client=127.0.0.1:46129 client_network=udp
>     writer.go:29: 2020-02-23T02:46:45.844Z [INFO]  TestDNS_SOA_Settings: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:45.844Z [INFO]  TestDNS_SOA_Settings.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:45.844Z [DEBUG] TestDNS_SOA_Settings.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.844Z [WARN]  TestDNS_SOA_Settings.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.844Z [DEBUG] TestDNS_SOA_Settings.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:45.846Z [WARN]  TestDNS_SOA_Settings.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: consul server down
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: shutdown complete
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16745 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16745 network=udp
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=HTTP address=127.0.0.1:16746 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:45.848Z [INFO]  TestDNS_SOA_Settings: Endpoints down
>     writer.go:29: 2020-02-23T02:46:45.857Z [WARN]  TestDNS_SOA_Settings: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:45.857Z [DEBUG] TestDNS_SOA_Settings.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:45.857Z [DEBUG] TestDNS_SOA_Settings.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:45.871Z [INFO]  TestDNS_SOA_Settings.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:88129d7e-0765-ab7f-d14b-096bd364d02f Address:127.0.0.1:16768}]"
>     writer.go:29: 2020-02-23T02:46:45.871Z [INFO]  TestDNS_SOA_Settings.server.serf.wan: serf: EventMemberJoin: Node-88129d7e-0765-ab7f-d14b-096bd364d02f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.872Z [INFO]  TestDNS_SOA_Settings.server.serf.lan: serf: EventMemberJoin: Node-88129d7e-0765-ab7f-d14b-096bd364d02f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:45.872Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16763 network=udp
>     writer.go:29: 2020-02-23T02:46:45.872Z [INFO]  TestDNS_SOA_Settings.server.raft: entering follower state: follower="Node at 127.0.0.1:16768 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:45.872Z [INFO]  TestDNS_SOA_Settings.server: Adding LAN server: server="Node-88129d7e-0765-ab7f-d14b-096bd364d02f (Addr: tcp/127.0.0.1:16768) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:45.872Z [INFO]  TestDNS_SOA_Settings.server: Handled event for server in area: event=member-join server=Node-88129d7e-0765-ab7f-d14b-096bd364d02f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:45.872Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16763 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.873Z [INFO]  TestDNS_SOA_Settings: Started HTTP server: address=127.0.0.1:16764 network=tcp
>     writer.go:29: 2020-02-23T02:46:45.873Z [INFO]  TestDNS_SOA_Settings: started state syncer
>     writer.go:29: 2020-02-23T02:46:45.933Z [WARN]  TestDNS_SOA_Settings.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:45.933Z [INFO]  TestDNS_SOA_Settings.server.raft: entering candidate state: node="Node at 127.0.0.1:16768 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:45.985Z [DEBUG] TestDNS_SOA_Settings.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:45.985Z [DEBUG] TestDNS_SOA_Settings.server.raft: vote granted: from=88129d7e-0765-ab7f-d14b-096bd364d02f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:45.985Z [INFO]  TestDNS_SOA_Settings.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:45.985Z [INFO]  TestDNS_SOA_Settings.server.raft: entering leader state: leader="Node at 127.0.0.1:16768 [Leader]"
>     writer.go:29: 2020-02-23T02:46:45.985Z [INFO]  TestDNS_SOA_Settings.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:45.986Z [INFO]  TestDNS_SOA_Settings.server: New leader elected: payload=Node-88129d7e-0765-ab7f-d14b-096bd364d02f
>     writer.go:29: 2020-02-23T02:46:46.051Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_SOA_Settings.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_SOA_Settings.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.060Z [DEBUG] TestDNS_SOA_Settings.server: Skipping self join check for node since the cluster is too small: node=Node-88129d7e-0765-ab7f-d14b-096bd364d02f
>     writer.go:29: 2020-02-23T02:46:46.060Z [INFO]  TestDNS_SOA_Settings.server: member joined, marking health alive: member=Node-88129d7e-0765-ab7f-d14b-096bd364d02f
>     writer.go:29: 2020-02-23T02:46:46.100Z [DEBUG] TestDNS_SOA_Settings: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:46.102Z [INFO]  TestDNS_SOA_Settings: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.182Z [DEBUG] TestDNS_SOA_Settings.dns: request served from client: name=nofoo.node.dc1.consul. type=ANY class=IN latency=80.475µs client=127.0.0.1:57978 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.182Z [INFO]  TestDNS_SOA_Settings: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.182Z [INFO]  TestDNS_SOA_Settings.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.182Z [DEBUG] TestDNS_SOA_Settings.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.182Z [WARN]  TestDNS_SOA_Settings.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.182Z [DEBUG] TestDNS_SOA_Settings.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.184Z [WARN]  TestDNS_SOA_Settings.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: consul server down
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16763 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16763 network=udp
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=HTTP address=127.0.0.1:16764 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.187Z [INFO]  TestDNS_SOA_Settings: Endpoints down
>     writer.go:29: 2020-02-23T02:46:46.196Z [WARN]  TestDNS_SOA_Settings: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.196Z [DEBUG] TestDNS_SOA_Settings.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.196Z [DEBUG] TestDNS_SOA_Settings.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.222Z [INFO]  TestDNS_SOA_Settings.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:706adc68-b372-0b9d-a43c-8cb1c1238e4e Address:127.0.0.1:16786}]"
>     writer.go:29: 2020-02-23T02:46:46.222Z [INFO]  TestDNS_SOA_Settings.server.raft: entering follower state: follower="Node at 127.0.0.1:16786 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.223Z [INFO]  TestDNS_SOA_Settings.server.serf.wan: serf: EventMemberJoin: Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.223Z [INFO]  TestDNS_SOA_Settings.server.serf.lan: serf: EventMemberJoin: Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.223Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16781 network=udp
>     writer.go:29: 2020-02-23T02:46:46.223Z [INFO]  TestDNS_SOA_Settings.server: Adding LAN server: server="Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e (Addr: tcp/127.0.0.1:16786) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.223Z [INFO]  TestDNS_SOA_Settings.server: Handled event for server in area: event=member-join server=Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.223Z [INFO]  TestDNS_SOA_Settings: Started DNS server: address=127.0.0.1:16781 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.224Z [INFO]  TestDNS_SOA_Settings: Started HTTP server: address=127.0.0.1:16782 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.224Z [INFO]  TestDNS_SOA_Settings: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.273Z [WARN]  TestDNS_SOA_Settings.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.273Z [INFO]  TestDNS_SOA_Settings.server.raft: entering candidate state: node="Node at 127.0.0.1:16786 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.276Z [DEBUG] TestDNS_SOA_Settings.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.276Z [DEBUG] TestDNS_SOA_Settings.server.raft: vote granted: from=706adc68-b372-0b9d-a43c-8cb1c1238e4e term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.276Z [INFO]  TestDNS_SOA_Settings.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.276Z [INFO]  TestDNS_SOA_Settings.server.raft: entering leader state: leader="Node at 127.0.0.1:16786 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.276Z [INFO]  TestDNS_SOA_Settings.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.276Z [INFO]  TestDNS_SOA_Settings.server: New leader elected: payload=Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e
>     writer.go:29: 2020-02-23T02:46:46.284Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.292Z [INFO]  TestDNS_SOA_Settings.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.292Z [INFO]  TestDNS_SOA_Settings.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.292Z [DEBUG] TestDNS_SOA_Settings.server: Skipping self join check for node since the cluster is too small: node=Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e
>     writer.go:29: 2020-02-23T02:46:46.292Z [INFO]  TestDNS_SOA_Settings.server: member joined, marking health alive: member=Node-706adc68-b372-0b9d-a43c-8cb1c1238e4e
>     writer.go:29: 2020-02-23T02:46:46.322Z [DEBUG] TestDNS_SOA_Settings: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:46.325Z [INFO]  TestDNS_SOA_Settings: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.325Z [DEBUG] TestDNS_SOA_Settings: Node info in sync
>     writer.go:29: 2020-02-23T02:46:46.489Z [DEBUG] TestDNS_SOA_Settings.dns: request served from client: name=nofoo.node.dc1.consul. type=ANY class=IN latency=82.175µs client=127.0.0.1:40043 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.489Z [INFO]  TestDNS_SOA_Settings: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.489Z [INFO]  TestDNS_SOA_Settings.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.489Z [DEBUG] TestDNS_SOA_Settings.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.489Z [WARN]  TestDNS_SOA_Settings.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.489Z [DEBUG] TestDNS_SOA_Settings.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.491Z [WARN]  TestDNS_SOA_Settings.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.492Z [INFO]  TestDNS_SOA_Settings.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: consul server down
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16781 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=DNS address=127.0.0.1:16781 network=udp
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: Stopping server: protocol=HTTP address=127.0.0.1:16782 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.493Z [INFO]  TestDNS_SOA_Settings: Endpoints down
> === CONT  TestDNS_NodeLookup_PeriodName
> --- PASS: TestDNSCycleRecursorCheck (0.26s)
>     writer.go:29: 2020-02-23T02:46:46.312Z [WARN]  TestDNSCycleRecursorCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.313Z [DEBUG] TestDNSCycleRecursorCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.313Z [DEBUG] TestDNSCycleRecursorCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.328Z [INFO]  TestDNSCycleRecursorCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:166dfcd1-4d42-f7d2-a07e-fcfe46328c3f Address:127.0.0.1:16804}]"
>     writer.go:29: 2020-02-23T02:46:46.328Z [INFO]  TestDNSCycleRecursorCheck.server.serf.wan: serf: EventMemberJoin: Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.329Z [INFO]  TestDNSCycleRecursorCheck.server.serf.lan: serf: EventMemberJoin: Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.329Z [DEBUG] TestDNSCycleRecursorCheck.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:46.329Z [INFO]  TestDNSCycleRecursorCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16804 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.329Z [INFO]  TestDNSCycleRecursorCheck.server: Adding LAN server: server="Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f (Addr: tcp/127.0.0.1:16804) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.329Z [INFO]  TestDNSCycleRecursorCheck.server: Handled event for server in area: event=member-join server=Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.329Z [DEBUG] TestDNSCycleRecursorCheck.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:46.329Z [INFO]  TestDNSCycleRecursorCheck: Started DNS server: address=127.0.0.1:16799 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.329Z [INFO]  TestDNSCycleRecursorCheck: Started DNS server: address=127.0.0.1:16799 network=udp
>     writer.go:29: 2020-02-23T02:46:46.330Z [INFO]  TestDNSCycleRecursorCheck: Started HTTP server: address=127.0.0.1:16800 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.330Z [INFO]  TestDNSCycleRecursorCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.391Z [WARN]  TestDNSCycleRecursorCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.391Z [INFO]  TestDNSCycleRecursorCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16804 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.394Z [DEBUG] TestDNSCycleRecursorCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.394Z [DEBUG] TestDNSCycleRecursorCheck.server.raft: vote granted: from=166dfcd1-4d42-f7d2-a07e-fcfe46328c3f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.394Z [INFO]  TestDNSCycleRecursorCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.394Z [INFO]  TestDNSCycleRecursorCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16804 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.395Z [INFO]  TestDNSCycleRecursorCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.395Z [INFO]  TestDNSCycleRecursorCheck.server: New leader elected: payload=Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f
>     writer.go:29: 2020-02-23T02:46:46.401Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.409Z [INFO]  TestDNSCycleRecursorCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.409Z [INFO]  TestDNSCycleRecursorCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.409Z [DEBUG] TestDNSCycleRecursorCheck.server: Skipping self join check for node since the cluster is too small: node=Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f
>     writer.go:29: 2020-02-23T02:46:46.409Z [INFO]  TestDNSCycleRecursorCheck.server: member joined, marking health alive: member=Node-166dfcd1-4d42-f7d2-a07e-fcfe46328c3f
>     writer.go:29: 2020-02-23T02:46:46.552Z [DEBUG] TestDNSCycleRecursorCheck: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:46.557Z [INFO]  TestDNSCycleRecursorCheck: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.557Z [DEBUG] TestDNSCycleRecursorCheck: Node info in sync
>     writer.go:29: 2020-02-23T02:46:46.560Z [DEBUG] TestDNSCycleRecursorCheck.dns: recurse failed for question: question="{google.com. 1 1}" rtt=52.274µs recursor=127.0.0.1:59309 rcode=SERVFAIL
>     writer.go:29: 2020-02-23T02:46:46.560Z [DEBUG] TestDNSCycleRecursorCheck.dns: recurse succeeded for question: question="{google.com. 1 1}" rtt=58.335µs recursor=127.0.0.1:51068
>     writer.go:29: 2020-02-23T02:46:46.560Z [DEBUG] TestDNSCycleRecursorCheck.dns: request served from client: question="{google.com. 1 1}" network=udp latency=305.7µs client=127.0.0.1:38845 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.560Z [INFO]  TestDNSCycleRecursorCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.560Z [INFO]  TestDNSCycleRecursorCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.560Z [DEBUG] TestDNSCycleRecursorCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.560Z [WARN]  TestDNSCycleRecursorCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.560Z [DEBUG] TestDNSCycleRecursorCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.562Z [WARN]  TestDNSCycleRecursorCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: Stopping server: protocol=DNS address=127.0.0.1:16799 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: Stopping server: protocol=DNS address=127.0.0.1:16799 network=udp
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: Stopping server: protocol=HTTP address=127.0.0.1:16800 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.565Z [INFO]  TestDNSCycleRecursorCheck: Endpoints down
> === CONT  TestDNS_NodeLookup_AAAA
> --- PASS: TestDNS_EDNS0 (0.37s)
>     writer.go:29: 2020-02-23T02:46:46.206Z [WARN]  TestDNS_EDNS0: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.206Z [DEBUG] TestDNS_EDNS0.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.207Z [DEBUG] TestDNS_EDNS0.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.225Z [INFO]  TestDNS_EDNS0.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ee176e11-0c0f-c1c2-cbcd-163fbc03681c Address:127.0.0.1:16792}]"
>     writer.go:29: 2020-02-23T02:46:46.226Z [INFO]  TestDNS_EDNS0.server.serf.wan: serf: EventMemberJoin: Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.226Z [INFO]  TestDNS_EDNS0.server.serf.lan: serf: EventMemberJoin: Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.226Z [INFO]  TestDNS_EDNS0: Started DNS server: address=127.0.0.1:16787 network=udp
>     writer.go:29: 2020-02-23T02:46:46.226Z [INFO]  TestDNS_EDNS0.server.raft: entering follower state: follower="Node at 127.0.0.1:16792 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.227Z [INFO]  TestDNS_EDNS0.server: Adding LAN server: server="Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c (Addr: tcp/127.0.0.1:16792) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.227Z [INFO]  TestDNS_EDNS0.server: Handled event for server in area: event=member-join server=Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.227Z [INFO]  TestDNS_EDNS0: Started DNS server: address=127.0.0.1:16787 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.227Z [INFO]  TestDNS_EDNS0: Started HTTP server: address=127.0.0.1:16788 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.227Z [INFO]  TestDNS_EDNS0: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.292Z [WARN]  TestDNS_EDNS0.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.292Z [INFO]  TestDNS_EDNS0.server.raft: entering candidate state: node="Node at 127.0.0.1:16792 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.296Z [DEBUG] TestDNS_EDNS0.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.296Z [DEBUG] TestDNS_EDNS0.server.raft: vote granted: from=ee176e11-0c0f-c1c2-cbcd-163fbc03681c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_EDNS0.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.296Z [INFO]  TestDNS_EDNS0.server.raft: entering leader state: leader="Node at 127.0.0.1:16792 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.297Z [INFO]  TestDNS_EDNS0.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.300Z [INFO]  TestDNS_EDNS0.server: New leader elected: payload=Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c
>     writer.go:29: 2020-02-23T02:46:46.310Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.310Z [INFO]  TestDNS_EDNS0: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.310Z [DEBUG] TestDNS_EDNS0: Node info in sync
>     writer.go:29: 2020-02-23T02:46:46.315Z [INFO]  TestDNS_EDNS0.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.315Z [INFO]  TestDNS_EDNS0.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.315Z [DEBUG] TestDNS_EDNS0.server: Skipping self join check for node since the cluster is too small: node=Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c
>     writer.go:29: 2020-02-23T02:46:46.315Z [INFO]  TestDNS_EDNS0.server: member joined, marking health alive: member=Node-ee176e11-0c0f-c1c2-cbcd-163fbc03681c
>     writer.go:29: 2020-02-23T02:46:46.559Z [DEBUG] TestDNS_EDNS0.dns: request served from client: name=foo.node.dc1.consul. type=ANY class=IN latency=96.691µs client=127.0.0.1:44822 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.559Z [INFO]  TestDNS_EDNS0: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.559Z [INFO]  TestDNS_EDNS0.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.559Z [DEBUG] TestDNS_EDNS0.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.559Z [WARN]  TestDNS_EDNS0.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.559Z [DEBUG] TestDNS_EDNS0.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.561Z [WARN]  TestDNS_EDNS0.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: consul server down
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: Stopping server: protocol=DNS address=127.0.0.1:16787 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: Stopping server: protocol=DNS address=127.0.0.1:16787 network=udp
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: Stopping server: protocol=HTTP address=127.0.0.1:16788 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.567Z [INFO]  TestDNS_EDNS0: Endpoints down
> === CONT  TestDNS_CaseInsensitiveNodeLookup
> --- PASS: TestDNS_NodeLookup_CNAME (0.28s)
>     writer.go:29: 2020-02-23T02:46:46.304Z [WARN]  TestDNS_NodeLookup_CNAME: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.304Z [DEBUG] TestDNS_NodeLookup_CNAME.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.305Z [DEBUG] TestDNS_NodeLookup_CNAME.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.322Z [INFO]  TestDNS_NodeLookup_CNAME.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a05611be-9328-d9f3-a341-eacaa6b70736 Address:127.0.0.1:16798}]"
>     writer.go:29: 2020-02-23T02:46:46.323Z [INFO]  TestDNS_NodeLookup_CNAME.server.raft: entering follower state: follower="Node at 127.0.0.1:16798 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.323Z [INFO]  TestDNS_NodeLookup_CNAME.server.serf.wan: serf: EventMemberJoin: Node-a05611be-9328-d9f3-a341-eacaa6b70736.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.324Z [INFO]  TestDNS_NodeLookup_CNAME.server.serf.lan: serf: EventMemberJoin: Node-a05611be-9328-d9f3-a341-eacaa6b70736 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.325Z [INFO]  TestDNS_NodeLookup_CNAME.server: Handled event for server in area: event=member-join server=Node-a05611be-9328-d9f3-a341-eacaa6b70736.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.325Z [INFO]  TestDNS_NodeLookup_CNAME.server: Adding LAN server: server="Node-a05611be-9328-d9f3-a341-eacaa6b70736 (Addr: tcp/127.0.0.1:16798) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.325Z [DEBUG] TestDNS_NodeLookup_CNAME.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:46.325Z [INFO]  TestDNS_NodeLookup_CNAME: Started DNS server: address=127.0.0.1:16793 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.325Z [DEBUG] TestDNS_NodeLookup_CNAME.dns: recursor enabled
>     writer.go:29: 2020-02-23T02:46:46.325Z [INFO]  TestDNS_NodeLookup_CNAME: Started DNS server: address=127.0.0.1:16793 network=udp
>     writer.go:29: 2020-02-23T02:46:46.326Z [INFO]  TestDNS_NodeLookup_CNAME: Started HTTP server: address=127.0.0.1:16794 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.326Z [INFO]  TestDNS_NodeLookup_CNAME: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.364Z [WARN]  TestDNS_NodeLookup_CNAME.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.364Z [INFO]  TestDNS_NodeLookup_CNAME.server.raft: entering candidate state: node="Node at 127.0.0.1:16798 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.367Z [DEBUG] TestDNS_NodeLookup_CNAME.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.367Z [DEBUG] TestDNS_NodeLookup_CNAME.server.raft: vote granted: from=a05611be-9328-d9f3-a341-eacaa6b70736 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.368Z [INFO]  TestDNS_NodeLookup_CNAME.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.368Z [INFO]  TestDNS_NodeLookup_CNAME.server.raft: entering leader state: leader="Node at 127.0.0.1:16798 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.368Z [INFO]  TestDNS_NodeLookup_CNAME.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.368Z [INFO]  TestDNS_NodeLookup_CNAME.server: New leader elected: payload=Node-a05611be-9328-d9f3-a341-eacaa6b70736
>     writer.go:29: 2020-02-23T02:46:46.375Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.383Z [INFO]  TestDNS_NodeLookup_CNAME.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.383Z [INFO]  TestDNS_NodeLookup_CNAME.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.383Z [DEBUG] TestDNS_NodeLookup_CNAME.server: Skipping self join check for node since the cluster is too small: node=Node-a05611be-9328-d9f3-a341-eacaa6b70736
>     writer.go:29: 2020-02-23T02:46:46.383Z [INFO]  TestDNS_NodeLookup_CNAME.server: member joined, marking health alive: member=Node-a05611be-9328-d9f3-a341-eacaa6b70736
>     writer.go:29: 2020-02-23T02:46:46.575Z [DEBUG] TestDNS_NodeLookup_CNAME.dns: cname recurse RTT for name: name=www.google.com. rtt=54.032µs
>     writer.go:29: 2020-02-23T02:46:46.575Z [DEBUG] TestDNS_NodeLookup_CNAME.dns: request served from client: name=google.node.consul. type=ANY class=IN latency=194.193µs client=127.0.0.1:58553 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.575Z [INFO]  TestDNS_NodeLookup_CNAME: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.575Z [INFO]  TestDNS_NodeLookup_CNAME.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.575Z [DEBUG] TestDNS_NodeLookup_CNAME.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.575Z [WARN]  TestDNS_NodeLookup_CNAME.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.575Z [ERROR] TestDNS_NodeLookup_CNAME.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:46.575Z [DEBUG] TestDNS_NodeLookup_CNAME.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.577Z [WARN]  TestDNS_NodeLookup_CNAME.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.578Z [INFO]  TestDNS_NodeLookup_CNAME.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: consul server down
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: Stopping server: protocol=DNS address=127.0.0.1:16793 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: Stopping server: protocol=DNS address=127.0.0.1:16793 network=udp
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: Stopping server: protocol=HTTP address=127.0.0.1:16794 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.579Z [INFO]  TestDNS_NodeLookup_CNAME: Endpoints down
> === CONT  TestDNS_Over_TCP
> --- PASS: TestDNS_CaseInsensitiveNodeLookup (0.12s)
>     writer.go:29: 2020-02-23T02:46:46.575Z [WARN]  TestDNS_CaseInsensitiveNodeLookup: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.575Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.576Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.596Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4ae40657-14e2-2a46-6199-65e9a6f30804 Address:127.0.0.1:16822}]"
>     writer.go:29: 2020-02-23T02:46:46.596Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.serf.wan: serf: EventMemberJoin: Node-4ae40657-14e2-2a46-6199-65e9a6f30804.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.596Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.serf.lan: serf: EventMemberJoin: Node-4ae40657-14e2-2a46-6199-65e9a6f30804 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Started DNS server: address=127.0.0.1:16817 network=udp
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.raft: entering follower state: follower="Node at 127.0.0.1:16822 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server: Adding LAN server: server="Node-4ae40657-14e2-2a46-6199-65e9a6f30804 (Addr: tcp/127.0.0.1:16822) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server: Handled event for server in area: event=member-join server=Node-4ae40657-14e2-2a46-6199-65e9a6f30804.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Started DNS server: address=127.0.0.1:16817 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Started HTTP server: address=127.0.0.1:16818 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.597Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.648Z [WARN]  TestDNS_CaseInsensitiveNodeLookup.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.648Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.raft: entering candidate state: node="Node at 127.0.0.1:16822 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.652Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.652Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.server.raft: vote granted: from=4ae40657-14e2-2a46-6199-65e9a6f30804 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.652Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.652Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.raft: entering leader state: leader="Node at 127.0.0.1:16822 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.652Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.652Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server: New leader elected: payload=Node-4ae40657-14e2-2a46-6199-65e9a6f30804
>     writer.go:29: 2020-02-23T02:46:46.661Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.669Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.669Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.669Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.server: Skipping self join check for node since the cluster is too small: node=Node-4ae40657-14e2-2a46-6199-65e9a6f30804
>     writer.go:29: 2020-02-23T02:46:46.669Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server: member joined, marking health alive: member=Node-4ae40657-14e2-2a46-6199-65e9a6f30804
>     writer.go:29: 2020-02-23T02:46:46.685Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.dns: request served from client: name=fOO.node.dc1.consul. type=ANY class=IN latency=90.35µs client=127.0.0.1:34895 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.685Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.685Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.685Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.685Z [WARN]  TestDNS_CaseInsensitiveNodeLookup.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.685Z [ERROR] TestDNS_CaseInsensitiveNodeLookup.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:46.686Z [DEBUG] TestDNS_CaseInsensitiveNodeLookup.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.688Z [WARN]  TestDNS_CaseInsensitiveNodeLookup.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.689Z [INFO]  TestDNS_CaseInsensitiveNodeLookup.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: consul server down
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Stopping server: protocol=DNS address=127.0.0.1:16817 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Stopping server: protocol=DNS address=127.0.0.1:16817 network=udp
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Stopping server: protocol=HTTP address=127.0.0.1:16818 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.690Z [INFO]  TestDNS_CaseInsensitiveNodeLookup: Endpoints down
> === CONT  TestRecursorAddr
> --- PASS: TestRecursorAddr (0.00s)
> === CONT  TestDiscoveryChainRead
> --- PASS: TestDNS_NodeLookup_AAAA (0.30s)
>     writer.go:29: 2020-02-23T02:46:46.571Z [WARN]  TestDNS_NodeLookup_AAAA: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.571Z [DEBUG] TestDNS_NodeLookup_AAAA.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.572Z [DEBUG] TestDNS_NodeLookup_AAAA.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.593Z [INFO]  TestDNS_NodeLookup_AAAA.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:30ebb9cb-ff17-eb61-d8f9-31b2923e88bf Address:127.0.0.1:16816}]"
>     writer.go:29: 2020-02-23T02:46:46.593Z [INFO]  TestDNS_NodeLookup_AAAA.server.raft: entering follower state: follower="Node at 127.0.0.1:16816 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.594Z [INFO]  TestDNS_NodeLookup_AAAA.server.serf.wan: serf: EventMemberJoin: Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA.server.serf.lan: serf: EventMemberJoin: Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA: Started DNS server: address=127.0.0.1:16811 network=udp
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA.server: Adding LAN server: server="Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf (Addr: tcp/127.0.0.1:16816) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA.server: Handled event for server in area: event=member-join server=Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA: Started DNS server: address=127.0.0.1:16811 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA: Started HTTP server: address=127.0.0.1:16812 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.595Z [INFO]  TestDNS_NodeLookup_AAAA: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.661Z [WARN]  TestDNS_NodeLookup_AAAA.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.661Z [INFO]  TestDNS_NodeLookup_AAAA.server.raft: entering candidate state: node="Node at 127.0.0.1:16816 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.665Z [DEBUG] TestDNS_NodeLookup_AAAA.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.665Z [DEBUG] TestDNS_NodeLookup_AAAA.server.raft: vote granted: from=30ebb9cb-ff17-eb61-d8f9-31b2923e88bf term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.665Z [INFO]  TestDNS_NodeLookup_AAAA.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.665Z [INFO]  TestDNS_NodeLookup_AAAA.server.raft: entering leader state: leader="Node at 127.0.0.1:16816 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.665Z [INFO]  TestDNS_NodeLookup_AAAA.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.665Z [INFO]  TestDNS_NodeLookup_AAAA.server: New leader elected: payload=Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf
>     writer.go:29: 2020-02-23T02:46:46.672Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.680Z [INFO]  TestDNS_NodeLookup_AAAA.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.680Z [INFO]  TestDNS_NodeLookup_AAAA.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.680Z [DEBUG] TestDNS_NodeLookup_AAAA.server: Skipping self join check for node since the cluster is too small: node=Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf
>     writer.go:29: 2020-02-23T02:46:46.680Z [INFO]  TestDNS_NodeLookup_AAAA.server: member joined, marking health alive: member=Node-30ebb9cb-ff17-eb61-d8f9-31b2923e88bf
>     writer.go:29: 2020-02-23T02:46:46.834Z [DEBUG] TestDNS_NodeLookup_AAAA: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:46.837Z [INFO]  TestDNS_NodeLookup_AAAA: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.837Z [DEBUG] TestDNS_NodeLookup_AAAA: Node info in sync
>     writer.go:29: 2020-02-23T02:46:46.862Z [DEBUG] TestDNS_NodeLookup_AAAA.dns: request served from client: name=bar.node.consul. type=AAAA class=IN latency=87.027µs client=127.0.0.1:44401 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.862Z [INFO]  TestDNS_NodeLookup_AAAA: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.862Z [INFO]  TestDNS_NodeLookup_AAAA.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.862Z [DEBUG] TestDNS_NodeLookup_AAAA.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.862Z [WARN]  TestDNS_NodeLookup_AAAA.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.862Z [DEBUG] TestDNS_NodeLookup_AAAA.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.864Z [WARN]  TestDNS_NodeLookup_AAAA.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: consul server down
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: Stopping server: protocol=DNS address=127.0.0.1:16811 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: Stopping server: protocol=DNS address=127.0.0.1:16811 network=udp
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: Stopping server: protocol=HTTP address=127.0.0.1:16812 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.866Z [INFO]  TestDNS_NodeLookup_AAAA: Endpoints down
> === CONT  TestCoordinate_Update_ACLDeny
> --- PASS: TestDNS_NodeLookup_PeriodName (0.39s)
>     writer.go:29: 2020-02-23T02:46:46.499Z [WARN]  TestDNS_NodeLookup_PeriodName: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.500Z [DEBUG] TestDNS_NodeLookup_PeriodName.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.527Z [DEBUG] TestDNS_NodeLookup_PeriodName.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.544Z [INFO]  TestDNS_NodeLookup_PeriodName.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:108b4019-09b1-66bd-7155-94f8bd5bba99 Address:127.0.0.1:16810}]"
>     writer.go:29: 2020-02-23T02:46:46.545Z [INFO]  TestDNS_NodeLookup_PeriodName.server.serf.wan: serf: EventMemberJoin: Node-108b4019-09b1-66bd-7155-94f8bd5bba99.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.545Z [INFO]  TestDNS_NodeLookup_PeriodName.server.serf.lan: serf: EventMemberJoin: Node-108b4019-09b1-66bd-7155-94f8bd5bba99 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.545Z [INFO]  TestDNS_NodeLookup_PeriodName: Started DNS server: address=127.0.0.1:16805 network=udp
>     writer.go:29: 2020-02-23T02:46:46.545Z [INFO]  TestDNS_NodeLookup_PeriodName.server.raft: entering follower state: follower="Node at 127.0.0.1:16810 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.545Z [INFO]  TestDNS_NodeLookup_PeriodName.server: Adding LAN server: server="Node-108b4019-09b1-66bd-7155-94f8bd5bba99 (Addr: tcp/127.0.0.1:16810) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.545Z [INFO]  TestDNS_NodeLookup_PeriodName.server: Handled event for server in area: event=member-join server=Node-108b4019-09b1-66bd-7155-94f8bd5bba99.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.546Z [INFO]  TestDNS_NodeLookup_PeriodName: Started DNS server: address=127.0.0.1:16805 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.546Z [INFO]  TestDNS_NodeLookup_PeriodName: Started HTTP server: address=127.0.0.1:16806 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.546Z [INFO]  TestDNS_NodeLookup_PeriodName: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.610Z [WARN]  TestDNS_NodeLookup_PeriodName.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.610Z [INFO]  TestDNS_NodeLookup_PeriodName.server.raft: entering candidate state: node="Node at 127.0.0.1:16810 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.615Z [DEBUG] TestDNS_NodeLookup_PeriodName.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.615Z [DEBUG] TestDNS_NodeLookup_PeriodName.server.raft: vote granted: from=108b4019-09b1-66bd-7155-94f8bd5bba99 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.615Z [INFO]  TestDNS_NodeLookup_PeriodName.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.615Z [INFO]  TestDNS_NodeLookup_PeriodName.server.raft: entering leader state: leader="Node at 127.0.0.1:16810 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.615Z [INFO]  TestDNS_NodeLookup_PeriodName.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.615Z [INFO]  TestDNS_NodeLookup_PeriodName.server: New leader elected: payload=Node-108b4019-09b1-66bd-7155-94f8bd5bba99
>     writer.go:29: 2020-02-23T02:46:46.622Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.631Z [INFO]  TestDNS_NodeLookup_PeriodName.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.631Z [INFO]  TestDNS_NodeLookup_PeriodName.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.631Z [DEBUG] TestDNS_NodeLookup_PeriodName.server: Skipping self join check for node since the cluster is too small: node=Node-108b4019-09b1-66bd-7155-94f8bd5bba99
>     writer.go:29: 2020-02-23T02:46:46.631Z [INFO]  TestDNS_NodeLookup_PeriodName.server: member joined, marking health alive: member=Node-108b4019-09b1-66bd-7155-94f8bd5bba99
>     writer.go:29: 2020-02-23T02:46:46.874Z [DEBUG] TestDNS_NodeLookup_PeriodName.dns: request served from client: name=foo.bar.node.consul. type=ANY class=IN latency=72.918µs client=127.0.0.1:41821 client_network=udp
>     writer.go:29: 2020-02-23T02:46:46.874Z [INFO]  TestDNS_NodeLookup_PeriodName: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.874Z [INFO]  TestDNS_NodeLookup_PeriodName.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.874Z [DEBUG] TestDNS_NodeLookup_PeriodName.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.874Z [WARN]  TestDNS_NodeLookup_PeriodName.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.874Z [ERROR] TestDNS_NodeLookup_PeriodName.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:46.874Z [DEBUG] TestDNS_NodeLookup_PeriodName.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.876Z [WARN]  TestDNS_NodeLookup_PeriodName.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.887Z [INFO]  TestDNS_NodeLookup_PeriodName.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: consul server down
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: Stopping server: protocol=DNS address=127.0.0.1:16805 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: Stopping server: protocol=DNS address=127.0.0.1:16805 network=udp
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: Stopping server: protocol=HTTP address=127.0.0.1:16806 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.888Z [INFO]  TestDNS_NodeLookup_PeriodName: Endpoints down
> === CONT  TestCoordinate_Update
> --- PASS: TestDNS_Over_TCP (0.41s)
>     writer.go:29: 2020-02-23T02:46:46.586Z [WARN]  TestDNS_Over_TCP: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.586Z [DEBUG] TestDNS_Over_TCP.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.586Z [DEBUG] TestDNS_Over_TCP.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.598Z [INFO]  TestDNS_Over_TCP.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:70d8be8c-02a2-ae24-bbe1-319c2e827725 Address:127.0.0.1:16828}]"
>     writer.go:29: 2020-02-23T02:46:46.599Z [INFO]  TestDNS_Over_TCP.server.serf.wan: serf: EventMemberJoin: Node-70d8be8c-02a2-ae24-bbe1-319c2e827725.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.599Z [INFO]  TestDNS_Over_TCP.server.raft: entering follower state: follower="Node at 127.0.0.1:16828 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.599Z [INFO]  TestDNS_Over_TCP.server.serf.lan: serf: EventMemberJoin: Node-70d8be8c-02a2-ae24-bbe1-319c2e827725 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.600Z [INFO]  TestDNS_Over_TCP.server: Adding LAN server: server="Node-70d8be8c-02a2-ae24-bbe1-319c2e827725 (Addr: tcp/127.0.0.1:16828) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.600Z [INFO]  TestDNS_Over_TCP.server: Handled event for server in area: event=member-join server=Node-70d8be8c-02a2-ae24-bbe1-319c2e827725.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.600Z [INFO]  TestDNS_Over_TCP: Started DNS server: address=127.0.0.1:16823 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.600Z [INFO]  TestDNS_Over_TCP: Started DNS server: address=127.0.0.1:16823 network=udp
>     writer.go:29: 2020-02-23T02:46:46.600Z [INFO]  TestDNS_Over_TCP: Started HTTP server: address=127.0.0.1:16824 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.600Z [INFO]  TestDNS_Over_TCP: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.639Z [WARN]  TestDNS_Over_TCP.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.639Z [INFO]  TestDNS_Over_TCP.server.raft: entering candidate state: node="Node at 127.0.0.1:16828 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.642Z [DEBUG] TestDNS_Over_TCP.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.642Z [DEBUG] TestDNS_Over_TCP.server.raft: vote granted: from=70d8be8c-02a2-ae24-bbe1-319c2e827725 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.642Z [INFO]  TestDNS_Over_TCP.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.642Z [INFO]  TestDNS_Over_TCP.server.raft: entering leader state: leader="Node at 127.0.0.1:16828 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.642Z [INFO]  TestDNS_Over_TCP.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.642Z [INFO]  TestDNS_Over_TCP.server: New leader elected: payload=Node-70d8be8c-02a2-ae24-bbe1-319c2e827725
>     writer.go:29: 2020-02-23T02:46:46.649Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.657Z [INFO]  TestDNS_Over_TCP.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.657Z [INFO]  TestDNS_Over_TCP.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.657Z [DEBUG] TestDNS_Over_TCP.server: Skipping self join check for node since the cluster is too small: node=Node-70d8be8c-02a2-ae24-bbe1-319c2e827725
>     writer.go:29: 2020-02-23T02:46:46.657Z [INFO]  TestDNS_Over_TCP.server: member joined, marking health alive: member=Node-70d8be8c-02a2-ae24-bbe1-319c2e827725
>     writer.go:29: 2020-02-23T02:46:46.965Z [DEBUG] TestDNS_Over_TCP: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:46.968Z [INFO]  TestDNS_Over_TCP: Synced node info
>     writer.go:29: 2020-02-23T02:46:46.980Z [DEBUG] TestDNS_Over_TCP.dns: request served from client: name=foo.node.dc1.consul. type=ANY class=IN latency=74.719µs client=127.0.0.1:37338 client_network=tcp
>     writer.go:29: 2020-02-23T02:46:46.980Z [INFO]  TestDNS_Over_TCP: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:46.980Z [INFO]  TestDNS_Over_TCP.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:46.980Z [DEBUG] TestDNS_Over_TCP.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.980Z [WARN]  TestDNS_Over_TCP.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.980Z [DEBUG] TestDNS_Over_TCP.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.982Z [WARN]  TestDNS_Over_TCP.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: consul server down
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: shutdown complete
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: Stopping server: protocol=DNS address=127.0.0.1:16823 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: Stopping server: protocol=DNS address=127.0.0.1:16823 network=udp
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: Stopping server: protocol=HTTP address=127.0.0.1:16824 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:46.984Z [INFO]  TestDNS_Over_TCP: Endpoints down
> === CONT  TestCoordinate_Node
> === RUN   TestDiscoveryChainRead/GET:_error_on_no_service_name
> === RUN   TestDiscoveryChainRead/GET:_read_default_chain
> === RUN   TestDiscoveryChainRead/GET:_read_default_chain;_evaluate_in_dc2
> === RUN   TestDiscoveryChainRead/GET:_read_default_chain_with_cache
> === RUN   TestDiscoveryChainRead/POST:_error_on_no_service_name
> === RUN   TestDiscoveryChainRead/POST:_read_default_chain
> === RUN   TestDiscoveryChainRead/POST:_read_default_chain;_evaluate_in_dc2
> === RUN   TestDiscoveryChainRead/POST:_read_default_chain_with_cache
> === RUN   TestDiscoveryChainRead/GET:_read_modified_chain
> === RUN   TestDiscoveryChainRead/POST:_read_modified_chain_with_overrides_(camel_case)
> === RUN   TestDiscoveryChainRead/POST:_read_modified_chain_with_overrides_(snake_case)
> --- PASS: TestDiscoveryChainRead (0.39s)
>     writer.go:29: 2020-02-23T02:46:46.697Z [WARN]  TestDiscoveryChainRead: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.697Z [DEBUG] TestDiscoveryChainRead.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.698Z [DEBUG] TestDiscoveryChainRead.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.708Z [INFO]  TestDiscoveryChainRead.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ae5a7e36-4233-a663-d2a7-8bf925073bec Address:127.0.0.1:16834}]"
>     writer.go:29: 2020-02-23T02:46:46.708Z [INFO]  TestDiscoveryChainRead.server.raft: entering follower state: follower="Node at 127.0.0.1:16834 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.708Z [INFO]  TestDiscoveryChainRead.server.serf.wan: serf: EventMemberJoin: Node-ae5a7e36-4233-a663-d2a7-8bf925073bec.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.709Z [INFO]  TestDiscoveryChainRead.server.serf.lan: serf: EventMemberJoin: Node-ae5a7e36-4233-a663-d2a7-8bf925073bec 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.709Z [INFO]  TestDiscoveryChainRead.server: Handled event for server in area: event=member-join server=Node-ae5a7e36-4233-a663-d2a7-8bf925073bec.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.709Z [INFO]  TestDiscoveryChainRead.server: Adding LAN server: server="Node-ae5a7e36-4233-a663-d2a7-8bf925073bec (Addr: tcp/127.0.0.1:16834) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.710Z [INFO]  TestDiscoveryChainRead: Started DNS server: address=127.0.0.1:16829 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.710Z [INFO]  TestDiscoveryChainRead: Started DNS server: address=127.0.0.1:16829 network=udp
>     writer.go:29: 2020-02-23T02:46:46.710Z [INFO]  TestDiscoveryChainRead: Started HTTP server: address=127.0.0.1:16830 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.710Z [INFO]  TestDiscoveryChainRead: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.769Z [WARN]  TestDiscoveryChainRead.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.769Z [INFO]  TestDiscoveryChainRead.server.raft: entering candidate state: node="Node at 127.0.0.1:16834 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.772Z [DEBUG] TestDiscoveryChainRead.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.772Z [DEBUG] TestDiscoveryChainRead.server.raft: vote granted: from=ae5a7e36-4233-a663-d2a7-8bf925073bec term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.772Z [INFO]  TestDiscoveryChainRead.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.772Z [INFO]  TestDiscoveryChainRead.server.raft: entering leader state: leader="Node at 127.0.0.1:16834 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.772Z [INFO]  TestDiscoveryChainRead.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.772Z [INFO]  TestDiscoveryChainRead.server: New leader elected: payload=Node-ae5a7e36-4233-a663-d2a7-8bf925073bec
>     writer.go:29: 2020-02-23T02:46:46.779Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.795Z [INFO]  TestDiscoveryChainRead.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.795Z [INFO]  TestDiscoveryChainRead.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.795Z [DEBUG] TestDiscoveryChainRead.server: Skipping self join check for node since the cluster is too small: node=Node-ae5a7e36-4233-a663-d2a7-8bf925073bec
>     writer.go:29: 2020-02-23T02:46:46.795Z [INFO]  TestDiscoveryChainRead.server: member joined, marking health alive: member=Node-ae5a7e36-4233-a663-d2a7-8bf925073bec
>     --- PASS: TestDiscoveryChainRead/GET:_error_on_no_service_name (0.00s)
>     --- PASS: TestDiscoveryChainRead/GET:_read_default_chain (0.00s)
>     --- PASS: TestDiscoveryChainRead/GET:_read_default_chain;_evaluate_in_dc2 (0.00s)
>     --- PASS: TestDiscoveryChainRead/GET:_read_default_chain_with_cache (0.00s)
>     --- PASS: TestDiscoveryChainRead/POST:_error_on_no_service_name (0.00s)
>     --- PASS: TestDiscoveryChainRead/POST:_read_default_chain (0.00s)
>     --- PASS: TestDiscoveryChainRead/POST:_read_default_chain;_evaluate_in_dc2 (0.00s)
>     --- PASS: TestDiscoveryChainRead/POST:_read_default_chain_with_cache (0.00s)
>     writer.go:29: 2020-02-23T02:46:47.068Z [DEBUG] TestDiscoveryChainRead: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.069Z [INFO]  TestDiscoveryChainRead: Synced node info
>     --- PASS: TestDiscoveryChainRead/GET:_read_modified_chain (0.03s)
>     --- PASS: TestDiscoveryChainRead/POST:_read_modified_chain_with_overrides_(camel_case) (0.00s)
>     --- PASS: TestDiscoveryChainRead/POST:_read_modified_chain_with_overrides_(snake_case) (0.00s)
>     writer.go:29: 2020-02-23T02:46:47.080Z [INFO]  TestDiscoveryChainRead: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.080Z [INFO]  TestDiscoveryChainRead.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.080Z [DEBUG] TestDiscoveryChainRead.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.080Z [WARN]  TestDiscoveryChainRead.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.080Z [DEBUG] TestDiscoveryChainRead.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.082Z [WARN]  TestDiscoveryChainRead.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.084Z [INFO]  TestDiscoveryChainRead.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: consul server down
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: Stopping server: protocol=DNS address=127.0.0.1:16829 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: Stopping server: protocol=DNS address=127.0.0.1:16829 network=udp
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: Stopping server: protocol=HTTP address=127.0.0.1:16830 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.085Z [INFO]  TestDiscoveryChainRead: Endpoints down
> === CONT  TestCoordinate_Disabled_Response
> === RUN   TestCoordinate_Update_ACLDeny/no_token
> === RUN   TestCoordinate_Update_ACLDeny/valid_token
> --- PASS: TestCoordinate_Update_ACLDeny (0.42s)
>     writer.go:29: 2020-02-23T02:46:46.874Z [WARN]  TestCoordinate_Update_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:46.874Z [WARN]  TestCoordinate_Update_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.875Z [DEBUG] TestCoordinate_Update_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.875Z [DEBUG] TestCoordinate_Update_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.899Z [INFO]  TestCoordinate_Update_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8e0c825d-7e6c-5697-981b-913c3426e6d1 Address:127.0.0.1:16840}]"
>     writer.go:29: 2020-02-23T02:46:46.899Z [INFO]  TestCoordinate_Update_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16840 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.899Z [INFO]  TestCoordinate_Update_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-8e0c825d-7e6c-5697-981b-913c3426e6d1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-8e0c825d-7e6c-5697-981b-913c3426e6d1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny: Started DNS server: address=127.0.0.1:16835 network=udp
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny.server: Adding LAN server: server="Node-8e0c825d-7e6c-5697-981b-913c3426e6d1 (Addr: tcp/127.0.0.1:16840) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny.server: Handled event for server in area: event=member-join server=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny: Started DNS server: address=127.0.0.1:16835 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny: Started HTTP server: address=127.0.0.1:16836 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.900Z [INFO]  TestCoordinate_Update_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.959Z [WARN]  TestCoordinate_Update_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.959Z [INFO]  TestCoordinate_Update_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16840 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.962Z [DEBUG] TestCoordinate_Update_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.962Z [DEBUG] TestCoordinate_Update_ACLDeny.server.raft: vote granted: from=8e0c825d-7e6c-5697-981b-913c3426e6d1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.962Z [INFO]  TestCoordinate_Update_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.962Z [INFO]  TestCoordinate_Update_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16840 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.962Z [INFO]  TestCoordinate_Update_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.963Z [INFO]  TestCoordinate_Update_ACLDeny.server: New leader elected: payload=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     writer.go:29: 2020-02-23T02:46:46.965Z [INFO]  TestCoordinate_Update_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:46.967Z [INFO]  TestCoordinate_Update_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:46.967Z [WARN]  TestCoordinate_Update_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:46.970Z [INFO]  TestCoordinate_Update_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:46.974Z [INFO]  TestCoordinate_Update_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:46.974Z [INFO]  TestCoordinate_Update_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:46.974Z [INFO]  TestCoordinate_Update_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:46.974Z [INFO]  TestCoordinate_Update_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     writer.go:29: 2020-02-23T02:46:46.974Z [INFO]  TestCoordinate_Update_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-8e0c825d-7e6c-5697-981b-913c3426e6d1.dc1
>     writer.go:29: 2020-02-23T02:46:46.974Z [INFO]  TestCoordinate_Update_ACLDeny.server: Handled event for server in area: event=member-update server=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.981Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.991Z [INFO]  TestCoordinate_Update_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.991Z [INFO]  TestCoordinate_Update_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.991Z [DEBUG] TestCoordinate_Update_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     writer.go:29: 2020-02-23T02:46:46.991Z [INFO]  TestCoordinate_Update_ACLDeny.server: member joined, marking health alive: member=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     writer.go:29: 2020-02-23T02:46:46.994Z [DEBUG] TestCoordinate_Update_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     writer.go:29: 2020-02-23T02:46:47.031Z [DEBUG] TestCoordinate_Update_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.034Z [INFO]  TestCoordinate_Update_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:46:47.034Z [DEBUG] TestCoordinate_Update_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.acl: dropping node from result due to ACLs: node=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.acl: dropping node from result due to ACLs: node=Node-8e0c825d-7e6c-5697-981b-913c3426e6d1
>     --- PASS: TestCoordinate_Update_ACLDeny/no_token (0.00s)
>     --- PASS: TestCoordinate_Update_ACLDeny/valid_token (0.00s)
>     writer.go:29: 2020-02-23T02:46:47.278Z [INFO]  TestCoordinate_Update_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.278Z [INFO]  TestCoordinate_Update_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.278Z [WARN]  TestCoordinate_Update_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:47.278Z [DEBUG] TestCoordinate_Update_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.280Z [WARN]  TestCoordinate_Update_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16835 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16835 network=udp
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16836 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.283Z [INFO]  TestCoordinate_Update_ACLDeny: Endpoints down
> === CONT  TestConnectCAConfig
> === RUN   TestConnectCAConfig/basic
> --- PASS: TestCoordinate_Node (0.46s)
>     writer.go:29: 2020-02-23T02:46:46.991Z [WARN]  TestCoordinate_Node: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.991Z [DEBUG] TestCoordinate_Node.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.992Z [DEBUG] TestCoordinate_Node.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.001Z [INFO]  TestCoordinate_Node.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:86918ebb-237e-43ef-0d1e-fb0e27ca3d17 Address:127.0.0.1:16852}]"
>     writer.go:29: 2020-02-23T02:46:47.001Z [INFO]  TestCoordinate_Node.server.raft: entering follower state: follower="Node at 127.0.0.1:16852 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.002Z [INFO]  TestCoordinate_Node.server.serf.wan: serf: EventMemberJoin: Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.002Z [INFO]  TestCoordinate_Node.server.serf.lan: serf: EventMemberJoin: Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.002Z [INFO]  TestCoordinate_Node.server: Handled event for server in area: event=member-join server=Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.003Z [INFO]  TestCoordinate_Node.server: Adding LAN server: server="Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17 (Addr: tcp/127.0.0.1:16852) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.003Z [INFO]  TestCoordinate_Node: Started DNS server: address=127.0.0.1:16847 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.003Z [INFO]  TestCoordinate_Node: Started DNS server: address=127.0.0.1:16847 network=udp
>     writer.go:29: 2020-02-23T02:46:47.003Z [INFO]  TestCoordinate_Node: Started HTTP server: address=127.0.0.1:16848 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.003Z [INFO]  TestCoordinate_Node: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.065Z [WARN]  TestCoordinate_Node.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.065Z [INFO]  TestCoordinate_Node.server.raft: entering candidate state: node="Node at 127.0.0.1:16852 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.069Z [DEBUG] TestCoordinate_Node.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.069Z [DEBUG] TestCoordinate_Node.server.raft: vote granted: from=86918ebb-237e-43ef-0d1e-fb0e27ca3d17 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.069Z [INFO]  TestCoordinate_Node.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.069Z [INFO]  TestCoordinate_Node.server.raft: entering leader state: leader="Node at 127.0.0.1:16852 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.069Z [INFO]  TestCoordinate_Node.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.069Z [INFO]  TestCoordinate_Node.server: New leader elected: payload=Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17
>     writer.go:29: 2020-02-23T02:46:47.076Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.089Z [INFO]  TestCoordinate_Node.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.089Z [INFO]  TestCoordinate_Node.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.089Z [DEBUG] TestCoordinate_Node.server: Skipping self join check for node since the cluster is too small: node=Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17
>     writer.go:29: 2020-02-23T02:46:47.089Z [INFO]  TestCoordinate_Node.server: member joined, marking health alive: member=Node-86918ebb-237e-43ef-0d1e-fb0e27ca3d17
>     writer.go:29: 2020-02-23T02:46:47.330Z [DEBUG] TestCoordinate_Node: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.332Z [INFO]  TestCoordinate_Node: Synced node info
>     writer.go:29: 2020-02-23T02:46:47.332Z [DEBUG] TestCoordinate_Node: Node info in sync
>     writer.go:29: 2020-02-23T02:46:47.437Z [INFO]  TestCoordinate_Node: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.437Z [INFO]  TestCoordinate_Node.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.437Z [DEBUG] TestCoordinate_Node.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.437Z [WARN]  TestCoordinate_Node.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.437Z [DEBUG] TestCoordinate_Node.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.439Z [WARN]  TestCoordinate_Node.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.440Z [INFO]  TestCoordinate_Node.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: consul server down
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: Stopping server: protocol=DNS address=127.0.0.1:16847 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: Stopping server: protocol=DNS address=127.0.0.1:16847 network=udp
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: Stopping server: protocol=HTTP address=127.0.0.1:16848 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.441Z [INFO]  TestCoordinate_Node: Endpoints down
> === CONT  TestConnectCARoots_list
> === RUN   TestCoordinate_Disabled_Response/0
> === RUN   TestCoordinate_Disabled_Response/1
> === RUN   TestCoordinate_Disabled_Response/2
> === RUN   TestCoordinate_Disabled_Response/3
> --- PASS: TestCoordinate_Disabled_Response (0.42s)
>     writer.go:29: 2020-02-23T02:46:47.094Z [WARN]  TestCoordinate_Disabled_Response: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.094Z [DEBUG] TestCoordinate_Disabled_Response.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.095Z [DEBUG] TestCoordinate_Disabled_Response.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.106Z [INFO]  TestCoordinate_Disabled_Response.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b5690e74-65b0-5996-a5fe-c6300a7d76b6 Address:127.0.0.1:16858}]"
>     writer.go:29: 2020-02-23T02:46:47.107Z [INFO]  TestCoordinate_Disabled_Response.server.serf.wan: serf: EventMemberJoin: Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.107Z [INFO]  TestCoordinate_Disabled_Response.server.serf.lan: serf: EventMemberJoin: Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.107Z [INFO]  TestCoordinate_Disabled_Response: Started DNS server: address=127.0.0.1:16853 network=udp
>     writer.go:29: 2020-02-23T02:46:47.107Z [INFO]  TestCoordinate_Disabled_Response.server.raft: entering follower state: follower="Node at 127.0.0.1:16858 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.108Z [INFO]  TestCoordinate_Disabled_Response.server: Adding LAN server: server="Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6 (Addr: tcp/127.0.0.1:16858) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.108Z [INFO]  TestCoordinate_Disabled_Response.server: Handled event for server in area: event=member-join server=Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.108Z [INFO]  TestCoordinate_Disabled_Response: Started DNS server: address=127.0.0.1:16853 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.108Z [INFO]  TestCoordinate_Disabled_Response: Started HTTP server: address=127.0.0.1:16854 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.108Z [INFO]  TestCoordinate_Disabled_Response: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.155Z [WARN]  TestCoordinate_Disabled_Response.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.155Z [INFO]  TestCoordinate_Disabled_Response.server.raft: entering candidate state: node="Node at 127.0.0.1:16858 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.158Z [DEBUG] TestCoordinate_Disabled_Response.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.158Z [DEBUG] TestCoordinate_Disabled_Response.server.raft: vote granted: from=b5690e74-65b0-5996-a5fe-c6300a7d76b6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.158Z [INFO]  TestCoordinate_Disabled_Response.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.158Z [INFO]  TestCoordinate_Disabled_Response.server.raft: entering leader state: leader="Node at 127.0.0.1:16858 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.158Z [INFO]  TestCoordinate_Disabled_Response.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.158Z [INFO]  TestCoordinate_Disabled_Response.server: New leader elected: payload=Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6
>     writer.go:29: 2020-02-23T02:46:47.165Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.174Z [INFO]  TestCoordinate_Disabled_Response.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.174Z [INFO]  TestCoordinate_Disabled_Response.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.174Z [DEBUG] TestCoordinate_Disabled_Response.server: Skipping self join check for node since the cluster is too small: node=Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6
>     writer.go:29: 2020-02-23T02:46:47.174Z [INFO]  TestCoordinate_Disabled_Response.server: member joined, marking health alive: member=Node-b5690e74-65b0-5996-a5fe-c6300a7d76b6
>     --- PASS: TestCoordinate_Disabled_Response/0 (0.00s)
>     --- PASS: TestCoordinate_Disabled_Response/1 (0.00s)
>     --- PASS: TestCoordinate_Disabled_Response/2 (0.00s)
>     --- PASS: TestCoordinate_Disabled_Response/3 (0.00s)
>     writer.go:29: 2020-02-23T02:46:47.497Z [INFO]  TestCoordinate_Disabled_Response: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.497Z [INFO]  TestCoordinate_Disabled_Response.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.497Z [DEBUG] TestCoordinate_Disabled_Response.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.497Z [WARN]  TestCoordinate_Disabled_Response.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.497Z [ERROR] TestCoordinate_Disabled_Response.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:47.497Z [DEBUG] TestCoordinate_Disabled_Response.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.499Z [WARN]  TestCoordinate_Disabled_Response.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: consul server down
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: Stopping server: protocol=DNS address=127.0.0.1:16853 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: Stopping server: protocol=DNS address=127.0.0.1:16853 network=udp
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: Stopping server: protocol=HTTP address=127.0.0.1:16854 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.501Z [INFO]  TestCoordinate_Disabled_Response: Endpoints down
> === CONT  TestConnectCARoots_empty
> --- PASS: TestCoordinate_Update (0.69s)
>     writer.go:29: 2020-02-23T02:46:46.897Z [WARN]  TestCoordinate_Update: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:46.897Z [DEBUG] TestCoordinate_Update.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:46.897Z [DEBUG] TestCoordinate_Update.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:46.909Z [INFO]  TestCoordinate_Update.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:137267ab-c609-71b0-0459-94322b336b1b Address:127.0.0.1:16846}]"
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update.server.serf.wan: serf: EventMemberJoin: Node-137267ab-c609-71b0-0459-94322b336b1b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update.server.serf.lan: serf: EventMemberJoin: Node-137267ab-c609-71b0-0459-94322b336b1b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update: Started DNS server: address=127.0.0.1:16841 network=udp
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update.server.raft: entering follower state: follower="Node at 127.0.0.1:16846 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update.server: Adding LAN server: server="Node-137267ab-c609-71b0-0459-94322b336b1b (Addr: tcp/127.0.0.1:16846) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update.server: Handled event for server in area: event=member-join server=Node-137267ab-c609-71b0-0459-94322b336b1b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:46.910Z [INFO]  TestCoordinate_Update: Started DNS server: address=127.0.0.1:16841 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.911Z [INFO]  TestCoordinate_Update: Started HTTP server: address=127.0.0.1:16842 network=tcp
>     writer.go:29: 2020-02-23T02:46:46.911Z [INFO]  TestCoordinate_Update: started state syncer
>     writer.go:29: 2020-02-23T02:46:46.955Z [WARN]  TestCoordinate_Update.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:46.955Z [INFO]  TestCoordinate_Update.server.raft: entering candidate state: node="Node at 127.0.0.1:16846 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:46.958Z [DEBUG] TestCoordinate_Update.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:46.958Z [DEBUG] TestCoordinate_Update.server.raft: vote granted: from=137267ab-c609-71b0-0459-94322b336b1b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:46.958Z [INFO]  TestCoordinate_Update.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:46.958Z [INFO]  TestCoordinate_Update.server.raft: entering leader state: leader="Node at 127.0.0.1:16846 [Leader]"
>     writer.go:29: 2020-02-23T02:46:46.958Z [INFO]  TestCoordinate_Update.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:46.958Z [INFO]  TestCoordinate_Update.server: New leader elected: payload=Node-137267ab-c609-71b0-0459-94322b336b1b
>     writer.go:29: 2020-02-23T02:46:46.966Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:46.976Z [INFO]  TestCoordinate_Update.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:46.976Z [INFO]  TestCoordinate_Update.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:46.976Z [DEBUG] TestCoordinate_Update.server: Skipping self join check for node since the cluster is too small: node=Node-137267ab-c609-71b0-0459-94322b336b1b
>     writer.go:29: 2020-02-23T02:46:46.976Z [INFO]  TestCoordinate_Update.server: member joined, marking health alive: member=Node-137267ab-c609-71b0-0459-94322b336b1b
>     writer.go:29: 2020-02-23T02:46:47.173Z [DEBUG] TestCoordinate_Update: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.176Z [INFO]  TestCoordinate_Update: Synced node info
>     writer.go:29: 2020-02-23T02:46:47.176Z [DEBUG] TestCoordinate_Update: Node info in sync
>     writer.go:29: 2020-02-23T02:46:47.507Z [DEBUG] TestCoordinate_Update: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.507Z [DEBUG] TestCoordinate_Update: Node info in sync
>     writer.go:29: 2020-02-23T02:46:47.572Z [INFO]  TestCoordinate_Update: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.572Z [INFO]  TestCoordinate_Update.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.572Z [DEBUG] TestCoordinate_Update.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.572Z [WARN]  TestCoordinate_Update.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.572Z [DEBUG] TestCoordinate_Update.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.574Z [WARN]  TestCoordinate_Update.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: consul server down
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: Stopping server: protocol=DNS address=127.0.0.1:16841 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: Stopping server: protocol=DNS address=127.0.0.1:16841 network=udp
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: Stopping server: protocol=HTTP address=127.0.0.1:16842 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.577Z [INFO]  TestCoordinate_Update: Endpoints down
> === CONT  TestConfig_Apply_ProxyDefaultsExpose
> --- PASS: TestConnectCARoots_empty (0.14s)
>     writer.go:29: 2020-02-23T02:46:47.520Z [WARN]  TestConnectCARoots_empty: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.520Z [DEBUG] TestConnectCARoots_empty.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.520Z [DEBUG] TestConnectCARoots_empty.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.530Z [INFO]  TestConnectCARoots_empty.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9d429e87-7073-de6d-0961-40a009d399d7 Address:127.0.0.1:16876}]"
>     writer.go:29: 2020-02-23T02:46:47.530Z [INFO]  TestConnectCARoots_empty.server.serf.wan: serf: EventMemberJoin: Node-9d429e87-7073-de6d-0961-40a009d399d7.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.531Z [INFO]  TestConnectCARoots_empty.server.serf.lan: serf: EventMemberJoin: Node-9d429e87-7073-de6d-0961-40a009d399d7 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.531Z [INFO]  TestConnectCARoots_empty: Started DNS server: address=127.0.0.1:16871 network=udp
>     writer.go:29: 2020-02-23T02:46:47.531Z [INFO]  TestConnectCARoots_empty.server.raft: entering follower state: follower="Node at 127.0.0.1:16876 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.531Z [INFO]  TestConnectCARoots_empty.server: Adding LAN server: server="Node-9d429e87-7073-de6d-0961-40a009d399d7 (Addr: tcp/127.0.0.1:16876) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.531Z [INFO]  TestConnectCARoots_empty.server: Handled event for server in area: event=member-join server=Node-9d429e87-7073-de6d-0961-40a009d399d7.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.531Z [INFO]  TestConnectCARoots_empty: Started DNS server: address=127.0.0.1:16871 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.532Z [INFO]  TestConnectCARoots_empty: Started HTTP server: address=127.0.0.1:16872 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.532Z [INFO]  TestConnectCARoots_empty: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.567Z [WARN]  TestConnectCARoots_empty.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.567Z [INFO]  TestConnectCARoots_empty.server.raft: entering candidate state: node="Node at 127.0.0.1:16876 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.570Z [DEBUG] TestConnectCARoots_empty.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.570Z [DEBUG] TestConnectCARoots_empty.server.raft: vote granted: from=9d429e87-7073-de6d-0961-40a009d399d7 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.570Z [INFO]  TestConnectCARoots_empty.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.570Z [INFO]  TestConnectCARoots_empty.server.raft: entering leader state: leader="Node at 127.0.0.1:16876 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.570Z [INFO]  TestConnectCARoots_empty.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.570Z [INFO]  TestConnectCARoots_empty.server: New leader elected: payload=Node-9d429e87-7073-de6d-0961-40a009d399d7
>     writer.go:29: 2020-02-23T02:46:47.576Z [INFO]  TestConnectCARoots_empty.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.576Z [DEBUG] TestConnectCARoots_empty.server: Skipping self join check for node since the cluster is too small: node=Node-9d429e87-7073-de6d-0961-40a009d399d7
>     writer.go:29: 2020-02-23T02:46:47.576Z [INFO]  TestConnectCARoots_empty.server: member joined, marking health alive: member=Node-9d429e87-7073-de6d-0961-40a009d399d7
>     writer.go:29: 2020-02-23T02:46:47.584Z [DEBUG] TestConnectCARoots_empty: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.589Z [INFO]  TestConnectCARoots_empty: Synced node info
>     writer.go:29: 2020-02-23T02:46:47.641Z [INFO]  TestConnectCARoots_empty: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.641Z [INFO]  TestConnectCARoots_empty.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.641Z [DEBUG] TestConnectCARoots_empty.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.641Z [WARN]  TestConnectCARoots_empty.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.641Z [DEBUG] TestConnectCARoots_empty.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.643Z [WARN]  TestConnectCARoots_empty.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: consul server down
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: Stopping server: protocol=DNS address=127.0.0.1:16871 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: Stopping server: protocol=DNS address=127.0.0.1:16871 network=udp
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: Stopping server: protocol=HTTP address=127.0.0.1:16872 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.645Z [INFO]  TestConnectCARoots_empty: Endpoints down
> === CONT  TestConfig_Apply_Decoding
> === RUN   TestConnectCAConfig/basic_with_IntermediateCertTTL
> === RUN   TestConfig_Apply_Decoding/No_Kind
> === RUN   TestConfig_Apply_Decoding/Kind_Not_String
> === RUN   TestConfig_Apply_Decoding/Lowercase_kind
> --- PASS: TestConfig_Apply_Decoding (0.12s)
>     writer.go:29: 2020-02-23T02:46:47.653Z [WARN]  TestConfig_Apply_Decoding: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.653Z [DEBUG] TestConfig_Apply_Decoding.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.654Z [DEBUG] TestConfig_Apply_Decoding.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.663Z [INFO]  TestConfig_Apply_Decoding.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e Address:127.0.0.1:16888}]"
>     writer.go:29: 2020-02-23T02:46:47.663Z [INFO]  TestConfig_Apply_Decoding.server.serf.wan: serf: EventMemberJoin: Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding.server.serf.lan: serf: EventMemberJoin: Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding: Started DNS server: address=127.0.0.1:16883 network=udp
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding.server.raft: entering follower state: follower="Node at 127.0.0.1:16888 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding.server: Adding LAN server: server="Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e (Addr: tcp/127.0.0.1:16888) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding.server: Handled event for server in area: event=member-join server=Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding: Started DNS server: address=127.0.0.1:16883 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding: Started HTTP server: address=127.0.0.1:16884 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.664Z [INFO]  TestConfig_Apply_Decoding: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.706Z [WARN]  TestConfig_Apply_Decoding.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.706Z [INFO]  TestConfig_Apply_Decoding.server.raft: entering candidate state: node="Node at 127.0.0.1:16888 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.710Z [DEBUG] TestConfig_Apply_Decoding.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.710Z [DEBUG] TestConfig_Apply_Decoding.server.raft: vote granted: from=e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.710Z [INFO]  TestConfig_Apply_Decoding.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.710Z [INFO]  TestConfig_Apply_Decoding.server.raft: entering leader state: leader="Node at 127.0.0.1:16888 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.710Z [INFO]  TestConfig_Apply_Decoding.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.710Z [INFO]  TestConfig_Apply_Decoding.server: New leader elected: payload=Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e
>     writer.go:29: 2020-02-23T02:46:47.718Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.728Z [INFO]  TestConfig_Apply_Decoding.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.728Z [INFO]  TestConfig_Apply_Decoding.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.728Z [DEBUG] TestConfig_Apply_Decoding.server: Skipping self join check for node since the cluster is too small: node=Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e
>     writer.go:29: 2020-02-23T02:46:47.728Z [INFO]  TestConfig_Apply_Decoding.server: member joined, marking health alive: member=Node-e0aefd43-87dd-69a2-ffff-e9b8e6c6a06e
>     --- PASS: TestConfig_Apply_Decoding/No_Kind (0.00s)
>     --- PASS: TestConfig_Apply_Decoding/Kind_Not_String (0.00s)
>     --- PASS: TestConfig_Apply_Decoding/Lowercase_kind (0.00s)
>     writer.go:29: 2020-02-23T02:46:47.766Z [INFO]  TestConfig_Apply_Decoding: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.766Z [INFO]  TestConfig_Apply_Decoding.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.766Z [DEBUG] TestConfig_Apply_Decoding.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.766Z [WARN]  TestConfig_Apply_Decoding.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.767Z [ERROR] TestConfig_Apply_Decoding.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:47.767Z [DEBUG] TestConfig_Apply_Decoding.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.768Z [WARN]  TestConfig_Apply_Decoding.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: consul server down
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: Stopping server: protocol=DNS address=127.0.0.1:16883 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: Stopping server: protocol=DNS address=127.0.0.1:16883 network=udp
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: Stopping server: protocol=HTTP address=127.0.0.1:16884 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.770Z [INFO]  TestConfig_Apply_Decoding: Endpoints down
> === CONT  TestConfig_Apply_CAS
> --- PASS: TestConnectCARoots_list (0.53s)
>     writer.go:29: 2020-02-23T02:46:47.490Z [WARN]  TestConnectCARoots_list: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.490Z [DEBUG] TestConnectCARoots_list.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.507Z [DEBUG] TestConnectCARoots_list.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.532Z [INFO]  TestConnectCARoots_list.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e2a64d0b-01aa-2526-f44c-97f1f1635880 Address:127.0.0.1:16870}]"
>     writer.go:29: 2020-02-23T02:46:47.532Z [INFO]  TestConnectCARoots_list.server.raft: entering follower state: follower="Node at 127.0.0.1:16870 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.532Z [INFO]  TestConnectCARoots_list.server.serf.wan: serf: EventMemberJoin: Node-e2a64d0b-01aa-2526-f44c-97f1f1635880.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.533Z [INFO]  TestConnectCARoots_list.server.serf.lan: serf: EventMemberJoin: Node-e2a64d0b-01aa-2526-f44c-97f1f1635880 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.533Z [INFO]  TestConnectCARoots_list.server: Adding LAN server: server="Node-e2a64d0b-01aa-2526-f44c-97f1f1635880 (Addr: tcp/127.0.0.1:16870) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.533Z [INFO]  TestConnectCARoots_list.server: Handled event for server in area: event=member-join server=Node-e2a64d0b-01aa-2526-f44c-97f1f1635880.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.533Z [INFO]  TestConnectCARoots_list: Started DNS server: address=127.0.0.1:16865 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.533Z [INFO]  TestConnectCARoots_list: Started DNS server: address=127.0.0.1:16865 network=udp
>     writer.go:29: 2020-02-23T02:46:47.534Z [INFO]  TestConnectCARoots_list: Started HTTP server: address=127.0.0.1:16866 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.534Z [INFO]  TestConnectCARoots_list: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.583Z [WARN]  TestConnectCARoots_list.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.583Z [INFO]  TestConnectCARoots_list.server.raft: entering candidate state: node="Node at 127.0.0.1:16870 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.589Z [DEBUG] TestConnectCARoots_list.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.589Z [DEBUG] TestConnectCARoots_list.server.raft: vote granted: from=e2a64d0b-01aa-2526-f44c-97f1f1635880 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.589Z [INFO]  TestConnectCARoots_list.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.589Z [INFO]  TestConnectCARoots_list.server.raft: entering leader state: leader="Node at 127.0.0.1:16870 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.589Z [INFO]  TestConnectCARoots_list.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.589Z [INFO]  TestConnectCARoots_list.server: New leader elected: payload=Node-e2a64d0b-01aa-2526-f44c-97f1f1635880
>     writer.go:29: 2020-02-23T02:46:47.597Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.604Z [INFO]  TestConnectCARoots_list.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.604Z [INFO]  TestConnectCARoots_list.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.604Z [DEBUG] TestConnectCARoots_list.server: Skipping self join check for node since the cluster is too small: node=Node-e2a64d0b-01aa-2526-f44c-97f1f1635880
>     writer.go:29: 2020-02-23T02:46:47.605Z [INFO]  TestConnectCARoots_list.server: member joined, marking health alive: member=Node-e2a64d0b-01aa-2526-f44c-97f1f1635880
>     writer.go:29: 2020-02-23T02:46:47.910Z [DEBUG] TestConnectCARoots_list: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.925Z [INFO]  TestConnectCARoots_list: Synced node info
>     writer.go:29: 2020-02-23T02:46:47.925Z [DEBUG] TestConnectCARoots_list: Node info in sync
>     writer.go:29: 2020-02-23T02:46:47.951Z [DEBUG] connect.ca.consul: consul CA provider configured: id=0a:f0:36:b3:ee:bd:eb:10:b9:f1:8f:9d:02:cb:f3:de:75:e3:95:ba:b5:12:c1:a5:f0:8c:53:e0:a9:db:ef:bb is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.964Z [INFO]  TestConnectCARoots_list.server.connect: CA rotated to new root under provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.965Z [INFO]  TestConnectCARoots_list: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:47.965Z [INFO]  TestConnectCARoots_list.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:47.965Z [DEBUG] TestConnectCARoots_list.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.965Z [WARN]  TestConnectCARoots_list.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.965Z [DEBUG] TestConnectCARoots_list.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.966Z [WARN]  TestConnectCARoots_list.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: consul server down
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: shutdown complete
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: Stopping server: protocol=DNS address=127.0.0.1:16865 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: Stopping server: protocol=DNS address=127.0.0.1:16865 network=udp
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: Stopping server: protocol=HTTP address=127.0.0.1:16866 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:47.968Z [INFO]  TestConnectCARoots_list: Endpoints down
> === CONT  TestConfig_Apply_ProxyDefaultsMeshGateway
> === RUN   TestConnectCAConfig/force_without_cross_sign_CamelCase
> --- PASS: TestConfig_Apply_ProxyDefaultsExpose (0.65s)
>     writer.go:29: 2020-02-23T02:46:47.586Z [WARN]  TestConfig_Apply_ProxyDefaultsExpose: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.587Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.587Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.600Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fb16a32c-e42e-280b-21e4-15043cb6271b Address:127.0.0.1:16882}]"
>     writer.go:29: 2020-02-23T02:46:47.600Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.raft: entering follower state: follower="Node at 127.0.0.1:16882 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.601Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.serf.wan: serf: EventMemberJoin: Node-fb16a32c-e42e-280b-21e4-15043cb6271b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.601Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.serf.lan: serf: EventMemberJoin: Node-fb16a32c-e42e-280b-21e4-15043cb6271b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.602Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server: Handled event for server in area: event=member-join server=Node-fb16a32c-e42e-280b-21e4-15043cb6271b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.602Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server: Adding LAN server: server="Node-fb16a32c-e42e-280b-21e4-15043cb6271b (Addr: tcp/127.0.0.1:16882) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.602Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Started DNS server: address=127.0.0.1:16877 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.602Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Started DNS server: address=127.0.0.1:16877 network=udp
>     writer.go:29: 2020-02-23T02:46:47.603Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Started HTTP server: address=127.0.0.1:16878 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.603Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.668Z [WARN]  TestConfig_Apply_ProxyDefaultsExpose.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.668Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.raft: entering candidate state: node="Node at 127.0.0.1:16882 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.671Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.671Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.server.raft: vote granted: from=fb16a32c-e42e-280b-21e4-15043cb6271b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.671Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.671Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.raft: entering leader state: leader="Node at 127.0.0.1:16882 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.671Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.671Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server: New leader elected: payload=Node-fb16a32c-e42e-280b-21e4-15043cb6271b
>     writer.go:29: 2020-02-23T02:46:47.686Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.694Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.694Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.694Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.server: Skipping self join check for node since the cluster is too small: node=Node-fb16a32c-e42e-280b-21e4-15043cb6271b
>     writer.go:29: 2020-02-23T02:46:47.694Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server: member joined, marking health alive: member=Node-fb16a32c-e42e-280b-21e4-15043cb6271b
>     writer.go:29: 2020-02-23T02:46:47.930Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:47.933Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Synced node info
>     writer.go:29: 2020-02-23T02:46:48.089Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.089Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.089Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.090Z [WARN]  TestConfig_Apply_ProxyDefaultsExpose.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.090Z [DEBUG] TestConfig_Apply_ProxyDefaultsExpose.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.210Z [WARN]  TestConfig_Apply_ProxyDefaultsExpose.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: consul server down
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Stopping server: protocol=DNS address=127.0.0.1:16877 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Stopping server: protocol=DNS address=127.0.0.1:16877 network=udp
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Stopping server: protocol=HTTP address=127.0.0.1:16878 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.227Z [INFO]  TestConfig_Apply_ProxyDefaultsExpose: Endpoints down
> === CONT  TestConfig_Apply
> --- PASS: TestConfig_Apply_CAS (0.46s)
>     writer.go:29: 2020-02-23T02:46:47.779Z [WARN]  TestConfig_Apply_CAS: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.779Z [DEBUG] TestConfig_Apply_CAS.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.780Z [DEBUG] TestConfig_Apply_CAS.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.791Z [INFO]  TestConfig_Apply_CAS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d7e949b7-ec71-89d7-d94a-1a86507728ac Address:127.0.0.1:16900}]"
>     writer.go:29: 2020-02-23T02:46:47.791Z [INFO]  TestConfig_Apply_CAS.server.serf.wan: serf: EventMemberJoin: Node-d7e949b7-ec71-89d7-d94a-1a86507728ac.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.791Z [INFO]  TestConfig_Apply_CAS.server.serf.lan: serf: EventMemberJoin: Node-d7e949b7-ec71-89d7-d94a-1a86507728ac 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS: Started DNS server: address=127.0.0.1:16895 network=udp
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS.server.raft: entering follower state: follower="Node at 127.0.0.1:16900 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS.server: Adding LAN server: server="Node-d7e949b7-ec71-89d7-d94a-1a86507728ac (Addr: tcp/127.0.0.1:16900) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS.server: Handled event for server in area: event=member-join server=Node-d7e949b7-ec71-89d7-d94a-1a86507728ac.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS: Started DNS server: address=127.0.0.1:16895 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS: Started HTTP server: address=127.0.0.1:16896 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.792Z [INFO]  TestConfig_Apply_CAS: started state syncer
>     writer.go:29: 2020-02-23T02:46:47.837Z [WARN]  TestConfig_Apply_CAS.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:47.837Z [INFO]  TestConfig_Apply_CAS.server.raft: entering candidate state: node="Node at 127.0.0.1:16900 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:47.924Z [DEBUG] TestConfig_Apply_CAS.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:47.924Z [DEBUG] TestConfig_Apply_CAS.server.raft: vote granted: from=d7e949b7-ec71-89d7-d94a-1a86507728ac term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:47.924Z [INFO]  TestConfig_Apply_CAS.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:47.924Z [INFO]  TestConfig_Apply_CAS.server.raft: entering leader state: leader="Node at 127.0.0.1:16900 [Leader]"
>     writer.go:29: 2020-02-23T02:46:47.924Z [INFO]  TestConfig_Apply_CAS.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:47.924Z [INFO]  TestConfig_Apply_CAS.server: New leader elected: payload=Node-d7e949b7-ec71-89d7-d94a-1a86507728ac
>     writer.go:29: 2020-02-23T02:46:47.939Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:47.947Z [INFO]  TestConfig_Apply_CAS.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:47.947Z [INFO]  TestConfig_Apply_CAS.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:47.947Z [DEBUG] TestConfig_Apply_CAS.server: Skipping self join check for node since the cluster is too small: node=Node-d7e949b7-ec71-89d7-d94a-1a86507728ac
>     writer.go:29: 2020-02-23T02:46:47.947Z [INFO]  TestConfig_Apply_CAS.server: member joined, marking health alive: member=Node-d7e949b7-ec71-89d7-d94a-1a86507728ac
>     writer.go:29: 2020-02-23T02:46:48.018Z [DEBUG] TestConfig_Apply_CAS: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:48.089Z [INFO]  TestConfig_Apply_CAS: Synced node info
>     writer.go:29: 2020-02-23T02:46:48.229Z [INFO]  TestConfig_Apply_CAS: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.229Z [INFO]  TestConfig_Apply_CAS.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.229Z [DEBUG] TestConfig_Apply_CAS.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.229Z [WARN]  TestConfig_Apply_CAS.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.229Z [DEBUG] TestConfig_Apply_CAS.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.231Z [WARN]  TestConfig_Apply_CAS.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: consul server down
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: Stopping server: protocol=DNS address=127.0.0.1:16895 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: Stopping server: protocol=DNS address=127.0.0.1:16895 network=udp
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: Stopping server: protocol=HTTP address=127.0.0.1:16896 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.232Z [INFO]  TestConfig_Apply_CAS: Endpoints down
> === CONT  TestConfig_Delete
> --- PASS: TestConfig_Apply_ProxyDefaultsMeshGateway (0.32s)
>     writer.go:29: 2020-02-23T02:46:47.974Z [WARN]  TestConfig_Apply_ProxyDefaultsMeshGateway: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:47.974Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:47.975Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:47.984Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d86bc2d0-a83f-0028-bd11-58d204842524 Address:127.0.0.1:16906}]"
>     writer.go:29: 2020-02-23T02:46:47.985Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: entering follower state: follower="Node at 127.0.0.1:16906 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:47.985Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.serf.wan: serf: EventMemberJoin: Node-d86bc2d0-a83f-0028-bd11-58d204842524.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.986Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.serf.lan: serf: EventMemberJoin: Node-d86bc2d0-a83f-0028-bd11-58d204842524 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:47.986Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server: Handled event for server in area: event=member-join server=Node-d86bc2d0-a83f-0028-bd11-58d204842524.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:47.986Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server: Adding LAN server: server="Node-d86bc2d0-a83f-0028-bd11-58d204842524 (Addr: tcp/127.0.0.1:16906) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:47.986Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Started DNS server: address=127.0.0.1:16901 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.986Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Started DNS server: address=127.0.0.1:16901 network=udp
>     writer.go:29: 2020-02-23T02:46:47.987Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Started HTTP server: address=127.0.0.1:16902 network=tcp
>     writer.go:29: 2020-02-23T02:46:47.987Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.049Z [WARN]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.049Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: entering candidate state: node="Node at 127.0.0.1:16906 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.126Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.126Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: vote granted: from=d86bc2d0-a83f-0028-bd11-58d204842524 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.126Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.126Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.raft: entering leader state: leader="Node at 127.0.0.1:16906 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.126Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.126Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server: New leader elected: payload=Node-d86bc2d0-a83f-0028-bd11-58d204842524
>     writer.go:29: 2020-02-23T02:46:48.190Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Synced node info
>     writer.go:29: 2020-02-23T02:46:48.190Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway: Node info in sync
>     writer.go:29: 2020-02-23T02:46:48.230Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.238Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.238Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.239Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.server: Skipping self join check for node since the cluster is too small: node=Node-d86bc2d0-a83f-0028-bd11-58d204842524
>     writer.go:29: 2020-02-23T02:46:48.239Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server: member joined, marking health alive: member=Node-d86bc2d0-a83f-0028-bd11-58d204842524
>     writer.go:29: 2020-02-23T02:46:48.277Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.277Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.277Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.277Z [WARN]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.277Z [DEBUG] TestConfig_Apply_ProxyDefaultsMeshGateway.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.280Z [WARN]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: consul server down
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Stopping server: protocol=DNS address=127.0.0.1:16901 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Stopping server: protocol=DNS address=127.0.0.1:16901 network=udp
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Stopping server: protocol=HTTP address=127.0.0.1:16902 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.283Z [INFO]  TestConfig_Apply_ProxyDefaultsMeshGateway: Endpoints down
> === CONT  TestConfig_Get
> --- PASS: TestConfig_Apply (0.14s)
>     writer.go:29: 2020-02-23T02:46:48.245Z [WARN]  TestConfig_Apply: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.246Z [DEBUG] TestConfig_Apply.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.246Z [DEBUG] TestConfig_Apply.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.268Z [INFO]  TestConfig_Apply.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ed3140e2-9a64-252e-8fb3-06c19ccc69bd Address:127.0.0.1:16918}]"
>     writer.go:29: 2020-02-23T02:46:48.268Z [INFO]  TestConfig_Apply.server.serf.wan: serf: EventMemberJoin: Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Apply.server.serf.lan: serf: EventMemberJoin: Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Apply.server.raft: entering follower state: follower="Node at 127.0.0.1:16918 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Apply.server: Adding LAN server: server="Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd (Addr: tcp/127.0.0.1:16918) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Apply.server: Handled event for server in area: event=member-join server=Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Apply: Started DNS server: address=127.0.0.1:16913 network=udp
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Apply: Started DNS server: address=127.0.0.1:16913 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.270Z [INFO]  TestConfig_Apply: Started HTTP server: address=127.0.0.1:16914 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.270Z [INFO]  TestConfig_Apply: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.322Z [WARN]  TestConfig_Apply.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.322Z [INFO]  TestConfig_Apply.server.raft: entering candidate state: node="Node at 127.0.0.1:16918 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.328Z [DEBUG] TestConfig_Apply.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.328Z [DEBUG] TestConfig_Apply.server.raft: vote granted: from=ed3140e2-9a64-252e-8fb3-06c19ccc69bd term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.328Z [INFO]  TestConfig_Apply.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.328Z [INFO]  TestConfig_Apply.server.raft: entering leader state: leader="Node at 127.0.0.1:16918 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.328Z [INFO]  TestConfig_Apply.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.328Z [INFO]  TestConfig_Apply.server: New leader elected: payload=Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd
>     writer.go:29: 2020-02-23T02:46:48.338Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.347Z [INFO]  TestConfig_Apply.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.347Z [INFO]  TestConfig_Apply.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.347Z [DEBUG] TestConfig_Apply.server: Skipping self join check for node since the cluster is too small: node=Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd
>     writer.go:29: 2020-02-23T02:46:48.347Z [INFO]  TestConfig_Apply.server: member joined, marking health alive: member=Node-ed3140e2-9a64-252e-8fb3-06c19ccc69bd
>     writer.go:29: 2020-02-23T02:46:48.359Z [INFO]  TestConfig_Apply: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.359Z [INFO]  TestConfig_Apply.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.359Z [DEBUG] TestConfig_Apply.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.359Z [WARN]  TestConfig_Apply.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.359Z [ERROR] TestConfig_Apply.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:48.359Z [DEBUG] TestConfig_Apply.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.361Z [WARN]  TestConfig_Apply.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: consul server down
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: Stopping server: protocol=DNS address=127.0.0.1:16913 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: Stopping server: protocol=DNS address=127.0.0.1:16913 network=udp
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: Stopping server: protocol=HTTP address=127.0.0.1:16914 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.363Z [INFO]  TestConfig_Apply: Endpoints down
> === CONT  TestCatalogNodeServices_ConnectProxy
> --- PASS: TestConfig_Delete (0.36s)
>     writer.go:29: 2020-02-23T02:46:48.240Z [WARN]  TestConfig_Delete: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.240Z [DEBUG] TestConfig_Delete.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.240Z [DEBUG] TestConfig_Delete.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.268Z [INFO]  TestConfig_Delete.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5cf158df-9529-069d-d951-f46970c314e6 Address:127.0.0.1:16924}]"
>     writer.go:29: 2020-02-23T02:46:48.268Z [INFO]  TestConfig_Delete.server.raft: entering follower state: follower="Node at 127.0.0.1:16924 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.268Z [INFO]  TestConfig_Delete.server.serf.wan: serf: EventMemberJoin: Node-5cf158df-9529-069d-d951-f46970c314e6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Delete.server.serf.lan: serf: EventMemberJoin: Node-5cf158df-9529-069d-d951-f46970c314e6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Delete: Started DNS server: address=127.0.0.1:16919 network=udp
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Delete.server: Adding LAN server: server="Node-5cf158df-9529-069d-d951-f46970c314e6 (Addr: tcp/127.0.0.1:16924) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Delete.server: Handled event for server in area: event=member-join server=Node-5cf158df-9529-069d-d951-f46970c314e6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.269Z [INFO]  TestConfig_Delete: Started DNS server: address=127.0.0.1:16919 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.270Z [INFO]  TestConfig_Delete: Started HTTP server: address=127.0.0.1:16920 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.270Z [INFO]  TestConfig_Delete: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.325Z [WARN]  TestConfig_Delete.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.325Z [INFO]  TestConfig_Delete.server.raft: entering candidate state: node="Node at 127.0.0.1:16924 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.330Z [DEBUG] TestConfig_Delete.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.330Z [DEBUG] TestConfig_Delete.server.raft: vote granted: from=5cf158df-9529-069d-d951-f46970c314e6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.330Z [INFO]  TestConfig_Delete.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.330Z [INFO]  TestConfig_Delete.server.raft: entering leader state: leader="Node at 127.0.0.1:16924 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.330Z [INFO]  TestConfig_Delete.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.330Z [INFO]  TestConfig_Delete.server: New leader elected: payload=Node-5cf158df-9529-069d-d951-f46970c314e6
>     writer.go:29: 2020-02-23T02:46:48.340Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.350Z [INFO]  TestConfig_Delete.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.350Z [INFO]  TestConfig_Delete.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.350Z [DEBUG] TestConfig_Delete.server: Skipping self join check for node since the cluster is too small: node=Node-5cf158df-9529-069d-d951-f46970c314e6
>     writer.go:29: 2020-02-23T02:46:48.350Z [INFO]  TestConfig_Delete.server: member joined, marking health alive: member=Node-5cf158df-9529-069d-d951-f46970c314e6
>     writer.go:29: 2020-02-23T02:46:48.545Z [INFO]  TestConfig_Delete: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.545Z [INFO]  TestConfig_Delete.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.545Z [DEBUG] TestConfig_Delete.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.545Z [WARN]  TestConfig_Delete.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.545Z [ERROR] TestConfig_Delete.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:48.545Z [DEBUG] TestConfig_Delete.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.571Z [WARN]  TestConfig_Delete.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: consul server down
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: Stopping server: protocol=DNS address=127.0.0.1:16919 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: Stopping server: protocol=DNS address=127.0.0.1:16919 network=udp
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: Stopping server: protocol=HTTP address=127.0.0.1:16920 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.591Z [INFO]  TestConfig_Delete: Endpoints down
> === CONT  TestCatalogNodeServices_Filter
> === RUN   TestConfig_Get/get_a_single_service_entry
> === RUN   TestConfig_Get/list_both_service_entries
> === RUN   TestConfig_Get/get_global_proxy_config
> === RUN   TestConfig_Get/error_on_no_arguments
> --- PASS: TestConfig_Get (0.38s)
>     writer.go:29: 2020-02-23T02:46:48.291Z [WARN]  TestConfig_Get: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.291Z [DEBUG] TestConfig_Get.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.291Z [DEBUG] TestConfig_Get.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.307Z [INFO]  TestConfig_Get.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f3ad6cbe-4b20-a130-0dcd-a61532024750 Address:127.0.0.1:16930}]"
>     writer.go:29: 2020-02-23T02:46:48.307Z [INFO]  TestConfig_Get.server.raft: entering follower state: follower="Node at 127.0.0.1:16930 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.307Z [INFO]  TestConfig_Get.server.serf.wan: serf: EventMemberJoin: Node-f3ad6cbe-4b20-a130-0dcd-a61532024750.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.308Z [INFO]  TestConfig_Get.server.serf.lan: serf: EventMemberJoin: Node-f3ad6cbe-4b20-a130-0dcd-a61532024750 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.308Z [INFO]  TestConfig_Get.server: Handled event for server in area: event=member-join server=Node-f3ad6cbe-4b20-a130-0dcd-a61532024750.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.308Z [INFO]  TestConfig_Get.server: Adding LAN server: server="Node-f3ad6cbe-4b20-a130-0dcd-a61532024750 (Addr: tcp/127.0.0.1:16930) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.308Z [INFO]  TestConfig_Get: Started DNS server: address=127.0.0.1:16925 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.308Z [INFO]  TestConfig_Get: Started DNS server: address=127.0.0.1:16925 network=udp
>     writer.go:29: 2020-02-23T02:46:48.309Z [INFO]  TestConfig_Get: Started HTTP server: address=127.0.0.1:16926 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.309Z [INFO]  TestConfig_Get: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.351Z [WARN]  TestConfig_Get.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.351Z [INFO]  TestConfig_Get.server.raft: entering candidate state: node="Node at 127.0.0.1:16930 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.355Z [DEBUG] TestConfig_Get.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.355Z [DEBUG] TestConfig_Get.server.raft: vote granted: from=f3ad6cbe-4b20-a130-0dcd-a61532024750 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.355Z [INFO]  TestConfig_Get.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.355Z [INFO]  TestConfig_Get.server.raft: entering leader state: leader="Node at 127.0.0.1:16930 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.355Z [INFO]  TestConfig_Get.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.355Z [INFO]  TestConfig_Get.server: New leader elected: payload=Node-f3ad6cbe-4b20-a130-0dcd-a61532024750
>     writer.go:29: 2020-02-23T02:46:48.372Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.382Z [INFO]  TestConfig_Get.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.382Z [INFO]  TestConfig_Get.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.383Z [DEBUG] TestConfig_Get.server: Skipping self join check for node since the cluster is too small: node=Node-f3ad6cbe-4b20-a130-0dcd-a61532024750
>     writer.go:29: 2020-02-23T02:46:48.383Z [INFO]  TestConfig_Get.server: member joined, marking health alive: member=Node-f3ad6cbe-4b20-a130-0dcd-a61532024750
>     writer.go:29: 2020-02-23T02:46:48.423Z [DEBUG] TestConfig_Get: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:48.469Z [INFO]  TestConfig_Get: Synced node info
>     --- PASS: TestConfig_Get/get_a_single_service_entry (0.00s)
>     --- PASS: TestConfig_Get/list_both_service_entries (0.00s)
>     --- PASS: TestConfig_Get/get_global_proxy_config (0.00s)
>     --- PASS: TestConfig_Get/error_on_no_arguments (0.00s)
>     writer.go:29: 2020-02-23T02:46:48.652Z [INFO]  TestConfig_Get: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.652Z [INFO]  TestConfig_Get.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.652Z [DEBUG] TestConfig_Get.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.652Z [WARN]  TestConfig_Get.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.652Z [DEBUG] TestConfig_Get.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.665Z [WARN]  TestConfig_Get.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.667Z [INFO]  TestConfig_Get.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: consul server down
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: Stopping server: protocol=DNS address=127.0.0.1:16925 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: Stopping server: protocol=DNS address=127.0.0.1:16925 network=udp
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: Stopping server: protocol=HTTP address=127.0.0.1:16926 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.668Z [INFO]  TestConfig_Get: Endpoints down
> === CONT  TestCatalogNodeServiceList
> === RUN   TestConnectCAConfig/force_without_cross_sign_snake_case
> --- PASS: TestCatalogNodeServices_Filter (0.28s)
>     writer.go:29: 2020-02-23T02:46:48.596Z [WARN]  TestCatalogNodeServices_Filter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.596Z [DEBUG] TestCatalogNodeServices_Filter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.597Z [DEBUG] TestCatalogNodeServices_Filter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestCatalogNodeServices_Filter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5 Address:127.0.0.1:16942}]"
>     writer.go:29: 2020-02-23T02:46:48.672Z [INFO]  TestCatalogNodeServices_Filter.server.serf.wan: serf: EventMemberJoin: Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.672Z [INFO]  TestCatalogNodeServices_Filter.server.serf.lan: serf: EventMemberJoin: Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.672Z [INFO]  TestCatalogNodeServices_Filter: Started DNS server: address=127.0.0.1:16937 network=udp
>     writer.go:29: 2020-02-23T02:46:48.672Z [INFO]  TestCatalogNodeServices_Filter.server.raft: entering follower state: follower="Node at 127.0.0.1:16942 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.672Z [INFO]  TestCatalogNodeServices_Filter.server: Adding LAN server: server="Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5 (Addr: tcp/127.0.0.1:16942) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.673Z [INFO]  TestCatalogNodeServices_Filter.server: Handled event for server in area: event=member-join server=Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.673Z [INFO]  TestCatalogNodeServices_Filter: Started DNS server: address=127.0.0.1:16937 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.673Z [INFO]  TestCatalogNodeServices_Filter: Started HTTP server: address=127.0.0.1:16938 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.673Z [INFO]  TestCatalogNodeServices_Filter: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.726Z [WARN]  TestCatalogNodeServices_Filter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.726Z [INFO]  TestCatalogNodeServices_Filter.server.raft: entering candidate state: node="Node at 127.0.0.1:16942 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.729Z [DEBUG] TestCatalogNodeServices_Filter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.729Z [DEBUG] TestCatalogNodeServices_Filter.server.raft: vote granted: from=f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.729Z [INFO]  TestCatalogNodeServices_Filter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.729Z [INFO]  TestCatalogNodeServices_Filter.server.raft: entering leader state: leader="Node at 127.0.0.1:16942 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.731Z [INFO]  TestCatalogNodeServices_Filter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.731Z [INFO]  TestCatalogNodeServices_Filter.server: New leader elected: payload=Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5
>     writer.go:29: 2020-02-23T02:46:48.738Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.748Z [INFO]  TestCatalogNodeServices_Filter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.748Z [INFO]  TestCatalogNodeServices_Filter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.748Z [DEBUG] TestCatalogNodeServices_Filter.server: Skipping self join check for node since the cluster is too small: node=Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5
>     writer.go:29: 2020-02-23T02:46:48.748Z [INFO]  TestCatalogNodeServices_Filter.server: member joined, marking health alive: member=Node-f11181f3-3e0a-f73a-6ded-2c33b4e1a3b5
>     writer.go:29: 2020-02-23T02:46:48.824Z [INFO]  TestCatalogNodeServices_Filter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.824Z [INFO]  TestCatalogNodeServices_Filter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.824Z [DEBUG] TestCatalogNodeServices_Filter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.824Z [WARN]  TestCatalogNodeServices_Filter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.824Z [DEBUG] TestCatalogNodeServices_Filter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.824Z [ERROR] TestCatalogNodeServices_Filter.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:48.845Z [WARN]  TestCatalogNodeServices_Filter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: consul server down
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: Stopping server: protocol=DNS address=127.0.0.1:16937 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: Stopping server: protocol=DNS address=127.0.0.1:16937 network=udp
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: Stopping server: protocol=HTTP address=127.0.0.1:16938 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.869Z [INFO]  TestCatalogNodeServices_Filter: Endpoints down
> === CONT  TestCatalogNodeServices
> --- PASS: TestCatalogNodeServiceList (0.25s)
>     writer.go:29: 2020-02-23T02:46:48.679Z [WARN]  TestCatalogNodeServiceList: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.679Z [DEBUG] TestCatalogNodeServiceList.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.679Z [DEBUG] TestCatalogNodeServiceList.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.691Z [INFO]  TestCatalogNodeServiceList.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:774f7322-7472-95ed-b656-d3ac0ccf7471 Address:127.0.0.1:16948}]"
>     writer.go:29: 2020-02-23T02:46:48.691Z [INFO]  TestCatalogNodeServiceList.server.raft: entering follower state: follower="Node at 127.0.0.1:16948 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.692Z [INFO]  TestCatalogNodeServiceList.server.serf.wan: serf: EventMemberJoin: Node-774f7322-7472-95ed-b656-d3ac0ccf7471.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.692Z [INFO]  TestCatalogNodeServiceList.server.serf.lan: serf: EventMemberJoin: Node-774f7322-7472-95ed-b656-d3ac0ccf7471 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.693Z [INFO]  TestCatalogNodeServiceList.server: Handled event for server in area: event=member-join server=Node-774f7322-7472-95ed-b656-d3ac0ccf7471.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.693Z [INFO]  TestCatalogNodeServiceList.server: Adding LAN server: server="Node-774f7322-7472-95ed-b656-d3ac0ccf7471 (Addr: tcp/127.0.0.1:16948) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.693Z [INFO]  TestCatalogNodeServiceList: Started DNS server: address=127.0.0.1:16943 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.693Z [INFO]  TestCatalogNodeServiceList: Started DNS server: address=127.0.0.1:16943 network=udp
>     writer.go:29: 2020-02-23T02:46:48.693Z [INFO]  TestCatalogNodeServiceList: Started HTTP server: address=127.0.0.1:16944 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.693Z [INFO]  TestCatalogNodeServiceList: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.730Z [WARN]  TestCatalogNodeServiceList.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.730Z [INFO]  TestCatalogNodeServiceList.server.raft: entering candidate state: node="Node at 127.0.0.1:16948 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.733Z [DEBUG] TestCatalogNodeServiceList.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.733Z [DEBUG] TestCatalogNodeServiceList.server.raft: vote granted: from=774f7322-7472-95ed-b656-d3ac0ccf7471 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.733Z [INFO]  TestCatalogNodeServiceList.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.733Z [INFO]  TestCatalogNodeServiceList.server.raft: entering leader state: leader="Node at 127.0.0.1:16948 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.733Z [INFO]  TestCatalogNodeServiceList.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.733Z [INFO]  TestCatalogNodeServiceList.server: New leader elected: payload=Node-774f7322-7472-95ed-b656-d3ac0ccf7471
>     writer.go:29: 2020-02-23T02:46:48.744Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.759Z [INFO]  TestCatalogNodeServiceList.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.759Z [INFO]  TestCatalogNodeServiceList.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.759Z [DEBUG] TestCatalogNodeServiceList.server: Skipping self join check for node since the cluster is too small: node=Node-774f7322-7472-95ed-b656-d3ac0ccf7471
>     writer.go:29: 2020-02-23T02:46:48.759Z [INFO]  TestCatalogNodeServiceList.server: member joined, marking health alive: member=Node-774f7322-7472-95ed-b656-d3ac0ccf7471
>     writer.go:29: 2020-02-23T02:46:48.884Z [INFO]  TestCatalogNodeServiceList: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.884Z [INFO]  TestCatalogNodeServiceList.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.884Z [DEBUG] TestCatalogNodeServiceList.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.884Z [WARN]  TestCatalogNodeServiceList.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.884Z [ERROR] TestCatalogNodeServiceList.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:48.884Z [DEBUG] TestCatalogNodeServiceList.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.910Z [WARN]  TestCatalogNodeServiceList.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: consul server down
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: Stopping server: protocol=DNS address=127.0.0.1:16943 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: Stopping server: protocol=DNS address=127.0.0.1:16943 network=udp
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: Stopping server: protocol=HTTP address=127.0.0.1:16944 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.914Z [INFO]  TestCatalogNodeServiceList: Endpoints down
> === CONT  TestCatalogConnectServiceNodes_good
> --- PASS: TestCatalogNodeServices_ConnectProxy (0.56s)
>     writer.go:29: 2020-02-23T02:46:48.392Z [WARN]  TestCatalogNodeServices_ConnectProxy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.393Z [DEBUG] TestCatalogNodeServices_ConnectProxy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.398Z [DEBUG] TestCatalogNodeServices_ConnectProxy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.417Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5cba664b-5184-546a-8cf9-36a49d899fc4 Address:127.0.0.1:16936}]"
>     writer.go:29: 2020-02-23T02:46:48.418Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.serf.wan: serf: EventMemberJoin: Node-5cba664b-5184-546a-8cf9-36a49d899fc4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.418Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.serf.lan: serf: EventMemberJoin: Node-5cba664b-5184-546a-8cf9-36a49d899fc4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.418Z [INFO]  TestCatalogNodeServices_ConnectProxy: Started DNS server: address=127.0.0.1:16931 network=udp
>     writer.go:29: 2020-02-23T02:46:48.418Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.raft: entering follower state: follower="Node at 127.0.0.1:16936 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.418Z [INFO]  TestCatalogNodeServices_ConnectProxy.server: Adding LAN server: server="Node-5cba664b-5184-546a-8cf9-36a49d899fc4 (Addr: tcp/127.0.0.1:16936) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.418Z [INFO]  TestCatalogNodeServices_ConnectProxy.server: Handled event for server in area: event=member-join server=Node-5cba664b-5184-546a-8cf9-36a49d899fc4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.419Z [INFO]  TestCatalogNodeServices_ConnectProxy: Started DNS server: address=127.0.0.1:16931 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.419Z [INFO]  TestCatalogNodeServices_ConnectProxy: Started HTTP server: address=127.0.0.1:16932 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.419Z [INFO]  TestCatalogNodeServices_ConnectProxy: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.465Z [WARN]  TestCatalogNodeServices_ConnectProxy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.465Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.raft: entering candidate state: node="Node at 127.0.0.1:16936 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.530Z [DEBUG] TestCatalogNodeServices_ConnectProxy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.530Z [DEBUG] TestCatalogNodeServices_ConnectProxy.server.raft: vote granted: from=5cba664b-5184-546a-8cf9-36a49d899fc4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.530Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.530Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.raft: entering leader state: leader="Node at 127.0.0.1:16936 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.531Z [INFO]  TestCatalogNodeServices_ConnectProxy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.531Z [INFO]  TestCatalogNodeServices_ConnectProxy.server: New leader elected: payload=Node-5cba664b-5184-546a-8cf9-36a49d899fc4
>     writer.go:29: 2020-02-23T02:46:48.651Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:48.675Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:48.675Z [INFO]  TestCatalogNodeServices_ConnectProxy.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.675Z [DEBUG] TestCatalogNodeServices_ConnectProxy.server: Skipping self join check for node since the cluster is too small: node=Node-5cba664b-5184-546a-8cf9-36a49d899fc4
>     writer.go:29: 2020-02-23T02:46:48.675Z [INFO]  TestCatalogNodeServices_ConnectProxy.server: member joined, marking health alive: member=Node-5cba664b-5184-546a-8cf9-36a49d899fc4
>     writer.go:29: 2020-02-23T02:46:48.710Z [DEBUG] TestCatalogNodeServices_ConnectProxy: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:48.715Z [INFO]  TestCatalogNodeServices_ConnectProxy: Synced node info
>     writer.go:29: 2020-02-23T02:46:48.895Z [INFO]  TestCatalogNodeServices_ConnectProxy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:48.895Z [INFO]  TestCatalogNodeServices_ConnectProxy.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:48.895Z [DEBUG] TestCatalogNodeServices_ConnectProxy.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.895Z [WARN]  TestCatalogNodeServices_ConnectProxy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.895Z [DEBUG] TestCatalogNodeServices_ConnectProxy.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:48.913Z [WARN]  TestCatalogNodeServices_ConnectProxy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: consul server down
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: shutdown complete
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: Stopping server: protocol=DNS address=127.0.0.1:16931 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: Stopping server: protocol=DNS address=127.0.0.1:16931 network=udp
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: Stopping server: protocol=HTTP address=127.0.0.1:16932 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:48.920Z [INFO]  TestCatalogNodeServices_ConnectProxy: Endpoints down
> === CONT  TestCatalogServiceNodes_ConnectProxy
> === RUN   TestConnectCAConfig/setting_state_fails
> --- PASS: TestCatalogConnectServiceNodes_good (0.21s)
>     writer.go:29: 2020-02-23T02:46:48.933Z [WARN]  TestCatalogConnectServiceNodes_good: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.933Z [DEBUG] TestCatalogConnectServiceNodes_good.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.934Z [DEBUG] TestCatalogConnectServiceNodes_good.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.956Z [INFO]  TestCatalogConnectServiceNodes_good.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:871b21e8-6a55-6e8c-7aee-692496a7502a Address:127.0.0.1:16972}]"
>     writer.go:29: 2020-02-23T02:46:48.957Z [INFO]  TestCatalogConnectServiceNodes_good.server.serf.wan: serf: EventMemberJoin: Node-871b21e8-6a55-6e8c-7aee-692496a7502a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.957Z [INFO]  TestCatalogConnectServiceNodes_good.server.serf.lan: serf: EventMemberJoin: Node-871b21e8-6a55-6e8c-7aee-692496a7502a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.957Z [INFO]  TestCatalogConnectServiceNodes_good: Started DNS server: address=127.0.0.1:16967 network=udp
>     writer.go:29: 2020-02-23T02:46:48.957Z [INFO]  TestCatalogConnectServiceNodes_good.server.raft: entering follower state: follower="Node at 127.0.0.1:16972 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.958Z [INFO]  TestCatalogConnectServiceNodes_good.server: Adding LAN server: server="Node-871b21e8-6a55-6e8c-7aee-692496a7502a (Addr: tcp/127.0.0.1:16972) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.958Z [INFO]  TestCatalogConnectServiceNodes_good.server: Handled event for server in area: event=member-join server=Node-871b21e8-6a55-6e8c-7aee-692496a7502a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.958Z [INFO]  TestCatalogConnectServiceNodes_good: Started DNS server: address=127.0.0.1:16967 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.958Z [INFO]  TestCatalogConnectServiceNodes_good: Started HTTP server: address=127.0.0.1:16968 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.958Z [INFO]  TestCatalogConnectServiceNodes_good: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.014Z [WARN]  TestCatalogConnectServiceNodes_good.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.014Z [INFO]  TestCatalogConnectServiceNodes_good.server.raft: entering candidate state: node="Node at 127.0.0.1:16972 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.019Z [DEBUG] TestCatalogConnectServiceNodes_good.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.019Z [DEBUG] TestCatalogConnectServiceNodes_good.server.raft: vote granted: from=871b21e8-6a55-6e8c-7aee-692496a7502a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.019Z [INFO]  TestCatalogConnectServiceNodes_good.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.019Z [INFO]  TestCatalogConnectServiceNodes_good.server.raft: entering leader state: leader="Node at 127.0.0.1:16972 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.019Z [INFO]  TestCatalogConnectServiceNodes_good.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.020Z [INFO]  TestCatalogConnectServiceNodes_good.server: New leader elected: payload=Node-871b21e8-6a55-6e8c-7aee-692496a7502a
>     writer.go:29: 2020-02-23T02:46:49.028Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.037Z [INFO]  TestCatalogConnectServiceNodes_good.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.037Z [INFO]  TestCatalogConnectServiceNodes_good.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.037Z [DEBUG] TestCatalogConnectServiceNodes_good.server: Skipping self join check for node since the cluster is too small: node=Node-871b21e8-6a55-6e8c-7aee-692496a7502a
>     writer.go:29: 2020-02-23T02:46:49.037Z [INFO]  TestCatalogConnectServiceNodes_good.server: member joined, marking health alive: member=Node-871b21e8-6a55-6e8c-7aee-692496a7502a
>     writer.go:29: 2020-02-23T02:46:49.116Z [INFO]  TestCatalogConnectServiceNodes_good: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.116Z [INFO]  TestCatalogConnectServiceNodes_good.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.116Z [DEBUG] TestCatalogConnectServiceNodes_good.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.116Z [WARN]  TestCatalogConnectServiceNodes_good.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.117Z [ERROR] TestCatalogConnectServiceNodes_good.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:49.117Z [DEBUG] TestCatalogConnectServiceNodes_good.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.118Z [WARN]  TestCatalogConnectServiceNodes_good.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: consul server down
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: Stopping server: protocol=DNS address=127.0.0.1:16967 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: Stopping server: protocol=DNS address=127.0.0.1:16967 network=udp
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: Stopping server: protocol=HTTP address=127.0.0.1:16968 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.120Z [INFO]  TestCatalogConnectServiceNodes_good: Endpoints down
> === CONT  TestCatalogServiceNodes_DistanceSort
> --- PASS: TestCatalogNodeServices (0.31s)
>     writer.go:29: 2020-02-23T02:46:48.876Z [WARN]  TestCatalogNodeServices: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.876Z [DEBUG] TestCatalogNodeServices.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.877Z [DEBUG] TestCatalogNodeServices.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.925Z [INFO]  TestCatalogNodeServices.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f24227ca-fbd1-7b5d-bb96-b792bff79aa6 Address:127.0.0.1:16978}]"
>     writer.go:29: 2020-02-23T02:46:48.925Z [INFO]  TestCatalogNodeServices.server.raft: entering follower state: follower="Node at 127.0.0.1:16978 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.926Z [INFO]  TestCatalogNodeServices.server.serf.wan: serf: EventMemberJoin: Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.926Z [INFO]  TestCatalogNodeServices.server.serf.lan: serf: EventMemberJoin: Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.926Z [INFO]  TestCatalogNodeServices.server: Handled event for server in area: event=member-join server=Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.926Z [INFO]  TestCatalogNodeServices.server: Adding LAN server: server="Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6 (Addr: tcp/127.0.0.1:16978) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.926Z [INFO]  TestCatalogNodeServices: Started DNS server: address=127.0.0.1:16973 network=udp
>     writer.go:29: 2020-02-23T02:46:48.926Z [INFO]  TestCatalogNodeServices: Started DNS server: address=127.0.0.1:16973 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.927Z [INFO]  TestCatalogNodeServices: Started HTTP server: address=127.0.0.1:16974 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.927Z [INFO]  TestCatalogNodeServices: started state syncer
>     writer.go:29: 2020-02-23T02:46:48.988Z [WARN]  TestCatalogNodeServices.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:48.988Z [INFO]  TestCatalogNodeServices.server.raft: entering candidate state: node="Node at 127.0.0.1:16978 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:48.991Z [DEBUG] TestCatalogNodeServices.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:48.991Z [DEBUG] TestCatalogNodeServices.server.raft: vote granted: from=f24227ca-fbd1-7b5d-bb96-b792bff79aa6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:48.991Z [INFO]  TestCatalogNodeServices.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:48.991Z [INFO]  TestCatalogNodeServices.server.raft: entering leader state: leader="Node at 127.0.0.1:16978 [Leader]"
>     writer.go:29: 2020-02-23T02:46:48.991Z [INFO]  TestCatalogNodeServices.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:48.992Z [INFO]  TestCatalogNodeServices.server: New leader elected: payload=Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6
>     writer.go:29: 2020-02-23T02:46:49.000Z [INFO]  TestCatalogNodeServices: Synced node info
>     writer.go:29: 2020-02-23T02:46:49.004Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.010Z [INFO]  TestCatalogNodeServices.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.010Z [INFO]  TestCatalogNodeServices.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.010Z [DEBUG] TestCatalogNodeServices.server: Skipping self join check for node since the cluster is too small: node=Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6
>     writer.go:29: 2020-02-23T02:46:49.010Z [INFO]  TestCatalogNodeServices.server: member joined, marking health alive: member=Node-f24227ca-fbd1-7b5d-bb96-b792bff79aa6
>     writer.go:29: 2020-02-23T02:46:49.171Z [INFO]  TestCatalogNodeServices: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.171Z [INFO]  TestCatalogNodeServices.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.171Z [DEBUG] TestCatalogNodeServices.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.171Z [WARN]  TestCatalogNodeServices.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.171Z [DEBUG] TestCatalogNodeServices.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.172Z [WARN]  TestCatalogNodeServices.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.174Z [INFO]  TestCatalogNodeServices.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.174Z [INFO]  TestCatalogNodeServices: consul server down
>     writer.go:29: 2020-02-23T02:46:49.174Z [INFO]  TestCatalogNodeServices: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.174Z [INFO]  TestCatalogNodeServices: Stopping server: protocol=DNS address=127.0.0.1:16973 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.174Z [INFO]  TestCatalogNodeServices: Stopping server: protocol=DNS address=127.0.0.1:16973 network=udp
>     writer.go:29: 2020-02-23T02:46:49.175Z [INFO]  TestCatalogNodeServices: Stopping server: protocol=HTTP address=127.0.0.1:16974 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.175Z [INFO]  TestCatalogNodeServices: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.175Z [INFO]  TestCatalogNodeServices: Endpoints down
> === CONT  TestCatalogServiceNodes_Filter
> --- PASS: TestCatalogServiceNodes_ConnectProxy (0.40s)
>     writer.go:29: 2020-02-23T02:46:48.931Z [WARN]  TestCatalogServiceNodes_ConnectProxy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:48.931Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:48.931Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:48.964Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f923088e-9b21-def0-8ccf-b7067dd6ec22 Address:127.0.0.1:16960}]"
>     writer.go:29: 2020-02-23T02:46:48.964Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.raft: entering follower state: follower="Node at 127.0.0.1:16960 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:48.964Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.serf.wan: serf: EventMemberJoin: Node-f923088e-9b21-def0-8ccf-b7067dd6ec22.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.965Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.serf.lan: serf: EventMemberJoin: Node-f923088e-9b21-def0-8ccf-b7067dd6ec22 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:48.965Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server: Handled event for server in area: event=member-join server=Node-f923088e-9b21-def0-8ccf-b7067dd6ec22.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:48.965Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server: Adding LAN server: server="Node-f923088e-9b21-def0-8ccf-b7067dd6ec22 (Addr: tcp/127.0.0.1:16960) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:48.965Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Started DNS server: address=127.0.0.1:16955 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.965Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Started DNS server: address=127.0.0.1:16955 network=udp
>     writer.go:29: 2020-02-23T02:46:48.966Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Started HTTP server: address=127.0.0.1:16956 network=tcp
>     writer.go:29: 2020-02-23T02:46:48.966Z [INFO]  TestCatalogServiceNodes_ConnectProxy: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.034Z [WARN]  TestCatalogServiceNodes_ConnectProxy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.034Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.raft: entering candidate state: node="Node at 127.0.0.1:16960 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.039Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.039Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.server.raft: vote granted: from=f923088e-9b21-def0-8ccf-b7067dd6ec22 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.039Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.039Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.raft: entering leader state: leader="Node at 127.0.0.1:16960 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.039Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.039Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server: New leader elected: payload=Node-f923088e-9b21-def0-8ccf-b7067dd6ec22
>     writer.go:29: 2020-02-23T02:46:49.046Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.055Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.055Z [INFO]  TestCatalogServiceNodes_ConnectProxy.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.055Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.server: Skipping self join check for node since the cluster is too small: node=Node-f923088e-9b21-def0-8ccf-b7067dd6ec22
>     writer.go:29: 2020-02-23T02:46:49.055Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server: member joined, marking health alive: member=Node-f923088e-9b21-def0-8ccf-b7067dd6ec22
>     writer.go:29: 2020-02-23T02:46:49.187Z [DEBUG] TestCatalogServiceNodes_ConnectProxy: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:49.191Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Synced node info
>     writer.go:29: 2020-02-23T02:46:49.313Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.313Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.313Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.313Z [WARN]  TestCatalogServiceNodes_ConnectProxy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.313Z [DEBUG] TestCatalogServiceNodes_ConnectProxy.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.315Z [WARN]  TestCatalogServiceNodes_ConnectProxy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: consul server down
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Stopping server: protocol=DNS address=127.0.0.1:16955 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Stopping server: protocol=DNS address=127.0.0.1:16955 network=udp
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Stopping server: protocol=HTTP address=127.0.0.1:16956 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.317Z [INFO]  TestCatalogServiceNodes_ConnectProxy: Endpoints down
> === CONT  TestCatalogServiceNodes_NodeMetaFilter
> === RUN   TestConnectCAConfig/updating_config_with_same_state
> --- PASS: TestCatalogServiceNodes_DistanceSort (0.33s)
>     writer.go:29: 2020-02-23T02:46:49.127Z [WARN]  TestCatalogServiceNodes_DistanceSort: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.128Z [DEBUG] TestCatalogServiceNodes_DistanceSort.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.128Z [DEBUG] TestCatalogServiceNodes_DistanceSort.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.139Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a4888f4e-e606-0210-6f7e-7fd48ac590fc Address:127.0.0.1:16996}]"
>     writer.go:29: 2020-02-23T02:46:49.139Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.raft: entering follower state: follower="Node at 127.0.0.1:16996 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.140Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.serf.wan: serf: EventMemberJoin: Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.140Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.serf.lan: serf: EventMemberJoin: Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.140Z [INFO]  TestCatalogServiceNodes_DistanceSort.server: Adding LAN server: server="Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc (Addr: tcp/127.0.0.1:16996) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.140Z [INFO]  TestCatalogServiceNodes_DistanceSort: Started DNS server: address=127.0.0.1:16991 network=udp
>     writer.go:29: 2020-02-23T02:46:49.140Z [INFO]  TestCatalogServiceNodes_DistanceSort.server: Handled event for server in area: event=member-join server=Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.141Z [INFO]  TestCatalogServiceNodes_DistanceSort: Started DNS server: address=127.0.0.1:16991 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.141Z [INFO]  TestCatalogServiceNodes_DistanceSort: Started HTTP server: address=127.0.0.1:16992 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.141Z [INFO]  TestCatalogServiceNodes_DistanceSort: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.187Z [WARN]  TestCatalogServiceNodes_DistanceSort.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.187Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.raft: entering candidate state: node="Node at 127.0.0.1:16996 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.192Z [DEBUG] TestCatalogServiceNodes_DistanceSort.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.192Z [DEBUG] TestCatalogServiceNodes_DistanceSort.server.raft: vote granted: from=a4888f4e-e606-0210-6f7e-7fd48ac590fc term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.192Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.192Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.raft: entering leader state: leader="Node at 127.0.0.1:16996 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.192Z [INFO]  TestCatalogServiceNodes_DistanceSort.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.192Z [INFO]  TestCatalogServiceNodes_DistanceSort.server: New leader elected: payload=Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc
>     writer.go:29: 2020-02-23T02:46:49.200Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.208Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.208Z [INFO]  TestCatalogServiceNodes_DistanceSort.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.208Z [DEBUG] TestCatalogServiceNodes_DistanceSort.server: Skipping self join check for node since the cluster is too small: node=Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc
>     writer.go:29: 2020-02-23T02:46:49.208Z [INFO]  TestCatalogServiceNodes_DistanceSort.server: member joined, marking health alive: member=Node-a4888f4e-e606-0210-6f7e-7fd48ac590fc
>     writer.go:29: 2020-02-23T02:46:49.215Z [DEBUG] TestCatalogServiceNodes_DistanceSort: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:49.217Z [INFO]  TestCatalogServiceNodes_DistanceSort: Synced node info
>     writer.go:29: 2020-02-23T02:46:49.217Z [DEBUG] TestCatalogServiceNodes_DistanceSort: Node info in sync
>     writer.go:29: 2020-02-23T02:46:49.443Z [INFO]  TestCatalogServiceNodes_DistanceSort: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.443Z [INFO]  TestCatalogServiceNodes_DistanceSort.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.443Z [DEBUG] TestCatalogServiceNodes_DistanceSort.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.443Z [WARN]  TestCatalogServiceNodes_DistanceSort.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.443Z [DEBUG] TestCatalogServiceNodes_DistanceSort.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.445Z [WARN]  TestCatalogServiceNodes_DistanceSort.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.446Z [INFO]  TestCatalogServiceNodes_DistanceSort.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: consul server down
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: Stopping server: protocol=DNS address=127.0.0.1:16991 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: Stopping server: protocol=DNS address=127.0.0.1:16991 network=udp
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: Stopping server: protocol=HTTP address=127.0.0.1:16992 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.447Z [INFO]  TestCatalogServiceNodes_DistanceSort: Endpoints down
> === CONT  TestCatalogServiceNodes
> --- PASS: TestCatalogServiceNodes_Filter (0.33s)
>     writer.go:29: 2020-02-23T02:46:49.182Z [WARN]  TestCatalogServiceNodes_Filter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.182Z [DEBUG] TestCatalogServiceNodes_Filter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.183Z [DEBUG] TestCatalogServiceNodes_Filter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.195Z [INFO]  TestCatalogServiceNodes_Filter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:62aa91c1-76a9-8514-6a18-bb8fdc0ed720 Address:127.0.0.1:16984}]"
>     writer.go:29: 2020-02-23T02:46:49.195Z [INFO]  TestCatalogServiceNodes_Filter.server.raft: entering follower state: follower="Node at 127.0.0.1:16984 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.196Z [INFO]  TestCatalogServiceNodes_Filter.server.serf.wan: serf: EventMemberJoin: Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter.server.serf.lan: serf: EventMemberJoin: Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter.server: Handled event for server in area: event=member-join server=Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter.server: Adding LAN server: server="Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720 (Addr: tcp/127.0.0.1:16984) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter: Started DNS server: address=127.0.0.1:16979 network=udp
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter: Started DNS server: address=127.0.0.1:16979 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter: Started HTTP server: address=127.0.0.1:16980 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.197Z [INFO]  TestCatalogServiceNodes_Filter: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.239Z [WARN]  TestCatalogServiceNodes_Filter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.239Z [INFO]  TestCatalogServiceNodes_Filter.server.raft: entering candidate state: node="Node at 127.0.0.1:16984 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.242Z [DEBUG] TestCatalogServiceNodes_Filter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.242Z [DEBUG] TestCatalogServiceNodes_Filter.server.raft: vote granted: from=62aa91c1-76a9-8514-6a18-bb8fdc0ed720 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.242Z [INFO]  TestCatalogServiceNodes_Filter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.242Z [INFO]  TestCatalogServiceNodes_Filter.server.raft: entering leader state: leader="Node at 127.0.0.1:16984 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.242Z [INFO]  TestCatalogServiceNodes_Filter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.242Z [INFO]  TestCatalogServiceNodes_Filter.server: New leader elected: payload=Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720
>     writer.go:29: 2020-02-23T02:46:49.249Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.258Z [INFO]  TestCatalogServiceNodes_Filter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.258Z [INFO]  TestCatalogServiceNodes_Filter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.258Z [DEBUG] TestCatalogServiceNodes_Filter.server: Skipping self join check for node since the cluster is too small: node=Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720
>     writer.go:29: 2020-02-23T02:46:49.258Z [INFO]  TestCatalogServiceNodes_Filter.server: member joined, marking health alive: member=Node-62aa91c1-76a9-8514-6a18-bb8fdc0ed720
>     writer.go:29: 2020-02-23T02:46:49.444Z [DEBUG] TestCatalogServiceNodes_Filter: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:49.449Z [INFO]  TestCatalogServiceNodes_Filter: Synced node info
>     writer.go:29: 2020-02-23T02:46:49.506Z [INFO]  TestCatalogServiceNodes_Filter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.506Z [INFO]  TestCatalogServiceNodes_Filter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.506Z [DEBUG] TestCatalogServiceNodes_Filter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.506Z [WARN]  TestCatalogServiceNodes_Filter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.506Z [DEBUG] TestCatalogServiceNodes_Filter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.508Z [WARN]  TestCatalogServiceNodes_Filter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: consul server down
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: Stopping server: protocol=DNS address=127.0.0.1:16979 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: Stopping server: protocol=DNS address=127.0.0.1:16979 network=udp
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: Stopping server: protocol=HTTP address=127.0.0.1:16980 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.509Z [INFO]  TestCatalogServiceNodes_Filter: Endpoints down
> === CONT  TestCatalogRegister_checkRegistration
> --- PASS: TestCatalogServiceNodes_NodeMetaFilter (0.21s)
>     writer.go:29: 2020-02-23T02:46:49.323Z [WARN]  TestCatalogServiceNodes_NodeMetaFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.323Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.324Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.350Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:71f04740-b1ef-513c-e374-eeed9f9be184 Address:127.0.0.1:17002}]"
>     writer.go:29: 2020-02-23T02:46:49.351Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.serf.wan: serf: EventMemberJoin: Node-71f04740-b1ef-513c-e374-eeed9f9be184.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.352Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:17002 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.354Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.serf.lan: serf: EventMemberJoin: Node-71f04740-b1ef-513c-e374-eeed9f9be184 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.356Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server: Adding LAN server: server="Node-71f04740-b1ef-513c-e374-eeed9f9be184 (Addr: tcp/127.0.0.1:17002) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.359Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server: Handled event for server in area: event=member-join server=Node-71f04740-b1ef-513c-e374-eeed9f9be184.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.360Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Started DNS server: address=127.0.0.1:16997 network=udp
>     writer.go:29: 2020-02-23T02:46:49.361Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Started DNS server: address=127.0.0.1:16997 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.364Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Started HTTP server: address=127.0.0.1:16998 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.364Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.409Z [WARN]  TestCatalogServiceNodes_NodeMetaFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.409Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:17002 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.417Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.417Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.server.raft: vote granted: from=71f04740-b1ef-513c-e374-eeed9f9be184 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.417Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.417Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:17002 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.418Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.418Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server: New leader elected: payload=Node-71f04740-b1ef-513c-e374-eeed9f9be184
>     writer.go:29: 2020-02-23T02:46:49.425Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.434Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.434Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.434Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.server: Skipping self join check for node since the cluster is too small: node=Node-71f04740-b1ef-513c-e374-eeed9f9be184
>     writer.go:29: 2020-02-23T02:46:49.434Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server: member joined, marking health alive: member=Node-71f04740-b1ef-513c-e374-eeed9f9be184
>     writer.go:29: 2020-02-23T02:46:49.523Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.523Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.523Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.523Z [WARN]  TestCatalogServiceNodes_NodeMetaFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.523Z [ERROR] TestCatalogServiceNodes_NodeMetaFilter.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:49.523Z [DEBUG] TestCatalogServiceNodes_NodeMetaFilter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.525Z [WARN]  TestCatalogServiceNodes_NodeMetaFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: consul server down
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Stopping server: protocol=DNS address=127.0.0.1:16997 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Stopping server: protocol=DNS address=127.0.0.1:16997 network=udp
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Stopping server: protocol=HTTP address=127.0.0.1:16998 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes_NodeMetaFilter: Endpoints down
> === CONT  TestCatalogServices_NodeMetaFilter
> --- PASS: TestConnectCAConfig (2.35s)
>     --- PASS: TestConnectCAConfig/basic (0.39s)
>         writer.go:29: 2020-02-23T02:46:47.291Z [WARN]  TestConnectCAConfig/basic: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:47.291Z [DEBUG] TestConnectCAConfig/basic.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:47.291Z [DEBUG] TestConnectCAConfig/basic.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:47.304Z [INFO]  TestConnectCAConfig/basic.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:63bdbaf7-beac-d610-52ce-60e56213091e Address:127.0.0.1:16864}]"
>         writer.go:29: 2020-02-23T02:46:47.304Z [INFO]  TestConnectCAConfig/basic.server.raft: entering follower state: follower="Node at 127.0.0.1:16864 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:47.305Z [INFO]  TestConnectCAConfig/basic.server.serf.wan: serf: EventMemberJoin: Node-63bdbaf7-beac-d610-52ce-60e56213091e.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:47.306Z [INFO]  TestConnectCAConfig/basic.server.serf.lan: serf: EventMemberJoin: Node-63bdbaf7-beac-d610-52ce-60e56213091e 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:47.306Z [INFO]  TestConnectCAConfig/basic.server: Handled event for server in area: event=member-join server=Node-63bdbaf7-beac-d610-52ce-60e56213091e.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:47.306Z [INFO]  TestConnectCAConfig/basic.server: Adding LAN server: server="Node-63bdbaf7-beac-d610-52ce-60e56213091e (Addr: tcp/127.0.0.1:16864) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:47.306Z [INFO]  TestConnectCAConfig/basic: Started DNS server: address=127.0.0.1:16859 network=tcp
>         writer.go:29: 2020-02-23T02:46:47.306Z [INFO]  TestConnectCAConfig/basic: Started DNS server: address=127.0.0.1:16859 network=udp
>         writer.go:29: 2020-02-23T02:46:47.307Z [INFO]  TestConnectCAConfig/basic: Started HTTP server: address=127.0.0.1:16860 network=tcp
>         writer.go:29: 2020-02-23T02:46:47.307Z [INFO]  TestConnectCAConfig/basic: started state syncer
>         writer.go:29: 2020-02-23T02:46:47.353Z [WARN]  TestConnectCAConfig/basic.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:47.353Z [INFO]  TestConnectCAConfig/basic.server.raft: entering candidate state: node="Node at 127.0.0.1:16864 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:47.357Z [DEBUG] TestConnectCAConfig/basic.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:47.357Z [DEBUG] TestConnectCAConfig/basic.server.raft: vote granted: from=63bdbaf7-beac-d610-52ce-60e56213091e term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:47.357Z [INFO]  TestConnectCAConfig/basic.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:47.357Z [INFO]  TestConnectCAConfig/basic.server.raft: entering leader state: leader="Node at 127.0.0.1:16864 [Leader]"
>         writer.go:29: 2020-02-23T02:46:47.357Z [INFO]  TestConnectCAConfig/basic.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:47.357Z [INFO]  TestConnectCAConfig/basic.server: New leader elected: payload=Node-63bdbaf7-beac-d610-52ce-60e56213091e
>         writer.go:29: 2020-02-23T02:46:47.365Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:47.373Z [INFO]  TestConnectCAConfig/basic.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:47.373Z [INFO]  TestConnectCAConfig/basic.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:47.373Z [DEBUG] TestConnectCAConfig/basic.server: Skipping self join check for node since the cluster is too small: node=Node-63bdbaf7-beac-d610-52ce-60e56213091e
>         writer.go:29: 2020-02-23T02:46:47.373Z [INFO]  TestConnectCAConfig/basic.server: member joined, marking health alive: member=Node-63bdbaf7-beac-d610-52ce-60e56213091e
>         writer.go:29: 2020-02-23T02:46:47.618Z [DEBUG] TestConnectCAConfig/basic: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:47.620Z [INFO]  TestConnectCAConfig/basic: Synced node info
>         writer.go:29: 2020-02-23T02:46:47.620Z [DEBUG] TestConnectCAConfig/basic: Node info in sync
>         writer.go:29: 2020-02-23T02:46:47.673Z [INFO]  TestConnectCAConfig/basic.server.connect: CA provider config updated
>         writer.go:29: 2020-02-23T02:46:47.673Z [INFO]  TestConnectCAConfig/basic: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:47.673Z [INFO]  TestConnectCAConfig/basic.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:47.673Z [DEBUG] TestConnectCAConfig/basic.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:47.673Z [WARN]  TestConnectCAConfig/basic.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:47.673Z [DEBUG] TestConnectCAConfig/basic.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:47.675Z [WARN]  TestConnectCAConfig/basic.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: consul server down
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: shutdown complete
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: Stopping server: protocol=DNS address=127.0.0.1:16859 network=tcp
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: Stopping server: protocol=DNS address=127.0.0.1:16859 network=udp
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: Stopping server: protocol=HTTP address=127.0.0.1:16860 network=tcp
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:47.677Z [INFO]  TestConnectCAConfig/basic: Endpoints down
>     --- PASS: TestConnectCAConfig/basic_with_IntermediateCertTTL (0.54s)
>         writer.go:29: 2020-02-23T02:46:47.685Z [WARN]  TestConnectCAConfig/basic_with_IntermediateCertTTL: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:47.685Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:47.685Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:47.698Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4381c5aa-0669-8084-f397-cea6ce3e8aaf Address:127.0.0.1:16894}]"
>         writer.go:29: 2020-02-23T02:46:47.698Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.serf.wan: serf: EventMemberJoin: Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:47.699Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.serf.lan: serf: EventMemberJoin: Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:47.699Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Started DNS server: address=127.0.0.1:16889 network=udp
>         writer.go:29: 2020-02-23T02:46:47.699Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: entering follower state: follower="Node at 127.0.0.1:16894 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:47.699Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server: Adding LAN server: server="Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf (Addr: tcp/127.0.0.1:16894) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:47.699Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server: Handled event for server in area: event=member-join server=Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:47.699Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Started DNS server: address=127.0.0.1:16889 network=tcp
>         writer.go:29: 2020-02-23T02:46:47.700Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Started HTTP server: address=127.0.0.1:16890 network=tcp
>         writer.go:29: 2020-02-23T02:46:47.700Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: started state syncer
>         writer.go:29: 2020-02-23T02:46:47.760Z [WARN]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:47.760Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: entering candidate state: node="Node at 127.0.0.1:16894 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:47.763Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:47.763Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: vote granted: from=4381c5aa-0669-8084-f397-cea6ce3e8aaf term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:47.763Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:47.763Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.raft: entering leader state: leader="Node at 127.0.0.1:16894 [Leader]"
>         writer.go:29: 2020-02-23T02:46:47.763Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:47.763Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server: New leader elected: payload=Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf
>         writer.go:29: 2020-02-23T02:46:47.775Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:47.783Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:47.783Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:47.783Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.server: Skipping self join check for node since the cluster is too small: node=Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf
>         writer.go:29: 2020-02-23T02:46:47.783Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server: member joined, marking health alive: member=Node-4381c5aa-0669-8084-f397-cea6ce3e8aaf
>         writer.go:29: 2020-02-23T02:46:47.931Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:47.933Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Synced node info
>         writer.go:29: 2020-02-23T02:46:47.954Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:47.954Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL: Node info in sync
>         writer.go:29: 2020-02-23T02:46:47.954Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL: Node info in sync
>         writer.go:29: 2020-02-23T02:46:48.176Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.connect: CA provider config updated
>         writer.go:29: 2020-02-23T02:46:48.176Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:48.176Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:48.176Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.176Z [WARN]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:48.176Z [DEBUG] TestConnectCAConfig/basic_with_IntermediateCertTTL.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.219Z [WARN]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:48.221Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:48.221Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: consul server down
>         writer.go:29: 2020-02-23T02:46:48.221Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: shutdown complete
>         writer.go:29: 2020-02-23T02:46:48.221Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Stopping server: protocol=DNS address=127.0.0.1:16889 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.221Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Stopping server: protocol=DNS address=127.0.0.1:16889 network=udp
>         writer.go:29: 2020-02-23T02:46:48.222Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Stopping server: protocol=HTTP address=127.0.0.1:16890 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.222Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:48.222Z [INFO]  TestConnectCAConfig/basic_with_IntermediateCertTTL: Endpoints down
>     --- PASS: TestConnectCAConfig/force_without_cross_sign_CamelCase (0.45s)
>         writer.go:29: 2020-02-23T02:46:48.232Z [WARN]  TestConnectCAConfig/force_without_cross_sign_CamelCase: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:48.233Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:48.235Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:48.246Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:36b82b4b-adc4-18e6-4f5e-77c2619e1fab Address:127.0.0.1:16912}]"
>         writer.go:29: 2020-02-23T02:46:48.247Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: entering follower state: follower="Node at 127.0.0.1:16912 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:48.247Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.serf.wan: serf: EventMemberJoin: Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:48.248Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.serf.lan: serf: EventMemberJoin: Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:48.248Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server: Handled event for server in area: event=member-join server=Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:48.248Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server: Adding LAN server: server="Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab (Addr: tcp/127.0.0.1:16912) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:48.248Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Started DNS server: address=127.0.0.1:16907 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.248Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Started DNS server: address=127.0.0.1:16907 network=udp
>         writer.go:29: 2020-02-23T02:46:48.249Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Started HTTP server: address=127.0.0.1:16908 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.249Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: started state syncer
>         writer.go:29: 2020-02-23T02:46:48.311Z [WARN]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:48.311Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: entering candidate state: node="Node at 127.0.0.1:16912 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:48.315Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:48.315Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: vote granted: from=36b82b4b-adc4-18e6-4f5e-77c2619e1fab term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:48.315Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:48.315Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.raft: entering leader state: leader="Node at 127.0.0.1:16912 [Leader]"
>         writer.go:29: 2020-02-23T02:46:48.315Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:48.315Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server: New leader elected: payload=Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab
>         writer.go:29: 2020-02-23T02:46:48.322Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:48.331Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:48.331Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.331Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.server: Skipping self join check for node since the cluster is too small: node=Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab
>         writer.go:29: 2020-02-23T02:46:48.331Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server: member joined, marking health alive: member=Node-36b82b4b-adc4-18e6-4f5e-77c2619e1fab
>         writer.go:29: 2020-02-23T02:46:48.446Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:48.489Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Synced node info
>         writer.go:29: 2020-02-23T02:46:48.651Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.connect: CA provider config updated
>         writer.go:29: 2020-02-23T02:46:48.652Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:48.652Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:48.652Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.652Z [WARN]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:48.652Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_CamelCase.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.665Z [WARN]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: consul server down
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: shutdown complete
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Stopping server: protocol=DNS address=127.0.0.1:16907 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Stopping server: protocol=DNS address=127.0.0.1:16907 network=udp
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Stopping server: protocol=HTTP address=127.0.0.1:16908 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:48.671Z [INFO]  TestConnectCAConfig/force_without_cross_sign_CamelCase: Endpoints down
>     --- PASS: TestConnectCAConfig/force_without_cross_sign_snake_case (0.28s)
>         writer.go:29: 2020-02-23T02:46:48.682Z [WARN]  TestConnectCAConfig/force_without_cross_sign_snake_case: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:48.682Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:48.683Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:48.695Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3 Address:127.0.0.1:16954}]"
>         writer.go:29: 2020-02-23T02:46:48.696Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.serf.wan: serf: EventMemberJoin: Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:48.696Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.serf.lan: serf: EventMemberJoin: Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:48.696Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Started DNS server: address=127.0.0.1:16949 network=udp
>         writer.go:29: 2020-02-23T02:46:48.696Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: entering follower state: follower="Node at 127.0.0.1:16954 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:48.697Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server: Adding LAN server: server="Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3 (Addr: tcp/127.0.0.1:16954) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:48.697Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server: Handled event for server in area: event=member-join server=Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:48.697Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Started DNS server: address=127.0.0.1:16949 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.697Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Started HTTP server: address=127.0.0.1:16950 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.697Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: started state syncer
>         writer.go:29: 2020-02-23T02:46:48.747Z [WARN]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:48.747Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: entering candidate state: node="Node at 127.0.0.1:16954 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:48.752Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:48.752Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: vote granted: from=ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:48.752Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:48.752Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.raft: entering leader state: leader="Node at 127.0.0.1:16954 [Leader]"
>         writer.go:29: 2020-02-23T02:46:48.752Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:48.752Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server: New leader elected: payload=Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3
>         writer.go:29: 2020-02-23T02:46:48.799Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:48.912Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:48.912Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.912Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.server: Skipping self join check for node since the cluster is too small: node=Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3
>         writer.go:29: 2020-02-23T02:46:48.912Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server: member joined, marking health alive: member=Node-ed9aa999-d67b-6e0c-4b60-a899c9fb7ff3
>         writer.go:29: 2020-02-23T02:46:48.949Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.connect: CA provider config updated
>         writer.go:29: 2020-02-23T02:46:48.949Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:48.949Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:48.949Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.949Z [WARN]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:48.949Z [ERROR] TestConnectCAConfig/force_without_cross_sign_snake_case.anti_entropy: failed to sync remote state: error="No cluster leader"
>         writer.go:29: 2020-02-23T02:46:48.949Z [DEBUG] TestConnectCAConfig/force_without_cross_sign_snake_case.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:48.950Z [WARN]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:48.952Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: consul server down
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: shutdown complete
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Stopping server: protocol=DNS address=127.0.0.1:16949 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Stopping server: protocol=DNS address=127.0.0.1:16949 network=udp
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Stopping server: protocol=HTTP address=127.0.0.1:16950 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:48.953Z [INFO]  TestConnectCAConfig/force_without_cross_sign_snake_case: Endpoints down
>     --- PASS: TestConnectCAConfig/setting_state_fails (0.39s)
>         writer.go:29: 2020-02-23T02:46:48.962Z [WARN]  TestConnectCAConfig/setting_state_fails: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:48.962Z [DEBUG] TestConnectCAConfig/setting_state_fails.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:48.962Z [DEBUG] TestConnectCAConfig/setting_state_fails.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:48.978Z [INFO]  TestConnectCAConfig/setting_state_fails.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2eaea582-7a2b-b752-067d-dec4bdb37ddb Address:127.0.0.1:16966}]"
>         writer.go:29: 2020-02-23T02:46:48.978Z [INFO]  TestConnectCAConfig/setting_state_fails.server.raft: entering follower state: follower="Node at 127.0.0.1:16966 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:48.979Z [INFO]  TestConnectCAConfig/setting_state_fails.server.serf.wan: serf: EventMemberJoin: Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:48.979Z [INFO]  TestConnectCAConfig/setting_state_fails.server.serf.lan: serf: EventMemberJoin: Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:48.979Z [INFO]  TestConnectCAConfig/setting_state_fails.server: Handled event for server in area: event=member-join server=Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:48.980Z [INFO]  TestConnectCAConfig/setting_state_fails.server: Adding LAN server: server="Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb (Addr: tcp/127.0.0.1:16966) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:48.980Z [INFO]  TestConnectCAConfig/setting_state_fails: Started DNS server: address=127.0.0.1:16961 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.980Z [INFO]  TestConnectCAConfig/setting_state_fails: Started DNS server: address=127.0.0.1:16961 network=udp
>         writer.go:29: 2020-02-23T02:46:48.980Z [INFO]  TestConnectCAConfig/setting_state_fails: Started HTTP server: address=127.0.0.1:16962 network=tcp
>         writer.go:29: 2020-02-23T02:46:48.980Z [INFO]  TestConnectCAConfig/setting_state_fails: started state syncer
>         writer.go:29: 2020-02-23T02:46:49.043Z [WARN]  TestConnectCAConfig/setting_state_fails.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:49.043Z [INFO]  TestConnectCAConfig/setting_state_fails.server.raft: entering candidate state: node="Node at 127.0.0.1:16966 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:49.048Z [DEBUG] TestConnectCAConfig/setting_state_fails.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:49.048Z [DEBUG] TestConnectCAConfig/setting_state_fails.server.raft: vote granted: from=2eaea582-7a2b-b752-067d-dec4bdb37ddb term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:49.048Z [INFO]  TestConnectCAConfig/setting_state_fails.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:49.048Z [INFO]  TestConnectCAConfig/setting_state_fails.server.raft: entering leader state: leader="Node at 127.0.0.1:16966 [Leader]"
>         writer.go:29: 2020-02-23T02:46:49.048Z [INFO]  TestConnectCAConfig/setting_state_fails.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:49.048Z [INFO]  TestConnectCAConfig/setting_state_fails.server: New leader elected: payload=Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb
>         writer.go:29: 2020-02-23T02:46:49.057Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:49.071Z [INFO]  TestConnectCAConfig/setting_state_fails.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:49.071Z [INFO]  TestConnectCAConfig/setting_state_fails.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:49.071Z [DEBUG] TestConnectCAConfig/setting_state_fails.server: Skipping self join check for node since the cluster is too small: node=Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb
>         writer.go:29: 2020-02-23T02:46:49.071Z [INFO]  TestConnectCAConfig/setting_state_fails.server: member joined, marking health alive: member=Node-2eaea582-7a2b-b752-067d-dec4bdb37ddb
>         writer.go:29: 2020-02-23T02:46:49.178Z [DEBUG] TestConnectCAConfig/setting_state_fails: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:49.181Z [INFO]  TestConnectCAConfig/setting_state_fails: Synced node info
>         writer.go:29: 2020-02-23T02:46:49.330Z [INFO]  TestConnectCAConfig/setting_state_fails: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:49.330Z [INFO]  TestConnectCAConfig/setting_state_fails.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:49.330Z [DEBUG] TestConnectCAConfig/setting_state_fails.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:49.330Z [WARN]  TestConnectCAConfig/setting_state_fails.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:49.330Z [DEBUG] TestConnectCAConfig/setting_state_fails.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:49.339Z [WARN]  TestConnectCAConfig/setting_state_fails.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: consul server down
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: shutdown complete
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: Stopping server: protocol=DNS address=127.0.0.1:16961 network=tcp
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: Stopping server: protocol=DNS address=127.0.0.1:16961 network=udp
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: Stopping server: protocol=HTTP address=127.0.0.1:16962 network=tcp
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:49.341Z [INFO]  TestConnectCAConfig/setting_state_fails: Endpoints down
>     --- PASS: TestConnectCAConfig/updating_config_with_same_state (0.30s)
>         writer.go:29: 2020-02-23T02:46:49.369Z [WARN]  TestConnectCAConfig/updating_config_with_same_state: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:49.370Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:49.370Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:49.388Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:defe3dae-641b-9eda-1818-286fc1f8eea7 Address:127.0.0.1:16990}]"
>         writer.go:29: 2020-02-23T02:46:49.388Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.raft: entering follower state: follower="Node at 127.0.0.1:16990 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:49.389Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.serf.wan: serf: EventMemberJoin: Node-defe3dae-641b-9eda-1818-286fc1f8eea7.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:49.389Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.serf.lan: serf: EventMemberJoin: Node-defe3dae-641b-9eda-1818-286fc1f8eea7 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:49.389Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server: Handled event for server in area: event=member-join server=Node-defe3dae-641b-9eda-1818-286fc1f8eea7.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:49.389Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server: Adding LAN server: server="Node-defe3dae-641b-9eda-1818-286fc1f8eea7 (Addr: tcp/127.0.0.1:16990) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:49.389Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Started DNS server: address=127.0.0.1:16985 network=udp
>         writer.go:29: 2020-02-23T02:46:49.389Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Started DNS server: address=127.0.0.1:16985 network=tcp
>         writer.go:29: 2020-02-23T02:46:49.390Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Started HTTP server: address=127.0.0.1:16986 network=tcp
>         writer.go:29: 2020-02-23T02:46:49.390Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: started state syncer
>         writer.go:29: 2020-02-23T02:46:49.443Z [WARN]  TestConnectCAConfig/updating_config_with_same_state.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:49.443Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.raft: entering candidate state: node="Node at 127.0.0.1:16990 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:49.449Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:49.449Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.server.raft: vote granted: from=defe3dae-641b-9eda-1818-286fc1f8eea7 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:49.450Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:49.450Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.raft: entering leader state: leader="Node at 127.0.0.1:16990 [Leader]"
>         writer.go:29: 2020-02-23T02:46:49.450Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:49.450Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server: New leader elected: payload=Node-defe3dae-641b-9eda-1818-286fc1f8eea7
>         writer.go:29: 2020-02-23T02:46:49.457Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:49.473Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.server: Skipping self join check for node since the cluster is too small: node=Node-defe3dae-641b-9eda-1818-286fc1f8eea7
>         writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server: member joined, marking health alive: member=Node-defe3dae-641b-9eda-1818-286fc1f8eea7
>         writer.go:29: 2020-02-23T02:46:49.543Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:49.546Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Synced node info
>         writer.go:29: 2020-02-23T02:46:49.546Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state: Node info in sync
>         writer.go:29: 2020-02-23T02:46:49.633Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.connect: CA provider config updated
>         writer.go:29: 2020-02-23T02:46:49.633Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:49.633Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:49.633Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:49.633Z [WARN]  TestConnectCAConfig/updating_config_with_same_state.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:49.633Z [DEBUG] TestConnectCAConfig/updating_config_with_same_state.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:49.635Z [WARN]  TestConnectCAConfig/updating_config_with_same_state.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: consul server down
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: shutdown complete
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Stopping server: protocol=DNS address=127.0.0.1:16985 network=tcp
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Stopping server: protocol=DNS address=127.0.0.1:16985 network=udp
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Stopping server: protocol=HTTP address=127.0.0.1:16986 network=tcp
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:49.637Z [INFO]  TestConnectCAConfig/updating_config_with_same_state: Endpoints down
> === CONT  TestCatalogServices
> --- PASS: TestCatalogServiceNodes (0.21s)
>     writer.go:29: 2020-02-23T02:46:49.454Z [WARN]  TestCatalogServiceNodes: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.455Z [DEBUG] TestCatalogServiceNodes.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.455Z [DEBUG] TestCatalogServiceNodes.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.471Z [INFO]  TestCatalogServiceNodes.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:44c6ac11-ea25-45ef-7564-ca836490c145 Address:127.0.0.1:17008}]"
>     writer.go:29: 2020-02-23T02:46:49.471Z [INFO]  TestCatalogServiceNodes.server.raft: entering follower state: follower="Node at 127.0.0.1:17008 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestCatalogServiceNodes.server.serf.wan: serf: EventMemberJoin: Node-44c6ac11-ea25-45ef-7564-ca836490c145.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestCatalogServiceNodes.server.serf.lan: serf: EventMemberJoin: Node-44c6ac11-ea25-45ef-7564-ca836490c145 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestCatalogServiceNodes.server: Adding LAN server: server="Node-44c6ac11-ea25-45ef-7564-ca836490c145 (Addr: tcp/127.0.0.1:17008) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestCatalogServiceNodes: Started DNS server: address=127.0.0.1:17003 network=udp
>     writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestCatalogServiceNodes.server: Handled event for server in area: event=member-join server=Node-44c6ac11-ea25-45ef-7564-ca836490c145.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.473Z [INFO]  TestCatalogServiceNodes: Started DNS server: address=127.0.0.1:17003 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.474Z [INFO]  TestCatalogServiceNodes: Started HTTP server: address=127.0.0.1:17004 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.474Z [INFO]  TestCatalogServiceNodes: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.521Z [WARN]  TestCatalogServiceNodes.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.521Z [INFO]  TestCatalogServiceNodes.server.raft: entering candidate state: node="Node at 127.0.0.1:17008 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.527Z [DEBUG] TestCatalogServiceNodes.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.527Z [DEBUG] TestCatalogServiceNodes.server.raft: vote granted: from=44c6ac11-ea25-45ef-7564-ca836490c145 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes.server.raft: entering leader state: leader="Node at 127.0.0.1:17008 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.527Z [INFO]  TestCatalogServiceNodes.server: New leader elected: payload=Node-44c6ac11-ea25-45ef-7564-ca836490c145
>     writer.go:29: 2020-02-23T02:46:49.538Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.548Z [INFO]  TestCatalogServiceNodes.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.548Z [INFO]  TestCatalogServiceNodes.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.548Z [DEBUG] TestCatalogServiceNodes.server: Skipping self join check for node since the cluster is too small: node=Node-44c6ac11-ea25-45ef-7564-ca836490c145
>     writer.go:29: 2020-02-23T02:46:49.548Z [INFO]  TestCatalogServiceNodes.server: member joined, marking health alive: member=Node-44c6ac11-ea25-45ef-7564-ca836490c145
>     writer.go:29: 2020-02-23T02:46:49.656Z [INFO]  TestCatalogServiceNodes: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.656Z [INFO]  TestCatalogServiceNodes.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.656Z [DEBUG] TestCatalogServiceNodes.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.656Z [WARN]  TestCatalogServiceNodes.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.657Z [ERROR] TestCatalogServiceNodes.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:49.657Z [DEBUG] TestCatalogServiceNodes.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.658Z [WARN]  TestCatalogServiceNodes.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: consul server down
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: Stopping server: protocol=DNS address=127.0.0.1:17003 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: Stopping server: protocol=DNS address=127.0.0.1:17003 network=udp
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: Stopping server: protocol=HTTP address=127.0.0.1:17004 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.660Z [INFO]  TestCatalogServiceNodes: Endpoints down
> === CONT  TestCatalogNodes_DistanceSort
> --- PASS: TestCatalogServices (0.11s)
>     writer.go:29: 2020-02-23T02:46:49.645Z [WARN]  TestCatalogServices: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.645Z [DEBUG] TestCatalogServices.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.646Z [DEBUG] TestCatalogServices.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.654Z [INFO]  TestCatalogServices.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b41cf681-1d34-649f-e07e-19bbf9ba46cc Address:127.0.0.1:17026}]"
>     writer.go:29: 2020-02-23T02:46:49.655Z [INFO]  TestCatalogServices.server.serf.wan: serf: EventMemberJoin: Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.655Z [INFO]  TestCatalogServices.server.raft: entering follower state: follower="Node at 127.0.0.1:17026 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.655Z [INFO]  TestCatalogServices.server.serf.lan: serf: EventMemberJoin: Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.656Z [INFO]  TestCatalogServices.server: Handled event for server in area: event=member-join server=Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.656Z [INFO]  TestCatalogServices.server: Adding LAN server: server="Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc (Addr: tcp/127.0.0.1:17026) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.656Z [INFO]  TestCatalogServices: Started DNS server: address=127.0.0.1:17021 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.656Z [INFO]  TestCatalogServices: Started DNS server: address=127.0.0.1:17021 network=udp
>     writer.go:29: 2020-02-23T02:46:49.657Z [INFO]  TestCatalogServices: Started HTTP server: address=127.0.0.1:17022 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.657Z [INFO]  TestCatalogServices: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.705Z [WARN]  TestCatalogServices.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.706Z [INFO]  TestCatalogServices.server.raft: entering candidate state: node="Node at 127.0.0.1:17026 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.709Z [DEBUG] TestCatalogServices.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.709Z [DEBUG] TestCatalogServices.server.raft: vote granted: from=b41cf681-1d34-649f-e07e-19bbf9ba46cc term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.709Z [INFO]  TestCatalogServices.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.709Z [INFO]  TestCatalogServices.server.raft: entering leader state: leader="Node at 127.0.0.1:17026 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.709Z [INFO]  TestCatalogServices.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.709Z [INFO]  TestCatalogServices.server: New leader elected: payload=Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc
>     writer.go:29: 2020-02-23T02:46:49.717Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.725Z [INFO]  TestCatalogServices.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.725Z [INFO]  TestCatalogServices.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.725Z [DEBUG] TestCatalogServices.server: Skipping self join check for node since the cluster is too small: node=Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc
>     writer.go:29: 2020-02-23T02:46:49.725Z [INFO]  TestCatalogServices.server: member joined, marking health alive: member=Node-b41cf681-1d34-649f-e07e-19bbf9ba46cc
>     writer.go:29: 2020-02-23T02:46:49.744Z [INFO]  TestCatalogServices: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.744Z [INFO]  TestCatalogServices.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.744Z [DEBUG] TestCatalogServices.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.744Z [WARN]  TestCatalogServices.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.744Z [ERROR] TestCatalogServices.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:49.744Z [DEBUG] TestCatalogServices.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.745Z [WARN]  TestCatalogServices.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: consul server down
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: Stopping server: protocol=DNS address=127.0.0.1:17021 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: Stopping server: protocol=DNS address=127.0.0.1:17021 network=udp
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: Stopping server: protocol=HTTP address=127.0.0.1:17022 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.747Z [INFO]  TestCatalogServices: Endpoints down
> === CONT  TestCatalogNodes_Blocking
> --- PASS: TestCatalogRegister_checkRegistration (0.25s)
>     writer.go:29: 2020-02-23T02:46:49.518Z [WARN]  TestCatalogRegister_checkRegistration: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.518Z [DEBUG] TestCatalogRegister_checkRegistration.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.519Z [DEBUG] TestCatalogRegister_checkRegistration.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.530Z [INFO]  TestCatalogRegister_checkRegistration.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ecd1175f-67a5-2ad2-faf4-92c4309f6540 Address:127.0.0.1:17020}]"
>     writer.go:29: 2020-02-23T02:46:49.530Z [INFO]  TestCatalogRegister_checkRegistration.server.raft: entering follower state: follower="Node at 127.0.0.1:17020 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.531Z [INFO]  TestCatalogRegister_checkRegistration.server.serf.wan: serf: EventMemberJoin: Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.531Z [INFO]  TestCatalogRegister_checkRegistration.server.serf.lan: serf: EventMemberJoin: Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.531Z [INFO]  TestCatalogRegister_checkRegistration.server: Handled event for server in area: event=member-join server=Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.531Z [INFO]  TestCatalogRegister_checkRegistration.server: Adding LAN server: server="Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540 (Addr: tcp/127.0.0.1:17020) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.531Z [INFO]  TestCatalogRegister_checkRegistration: Started DNS server: address=127.0.0.1:17015 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.532Z [INFO]  TestCatalogRegister_checkRegistration: Started DNS server: address=127.0.0.1:17015 network=udp
>     writer.go:29: 2020-02-23T02:46:49.532Z [INFO]  TestCatalogRegister_checkRegistration: Started HTTP server: address=127.0.0.1:17016 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.532Z [INFO]  TestCatalogRegister_checkRegistration: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.572Z [WARN]  TestCatalogRegister_checkRegistration.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.572Z [INFO]  TestCatalogRegister_checkRegistration.server.raft: entering candidate state: node="Node at 127.0.0.1:17020 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.609Z [DEBUG] TestCatalogRegister_checkRegistration.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.609Z [DEBUG] TestCatalogRegister_checkRegistration.server.raft: vote granted: from=ecd1175f-67a5-2ad2-faf4-92c4309f6540 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.609Z [INFO]  TestCatalogRegister_checkRegistration.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.609Z [INFO]  TestCatalogRegister_checkRegistration.server.raft: entering leader state: leader="Node at 127.0.0.1:17020 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.609Z [INFO]  TestCatalogRegister_checkRegistration.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.609Z [INFO]  TestCatalogRegister_checkRegistration.server: New leader elected: payload=Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540
>     writer.go:29: 2020-02-23T02:46:49.617Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.625Z [INFO]  TestCatalogRegister_checkRegistration.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.625Z [INFO]  TestCatalogRegister_checkRegistration.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.626Z [DEBUG] TestCatalogRegister_checkRegistration.server: Skipping self join check for node since the cluster is too small: node=Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540
>     writer.go:29: 2020-02-23T02:46:49.626Z [INFO]  TestCatalogRegister_checkRegistration.server: member joined, marking health alive: member=Node-ecd1175f-67a5-2ad2-faf4-92c4309f6540
>     writer.go:29: 2020-02-23T02:46:49.760Z [INFO]  TestCatalogRegister_checkRegistration: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.761Z [INFO]  TestCatalogRegister_checkRegistration.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.761Z [DEBUG] TestCatalogRegister_checkRegistration.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.761Z [WARN]  TestCatalogRegister_checkRegistration.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.761Z [ERROR] TestCatalogRegister_checkRegistration.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:49.761Z [DEBUG] TestCatalogRegister_checkRegistration.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.762Z [WARN]  TestCatalogRegister_checkRegistration.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: consul server down
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: Stopping server: protocol=DNS address=127.0.0.1:17015 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: Stopping server: protocol=DNS address=127.0.0.1:17015 network=udp
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: Stopping server: protocol=HTTP address=127.0.0.1:17016 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.764Z [INFO]  TestCatalogRegister_checkRegistration: Endpoints down
> === CONT  TestCatalogNodes_Filter
> --- PASS: TestCatalogServices_NodeMetaFilter (0.42s)
>     writer.go:29: 2020-02-23T02:46:49.536Z [WARN]  TestCatalogServices_NodeMetaFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.536Z [DEBUG] TestCatalogServices_NodeMetaFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.536Z [DEBUG] TestCatalogServices_NodeMetaFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.550Z [INFO]  TestCatalogServices_NodeMetaFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d85020e8-5d9a-916c-a1cd-553c9215902f Address:127.0.0.1:17014}]"
>     writer.go:29: 2020-02-23T02:46:49.551Z [INFO]  TestCatalogServices_NodeMetaFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:17014 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.551Z [INFO]  TestCatalogServices_NodeMetaFilter.server.serf.wan: serf: EventMemberJoin: Node-d85020e8-5d9a-916c-a1cd-553c9215902f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter.server.serf.lan: serf: EventMemberJoin: Node-d85020e8-5d9a-916c-a1cd-553c9215902f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter: Started DNS server: address=127.0.0.1:17009 network=udp
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter.server: Adding LAN server: server="Node-d85020e8-5d9a-916c-a1cd-553c9215902f (Addr: tcp/127.0.0.1:17014) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter.server: Handled event for server in area: event=member-join server=Node-d85020e8-5d9a-916c-a1cd-553c9215902f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter: Started DNS server: address=127.0.0.1:17009 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter: Started HTTP server: address=127.0.0.1:17010 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.552Z [INFO]  TestCatalogServices_NodeMetaFilter: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.616Z [WARN]  TestCatalogServices_NodeMetaFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.617Z [INFO]  TestCatalogServices_NodeMetaFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:17014 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.620Z [DEBUG] TestCatalogServices_NodeMetaFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.621Z [DEBUG] TestCatalogServices_NodeMetaFilter.server.raft: vote granted: from=d85020e8-5d9a-916c-a1cd-553c9215902f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.621Z [INFO]  TestCatalogServices_NodeMetaFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.621Z [INFO]  TestCatalogServices_NodeMetaFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:17014 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.621Z [INFO]  TestCatalogServices_NodeMetaFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.621Z [INFO]  TestCatalogServices_NodeMetaFilter.server: New leader elected: payload=Node-d85020e8-5d9a-916c-a1cd-553c9215902f
>     writer.go:29: 2020-02-23T02:46:49.630Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.644Z [INFO]  TestCatalogServices_NodeMetaFilter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.644Z [INFO]  TestCatalogServices_NodeMetaFilter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.644Z [DEBUG] TestCatalogServices_NodeMetaFilter.server: Skipping self join check for node since the cluster is too small: node=Node-d85020e8-5d9a-916c-a1cd-553c9215902f
>     writer.go:29: 2020-02-23T02:46:49.644Z [INFO]  TestCatalogServices_NodeMetaFilter.server: member joined, marking health alive: member=Node-d85020e8-5d9a-916c-a1cd-553c9215902f
>     writer.go:29: 2020-02-23T02:46:49.833Z [DEBUG] TestCatalogServices_NodeMetaFilter: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:49.836Z [INFO]  TestCatalogServices_NodeMetaFilter: Synced node info
>     writer.go:29: 2020-02-23T02:46:49.938Z [INFO]  TestCatalogServices_NodeMetaFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:49.938Z [INFO]  TestCatalogServices_NodeMetaFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:49.938Z [DEBUG] TestCatalogServices_NodeMetaFilter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.938Z [WARN]  TestCatalogServices_NodeMetaFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.938Z [DEBUG] TestCatalogServices_NodeMetaFilter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.941Z [WARN]  TestCatalogServices_NodeMetaFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:49.942Z [INFO]  TestCatalogServices_NodeMetaFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: consul server down
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: Stopping server: protocol=DNS address=127.0.0.1:17009 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: Stopping server: protocol=DNS address=127.0.0.1:17009 network=udp
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: Stopping server: protocol=HTTP address=127.0.0.1:17010 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:49.943Z [INFO]  TestCatalogServices_NodeMetaFilter: Endpoints down
> === CONT  TestCatalogNodes_MetaFilter
> --- PASS: TestCatalogNodes_Filter (0.32s)
>     writer.go:29: 2020-02-23T02:46:49.771Z [WARN]  TestCatalogNodes_Filter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.771Z [DEBUG] TestCatalogNodes_Filter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.772Z [DEBUG] TestCatalogNodes_Filter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.781Z [INFO]  TestCatalogNodes_Filter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bea55a52-c3ea-15b6-6586-dc3e54cb5691 Address:127.0.0.1:17044}]"
>     writer.go:29: 2020-02-23T02:46:49.781Z [INFO]  TestCatalogNodes_Filter.server.raft: entering follower state: follower="Node at 127.0.0.1:17044 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.782Z [INFO]  TestCatalogNodes_Filter.server.serf.wan: serf: EventMemberJoin: Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.782Z [INFO]  TestCatalogNodes_Filter.server.serf.lan: serf: EventMemberJoin: Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.782Z [INFO]  TestCatalogNodes_Filter.server: Handled event for server in area: event=member-join server=Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.782Z [INFO]  TestCatalogNodes_Filter: Started DNS server: address=127.0.0.1:17039 network=udp
>     writer.go:29: 2020-02-23T02:46:49.782Z [INFO]  TestCatalogNodes_Filter.server: Adding LAN server: server="Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691 (Addr: tcp/127.0.0.1:17044) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.782Z [INFO]  TestCatalogNodes_Filter: Started DNS server: address=127.0.0.1:17039 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.783Z [INFO]  TestCatalogNodes_Filter: Started HTTP server: address=127.0.0.1:17040 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.783Z [INFO]  TestCatalogNodes_Filter: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.821Z [WARN]  TestCatalogNodes_Filter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.821Z [INFO]  TestCatalogNodes_Filter.server.raft: entering candidate state: node="Node at 127.0.0.1:17044 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.825Z [DEBUG] TestCatalogNodes_Filter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.825Z [DEBUG] TestCatalogNodes_Filter.server.raft: vote granted: from=bea55a52-c3ea-15b6-6586-dc3e54cb5691 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.825Z [INFO]  TestCatalogNodes_Filter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.825Z [INFO]  TestCatalogNodes_Filter.server.raft: entering leader state: leader="Node at 127.0.0.1:17044 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.825Z [INFO]  TestCatalogNodes_Filter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.825Z [INFO]  TestCatalogNodes_Filter.server: New leader elected: payload=Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691
>     writer.go:29: 2020-02-23T02:46:49.832Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.841Z [INFO]  TestCatalogNodes_Filter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.841Z [INFO]  TestCatalogNodes_Filter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.841Z [DEBUG] TestCatalogNodes_Filter.server: Skipping self join check for node since the cluster is too small: node=Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691
>     writer.go:29: 2020-02-23T02:46:49.841Z [INFO]  TestCatalogNodes_Filter.server: member joined, marking health alive: member=Node-bea55a52-c3ea-15b6-6586-dc3e54cb5691
>     writer.go:29: 2020-02-23T02:46:49.968Z [DEBUG] TestCatalogNodes_Filter: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:49.971Z [INFO]  TestCatalogNodes_Filter: Synced node info
>     writer.go:29: 2020-02-23T02:46:49.971Z [DEBUG] TestCatalogNodes_Filter: Node info in sync
>     writer.go:29: 2020-02-23T02:46:50.023Z [DEBUG] TestCatalogNodes_Filter: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.024Z [DEBUG] TestCatalogNodes_Filter: Node info in sync
>     writer.go:29: 2020-02-23T02:46:50.079Z [INFO]  TestCatalogNodes_Filter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.079Z [INFO]  TestCatalogNodes_Filter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.079Z [DEBUG] TestCatalogNodes_Filter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.079Z [WARN]  TestCatalogNodes_Filter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.079Z [DEBUG] TestCatalogNodes_Filter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.081Z [WARN]  TestCatalogNodes_Filter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.082Z [INFO]  TestCatalogNodes_Filter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: consul server down
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: Stopping server: protocol=DNS address=127.0.0.1:17039 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: Stopping server: protocol=DNS address=127.0.0.1:17039 network=udp
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: Stopping server: protocol=HTTP address=127.0.0.1:17040 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.083Z [INFO]  TestCatalogNodes_Filter: Endpoints down
> === CONT  TestCatalogNodes
> --- PASS: TestCatalogNodes_DistanceSort (0.43s)
>     writer.go:29: 2020-02-23T02:46:49.668Z [WARN]  TestCatalogNodes_DistanceSort: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.668Z [DEBUG] TestCatalogNodes_DistanceSort.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.668Z [DEBUG] TestCatalogNodes_DistanceSort.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.677Z [INFO]  TestCatalogNodes_DistanceSort.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:07e759cb-6459-dae1-84dc-e88e8ece3a97 Address:127.0.0.1:17032}]"
>     writer.go:29: 2020-02-23T02:46:49.677Z [INFO]  TestCatalogNodes_DistanceSort.server.raft: entering follower state: follower="Node at 127.0.0.1:17032 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.678Z [INFO]  TestCatalogNodes_DistanceSort.server.serf.wan: serf: EventMemberJoin: Node-07e759cb-6459-dae1-84dc-e88e8ece3a97.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.678Z [INFO]  TestCatalogNodes_DistanceSort.server.serf.lan: serf: EventMemberJoin: Node-07e759cb-6459-dae1-84dc-e88e8ece3a97 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.678Z [INFO]  TestCatalogNodes_DistanceSort.server: Adding LAN server: server="Node-07e759cb-6459-dae1-84dc-e88e8ece3a97 (Addr: tcp/127.0.0.1:17032) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.678Z [INFO]  TestCatalogNodes_DistanceSort.server: Handled event for server in area: event=member-join server=Node-07e759cb-6459-dae1-84dc-e88e8ece3a97.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.678Z [INFO]  TestCatalogNodes_DistanceSort: Started DNS server: address=127.0.0.1:17027 network=udp
>     writer.go:29: 2020-02-23T02:46:49.678Z [INFO]  TestCatalogNodes_DistanceSort: Started DNS server: address=127.0.0.1:17027 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.679Z [INFO]  TestCatalogNodes_DistanceSort: Started HTTP server: address=127.0.0.1:17028 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.679Z [INFO]  TestCatalogNodes_DistanceSort: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.718Z [WARN]  TestCatalogNodes_DistanceSort.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.718Z [INFO]  TestCatalogNodes_DistanceSort.server.raft: entering candidate state: node="Node at 127.0.0.1:17032 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.721Z [DEBUG] TestCatalogNodes_DistanceSort.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.722Z [DEBUG] TestCatalogNodes_DistanceSort.server.raft: vote granted: from=07e759cb-6459-dae1-84dc-e88e8ece3a97 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.722Z [INFO]  TestCatalogNodes_DistanceSort.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.722Z [INFO]  TestCatalogNodes_DistanceSort.server.raft: entering leader state: leader="Node at 127.0.0.1:17032 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.722Z [INFO]  TestCatalogNodes_DistanceSort.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.722Z [INFO]  TestCatalogNodes_DistanceSort.server: New leader elected: payload=Node-07e759cb-6459-dae1-84dc-e88e8ece3a97
>     writer.go:29: 2020-02-23T02:46:49.729Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.737Z [INFO]  TestCatalogNodes_DistanceSort.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.738Z [INFO]  TestCatalogNodes_DistanceSort.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.738Z [DEBUG] TestCatalogNodes_DistanceSort.server: Skipping self join check for node since the cluster is too small: node=Node-07e759cb-6459-dae1-84dc-e88e8ece3a97
>     writer.go:29: 2020-02-23T02:46:49.738Z [INFO]  TestCatalogNodes_DistanceSort.server: member joined, marking health alive: member=Node-07e759cb-6459-dae1-84dc-e88e8ece3a97
>     writer.go:29: 2020-02-23T02:46:50.082Z [INFO]  TestCatalogNodes_DistanceSort: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.082Z [INFO]  TestCatalogNodes_DistanceSort.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.082Z [DEBUG] TestCatalogNodes_DistanceSort.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.082Z [WARN]  TestCatalogNodes_DistanceSort.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.082Z [ERROR] TestCatalogNodes_DistanceSort.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:50.082Z [DEBUG] TestCatalogNodes_DistanceSort.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.088Z [WARN]  TestCatalogNodes_DistanceSort.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: consul server down
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: Stopping server: protocol=DNS address=127.0.0.1:17027 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: Stopping server: protocol=DNS address=127.0.0.1:17027 network=udp
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: Stopping server: protocol=HTTP address=127.0.0.1:17028 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.090Z [INFO]  TestCatalogNodes_DistanceSort: Endpoints down
> === CONT  TestCatalogDatacenters
> --- PASS: TestCatalogNodes_Blocking (0.42s)
>     writer.go:29: 2020-02-23T02:46:49.754Z [WARN]  TestCatalogNodes_Blocking: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.755Z [DEBUG] TestCatalogNodes_Blocking.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.755Z [DEBUG] TestCatalogNodes_Blocking.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.766Z [INFO]  TestCatalogNodes_Blocking.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:24a7bda5-182a-2ff9-1b6a-508280066ba6 Address:127.0.0.1:17050}]"
>     writer.go:29: 2020-02-23T02:46:49.767Z [INFO]  TestCatalogNodes_Blocking.server.raft: entering follower state: follower="Node at 127.0.0.1:17050 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.767Z [INFO]  TestCatalogNodes_Blocking.server.serf.wan: serf: EventMemberJoin: Node-24a7bda5-182a-2ff9-1b6a-508280066ba6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.768Z [INFO]  TestCatalogNodes_Blocking.server.serf.lan: serf: EventMemberJoin: Node-24a7bda5-182a-2ff9-1b6a-508280066ba6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.768Z [INFO]  TestCatalogNodes_Blocking.server: Handled event for server in area: event=member-join server=Node-24a7bda5-182a-2ff9-1b6a-508280066ba6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.768Z [INFO]  TestCatalogNodes_Blocking.server: Adding LAN server: server="Node-24a7bda5-182a-2ff9-1b6a-508280066ba6 (Addr: tcp/127.0.0.1:17050) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.768Z [INFO]  TestCatalogNodes_Blocking: Started DNS server: address=127.0.0.1:17045 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.768Z [INFO]  TestCatalogNodes_Blocking: Started DNS server: address=127.0.0.1:17045 network=udp
>     writer.go:29: 2020-02-23T02:46:49.768Z [INFO]  TestCatalogNodes_Blocking: Started HTTP server: address=127.0.0.1:17046 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.769Z [INFO]  TestCatalogNodes_Blocking: started state syncer
>     writer.go:29: 2020-02-23T02:46:49.829Z [WARN]  TestCatalogNodes_Blocking.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:49.829Z [INFO]  TestCatalogNodes_Blocking.server.raft: entering candidate state: node="Node at 127.0.0.1:17050 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:49.834Z [DEBUG] TestCatalogNodes_Blocking.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:49.834Z [DEBUG] TestCatalogNodes_Blocking.server.raft: vote granted: from=24a7bda5-182a-2ff9-1b6a-508280066ba6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:49.834Z [INFO]  TestCatalogNodes_Blocking.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:49.834Z [INFO]  TestCatalogNodes_Blocking.server.raft: entering leader state: leader="Node at 127.0.0.1:17050 [Leader]"
>     writer.go:29: 2020-02-23T02:46:49.834Z [INFO]  TestCatalogNodes_Blocking.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:49.834Z [INFO]  TestCatalogNodes_Blocking.server: New leader elected: payload=Node-24a7bda5-182a-2ff9-1b6a-508280066ba6
>     writer.go:29: 2020-02-23T02:46:49.844Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:49.852Z [INFO]  TestCatalogNodes_Blocking.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:49.852Z [INFO]  TestCatalogNodes_Blocking.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:49.852Z [DEBUG] TestCatalogNodes_Blocking.server: Skipping self join check for node since the cluster is too small: node=Node-24a7bda5-182a-2ff9-1b6a-508280066ba6
>     writer.go:29: 2020-02-23T02:46:49.852Z [INFO]  TestCatalogNodes_Blocking.server: member joined, marking health alive: member=Node-24a7bda5-182a-2ff9-1b6a-508280066ba6
>     writer.go:29: 2020-02-23T02:46:49.958Z [DEBUG] TestCatalogNodes_Blocking: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:49.964Z [INFO]  TestCatalogNodes_Blocking: Synced node info
>     writer.go:29: 2020-02-23T02:46:50.152Z [INFO]  TestCatalogNodes_Blocking: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.152Z [INFO]  TestCatalogNodes_Blocking.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.152Z [DEBUG] TestCatalogNodes_Blocking.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.152Z [WARN]  TestCatalogNodes_Blocking.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.152Z [DEBUG] TestCatalogNodes_Blocking.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.156Z [WARN]  TestCatalogNodes_Blocking.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: consul server down
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: Stopping server: protocol=DNS address=127.0.0.1:17045 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: Stopping server: protocol=DNS address=127.0.0.1:17045 network=udp
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: Stopping server: protocol=HTTP address=127.0.0.1:17046 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes_Blocking: Endpoints down
> === CONT  TestCatalogDeregister
> --- PASS: TestCatalogDatacenters (0.20s)
>     writer.go:29: 2020-02-23T02:46:50.099Z [WARN]  TestCatalogDatacenters: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.099Z [DEBUG] TestCatalogDatacenters.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.100Z [DEBUG] TestCatalogDatacenters.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.122Z [INFO]  TestCatalogDatacenters.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:db54f4c4-44c4-c8d4-9bb2-f7312fa54f05 Address:127.0.0.1:17080}]"
>     writer.go:29: 2020-02-23T02:46:50.122Z [INFO]  TestCatalogDatacenters.server.serf.wan: serf: EventMemberJoin: Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.122Z [INFO]  TestCatalogDatacenters.server.raft: entering follower state: follower="Node at 127.0.0.1:17080 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.123Z [INFO]  TestCatalogDatacenters.server.serf.lan: serf: EventMemberJoin: Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.124Z [INFO]  TestCatalogDatacenters: Started DNS server: address=127.0.0.1:17075 network=udp
>     writer.go:29: 2020-02-23T02:46:50.124Z [INFO]  TestCatalogDatacenters.server: Handled event for server in area: event=member-join server=Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.124Z [INFO]  TestCatalogDatacenters.server: Adding LAN server: server="Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05 (Addr: tcp/127.0.0.1:17080) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.125Z [INFO]  TestCatalogDatacenters: Started DNS server: address=127.0.0.1:17075 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.127Z [INFO]  TestCatalogDatacenters: Started HTTP server: address=127.0.0.1:17076 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.127Z [INFO]  TestCatalogDatacenters: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.187Z [WARN]  TestCatalogDatacenters.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.187Z [INFO]  TestCatalogDatacenters.server.raft: entering candidate state: node="Node at 127.0.0.1:17080 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.190Z [DEBUG] TestCatalogDatacenters.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.190Z [DEBUG] TestCatalogDatacenters.server.raft: vote granted: from=db54f4c4-44c4-c8d4-9bb2-f7312fa54f05 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.190Z [INFO]  TestCatalogDatacenters.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.190Z [INFO]  TestCatalogDatacenters.server.raft: entering leader state: leader="Node at 127.0.0.1:17080 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.190Z [INFO]  TestCatalogDatacenters.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.190Z [INFO]  TestCatalogDatacenters.server: New leader elected: payload=Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05
>     writer.go:29: 2020-02-23T02:46:50.197Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.205Z [INFO]  TestCatalogDatacenters.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.205Z [INFO]  TestCatalogDatacenters.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.205Z [DEBUG] TestCatalogDatacenters.server: Skipping self join check for node since the cluster is too small: node=Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05
>     writer.go:29: 2020-02-23T02:46:50.205Z [INFO]  TestCatalogDatacenters.server: member joined, marking health alive: member=Node-db54f4c4-44c4-c8d4-9bb2-f7312fa54f05
>     writer.go:29: 2020-02-23T02:46:50.284Z [INFO]  TestCatalogDatacenters: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.284Z [INFO]  TestCatalogDatacenters.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.284Z [DEBUG] TestCatalogDatacenters.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.284Z [WARN]  TestCatalogDatacenters.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.284Z [ERROR] TestCatalogDatacenters.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:50.284Z [DEBUG] TestCatalogDatacenters.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.286Z [WARN]  TestCatalogDatacenters.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: consul server down
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: Stopping server: protocol=DNS address=127.0.0.1:17075 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: Stopping server: protocol=DNS address=127.0.0.1:17075 network=udp
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: Stopping server: protocol=HTTP address=127.0.0.1:17076 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.288Z [INFO]  TestCatalogDatacenters: Endpoints down
> === CONT  TestCatalogRegister_Service_InvalidAddress
> --- PASS: TestCatalogNodes_MetaFilter (0.44s)
>     writer.go:29: 2020-02-23T02:46:49.950Z [WARN]  TestCatalogNodes_MetaFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:49.950Z [DEBUG] TestCatalogNodes_MetaFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:49.950Z [DEBUG] TestCatalogNodes_MetaFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:49.968Z [INFO]  TestCatalogNodes_MetaFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:15cebc8e-d9a4-0485-34a5-fff14f96d00a Address:127.0.0.1:17056}]"
>     writer.go:29: 2020-02-23T02:46:49.968Z [INFO]  TestCatalogNodes_MetaFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:17056 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:49.969Z [INFO]  TestCatalogNodes_MetaFilter.server.serf.wan: serf: EventMemberJoin: Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.970Z [INFO]  TestCatalogNodes_MetaFilter.server.serf.lan: serf: EventMemberJoin: Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:49.971Z [INFO]  TestCatalogNodes_MetaFilter.server: Adding LAN server: server="Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a (Addr: tcp/127.0.0.1:17056) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:49.971Z [INFO]  TestCatalogNodes_MetaFilter.server: Handled event for server in area: event=member-join server=Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:49.971Z [INFO]  TestCatalogNodes_MetaFilter: Started DNS server: address=127.0.0.1:17051 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.971Z [INFO]  TestCatalogNodes_MetaFilter: Started DNS server: address=127.0.0.1:17051 network=udp
>     writer.go:29: 2020-02-23T02:46:49.977Z [INFO]  TestCatalogNodes_MetaFilter: Started HTTP server: address=127.0.0.1:17052 network=tcp
>     writer.go:29: 2020-02-23T02:46:49.978Z [INFO]  TestCatalogNodes_MetaFilter: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.027Z [WARN]  TestCatalogNodes_MetaFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.027Z [INFO]  TestCatalogNodes_MetaFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:17056 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.030Z [DEBUG] TestCatalogNodes_MetaFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.030Z [DEBUG] TestCatalogNodes_MetaFilter.server.raft: vote granted: from=15cebc8e-d9a4-0485-34a5-fff14f96d00a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.030Z [INFO]  TestCatalogNodes_MetaFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.030Z [INFO]  TestCatalogNodes_MetaFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:17056 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.030Z [INFO]  TestCatalogNodes_MetaFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.030Z [INFO]  TestCatalogNodes_MetaFilter.server: New leader elected: payload=Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a
>     writer.go:29: 2020-02-23T02:46:50.037Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.046Z [INFO]  TestCatalogNodes_MetaFilter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.046Z [INFO]  TestCatalogNodes_MetaFilter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.047Z [DEBUG] TestCatalogNodes_MetaFilter.server: Skipping self join check for node since the cluster is too small: node=Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a
>     writer.go:29: 2020-02-23T02:46:50.047Z [INFO]  TestCatalogNodes_MetaFilter.server: member joined, marking health alive: member=Node-15cebc8e-d9a4-0485-34a5-fff14f96d00a
>     writer.go:29: 2020-02-23T02:46:50.069Z [DEBUG] TestCatalogNodes_MetaFilter: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.072Z [INFO]  TestCatalogNodes_MetaFilter: Synced node info
>     writer.go:29: 2020-02-23T02:46:50.380Z [INFO]  TestCatalogNodes_MetaFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.380Z [INFO]  TestCatalogNodes_MetaFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.380Z [DEBUG] TestCatalogNodes_MetaFilter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.380Z [WARN]  TestCatalogNodes_MetaFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.380Z [DEBUG] TestCatalogNodes_MetaFilter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.382Z [WARN]  TestCatalogNodes_MetaFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: consul server down
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: Stopping server: protocol=DNS address=127.0.0.1:17051 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: Stopping server: protocol=DNS address=127.0.0.1:17051 network=udp
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: Stopping server: protocol=HTTP address=127.0.0.1:17052 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.384Z [INFO]  TestCatalogNodes_MetaFilter: Endpoints down
> === CONT  TestBlacklist
> === RUN   TestBlacklist/nothing_blocked_root
> === RUN   TestBlacklist/nothing_blocked_path
> === RUN   TestBlacklist/exact_match_1
> === RUN   TestBlacklist/exact_match_2
> === RUN   TestBlacklist/subpath
> === RUN   TestBlacklist/longer_prefix
> === RUN   TestBlacklist/longer_subpath
> === RUN   TestBlacklist/partial_prefix
> === RUN   TestBlacklist/no_match
> --- PASS: TestBlacklist (0.00s)
>     --- PASS: TestBlacklist/nothing_blocked_root (0.00s)
>     --- PASS: TestBlacklist/nothing_blocked_path (0.00s)
>     --- PASS: TestBlacklist/exact_match_1 (0.00s)
>     --- PASS: TestBlacklist/exact_match_2 (0.00s)
>     --- PASS: TestBlacklist/subpath (0.00s)
>     --- PASS: TestBlacklist/longer_prefix (0.00s)
>     --- PASS: TestBlacklist/longer_subpath (0.00s)
>     --- PASS: TestBlacklist/partial_prefix (0.00s)
>     --- PASS: TestBlacklist/no_match (0.00s)
> === CONT  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521
> --- PASS: TestCatalogDeregister (0.32s)
>     writer.go:29: 2020-02-23T02:46:50.170Z [WARN]  TestCatalogDeregister: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.170Z [DEBUG] TestCatalogDeregister.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.170Z [DEBUG] TestCatalogDeregister.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.182Z [INFO]  TestCatalogDeregister.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4 Address:127.0.0.1:17062}]"
>     writer.go:29: 2020-02-23T02:46:50.182Z [INFO]  TestCatalogDeregister.server.raft: entering follower state: follower="Node at 127.0.0.1:17062 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.182Z [INFO]  TestCatalogDeregister.server.serf.wan: serf: EventMemberJoin: Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.182Z [INFO]  TestCatalogDeregister.server.serf.lan: serf: EventMemberJoin: Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.183Z [INFO]  TestCatalogDeregister: Started DNS server: address=127.0.0.1:17057 network=udp
>     writer.go:29: 2020-02-23T02:46:50.183Z [INFO]  TestCatalogDeregister.server: Adding LAN server: server="Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4 (Addr: tcp/127.0.0.1:17062) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.183Z [INFO]  TestCatalogDeregister.server: Handled event for server in area: event=member-join server=Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.183Z [INFO]  TestCatalogDeregister: Started DNS server: address=127.0.0.1:17057 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.183Z [INFO]  TestCatalogDeregister: Started HTTP server: address=127.0.0.1:17058 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.183Z [INFO]  TestCatalogDeregister: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.228Z [WARN]  TestCatalogDeregister.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.228Z [INFO]  TestCatalogDeregister.server.raft: entering candidate state: node="Node at 127.0.0.1:17062 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.232Z [DEBUG] TestCatalogDeregister.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.232Z [DEBUG] TestCatalogDeregister.server.raft: vote granted: from=5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.232Z [INFO]  TestCatalogDeregister.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.232Z [INFO]  TestCatalogDeregister.server.raft: entering leader state: leader="Node at 127.0.0.1:17062 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.232Z [INFO]  TestCatalogDeregister.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.232Z [INFO]  TestCatalogDeregister.server: New leader elected: payload=Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4
>     writer.go:29: 2020-02-23T02:46:50.239Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.248Z [INFO]  TestCatalogDeregister.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.248Z [INFO]  TestCatalogDeregister.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.248Z [DEBUG] TestCatalogDeregister.server: Skipping self join check for node since the cluster is too small: node=Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4
>     writer.go:29: 2020-02-23T02:46:50.248Z [INFO]  TestCatalogDeregister.server: member joined, marking health alive: member=Node-5be4ec57-e85b-b9ba-ed0d-5e4ea9bf9de4
>     writer.go:29: 2020-02-23T02:46:50.477Z [INFO]  TestCatalogDeregister: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.477Z [INFO]  TestCatalogDeregister.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.477Z [DEBUG] TestCatalogDeregister.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.477Z [WARN]  TestCatalogDeregister.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.478Z [ERROR] TestCatalogDeregister.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:50.478Z [DEBUG] TestCatalogDeregister.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.479Z [WARN]  TestCatalogDeregister.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: consul server down
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: Stopping server: protocol=DNS address=127.0.0.1:17057 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: Stopping server: protocol=DNS address=127.0.0.1:17057 network=udp
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: Stopping server: protocol=HTTP address=127.0.0.1:17058 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.481Z [INFO]  TestCatalogDeregister: Endpoints down
> === CONT  TestACL_AgentMasterToken
> --- PASS: TestACL_AgentMasterToken (0.01s)
>     writer.go:29: 2020-02-23T02:46:50.489Z [WARN]  TestACL_AgentMasterToken: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:50.489Z [WARN]  TestACL_AgentMasterToken: bootstrap = true: do not enable unless necessary
> === CONT  TestAgent_RerouteExistingHTTPChecks
> --- PASS: TestCatalogNodes (0.43s)
>     writer.go:29: 2020-02-23T02:46:50.099Z [WARN]  TestCatalogNodes: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.099Z [DEBUG] TestCatalogNodes.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.100Z [DEBUG] TestCatalogNodes.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.111Z [INFO]  TestCatalogNodes.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:922bebb9-b633-9b61-18da-8e46a46d32c3 Address:127.0.0.1:17038}]"
>     writer.go:29: 2020-02-23T02:46:50.111Z [INFO]  TestCatalogNodes.server.serf.wan: serf: EventMemberJoin: Node-922bebb9-b633-9b61-18da-8e46a46d32c3.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.112Z [INFO]  TestCatalogNodes.server.serf.lan: serf: EventMemberJoin: Node-922bebb9-b633-9b61-18da-8e46a46d32c3 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.112Z [INFO]  TestCatalogNodes: Started DNS server: address=127.0.0.1:17033 network=udp
>     writer.go:29: 2020-02-23T02:46:50.112Z [INFO]  TestCatalogNodes.server.raft: entering follower state: follower="Node at 127.0.0.1:17038 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.112Z [INFO]  TestCatalogNodes.server: Adding LAN server: server="Node-922bebb9-b633-9b61-18da-8e46a46d32c3 (Addr: tcp/127.0.0.1:17038) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.112Z [INFO]  TestCatalogNodes.server: Handled event for server in area: event=member-join server=Node-922bebb9-b633-9b61-18da-8e46a46d32c3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.112Z [INFO]  TestCatalogNodes: Started DNS server: address=127.0.0.1:17033 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.122Z [INFO]  TestCatalogNodes: Started HTTP server: address=127.0.0.1:17034 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.122Z [INFO]  TestCatalogNodes: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.164Z [WARN]  TestCatalogNodes.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.164Z [INFO]  TestCatalogNodes.server.raft: entering candidate state: node="Node at 127.0.0.1:17038 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.167Z [DEBUG] TestCatalogNodes.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.167Z [DEBUG] TestCatalogNodes.server.raft: vote granted: from=922bebb9-b633-9b61-18da-8e46a46d32c3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.167Z [INFO]  TestCatalogNodes.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.167Z [INFO]  TestCatalogNodes.server.raft: entering leader state: leader="Node at 127.0.0.1:17038 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.168Z [INFO]  TestCatalogNodes.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.171Z [INFO]  TestCatalogNodes.server: New leader elected: payload=Node-922bebb9-b633-9b61-18da-8e46a46d32c3
>     writer.go:29: 2020-02-23T02:46:50.174Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.185Z [INFO]  TestCatalogNodes.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.185Z [INFO]  TestCatalogNodes.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.185Z [DEBUG] TestCatalogNodes.server: Skipping self join check for node since the cluster is too small: node=Node-922bebb9-b633-9b61-18da-8e46a46d32c3
>     writer.go:29: 2020-02-23T02:46:50.185Z [INFO]  TestCatalogNodes.server: member joined, marking health alive: member=Node-922bebb9-b633-9b61-18da-8e46a46d32c3
>     writer.go:29: 2020-02-23T02:46:50.292Z [DEBUG] TestCatalogNodes: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.296Z [INFO]  TestCatalogNodes: Synced node info
>     writer.go:29: 2020-02-23T02:46:50.505Z [INFO]  TestCatalogNodes: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.505Z [INFO]  TestCatalogNodes.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.505Z [DEBUG] TestCatalogNodes.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.505Z [WARN]  TestCatalogNodes.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.505Z [DEBUG] TestCatalogNodes.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.507Z [WARN]  TestCatalogNodes.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: consul server down
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: Stopping server: protocol=DNS address=127.0.0.1:17033 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: Stopping server: protocol=DNS address=127.0.0.1:17033 network=udp
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: Stopping server: protocol=HTTP address=127.0.0.1:17034 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.509Z [INFO]  TestCatalogNodes: Endpoints down
> === CONT  TestAgent_consulConfig_RaftTrailingLogs
> --- PASS: TestAgent_RerouteExistingHTTPChecks (0.17s)
>     writer.go:29: 2020-02-23T02:46:50.505Z [WARN]  TestAgent_RerouteExistingHTTPChecks: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.505Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.506Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.528Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4 Address:127.0.0.1:17098}]"
>     writer.go:29: 2020-02-23T02:46:50.529Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.raft: entering follower state: follower="Node at 127.0.0.1:17098 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.529Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.serf.wan: serf: EventMemberJoin: Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.529Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.serf.lan: serf: EventMemberJoin: Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.530Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Started DNS server: address=127.0.0.1:17093 network=udp
>     writer.go:29: 2020-02-23T02:46:50.530Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server: Adding LAN server: server="Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4 (Addr: tcp/127.0.0.1:17098) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.530Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server: Handled event for server in area: event=member-join server=Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.530Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Started DNS server: address=127.0.0.1:17093 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.530Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Started HTTP server: address=127.0.0.1:17094 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.530Z [INFO]  TestAgent_RerouteExistingHTTPChecks: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.586Z [WARN]  TestAgent_RerouteExistingHTTPChecks.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.586Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.raft: entering candidate state: node="Node at 127.0.0.1:17098 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.589Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.589Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.server.raft: vote granted: from=8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.589Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.589Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.raft: entering leader state: leader="Node at 127.0.0.1:17098 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.589Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.589Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server: New leader elected: payload=Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4
>     writer.go:29: 2020-02-23T02:46:50.597Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.605Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.606Z [INFO]  TestAgent_RerouteExistingHTTPChecks.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.606Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.server: Skipping self join check for node since the cluster is too small: node=Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4
>     writer.go:29: 2020-02-23T02:46:50.606Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server: member joined, marking health alive: member=Node-8e8b0d3d-8cf7-53c8-b777-d694bcd3b0e4
>     writer.go:29: 2020-02-23T02:46:50.654Z [WARN]  TestAgent_RerouteExistingHTTPChecks: check has interval below minimum: check=http minimum_interval=1s
>     writer.go:29: 2020-02-23T02:46:50.654Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.tlsutil: OutgoingTLSConfigForCheck: version=1
>     writer.go:29: 2020-02-23T02:46:50.654Z [WARN]  TestAgent_RerouteExistingHTTPChecks: check has interval below minimum: check=grpc minimum_interval=1s
>     writer.go:29: 2020-02-23T02:46:50.654Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.654Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.654Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.654Z [WARN]  TestAgent_RerouteExistingHTTPChecks.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.655Z [ERROR] TestAgent_RerouteExistingHTTPChecks.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:50.655Z [DEBUG] TestAgent_RerouteExistingHTTPChecks.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.657Z [WARN]  TestAgent_RerouteExistingHTTPChecks.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.659Z [INFO]  TestAgent_RerouteExistingHTTPChecks.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.659Z [INFO]  TestAgent_RerouteExistingHTTPChecks: consul server down
>     writer.go:29: 2020-02-23T02:46:50.659Z [INFO]  TestAgent_RerouteExistingHTTPChecks: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.659Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Stopping server: protocol=DNS address=127.0.0.1:17093 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.660Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Stopping server: protocol=DNS address=127.0.0.1:17093 network=udp
>     writer.go:29: 2020-02-23T02:46:50.660Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Stopping server: protocol=HTTP address=127.0.0.1:17094 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.660Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.660Z [INFO]  TestAgent_RerouteExistingHTTPChecks: Endpoints down
> === CONT  TestAgent_consulConfig_AutoEncryptAllowTLS
> === RUN   TestCatalogRegister_Service_InvalidAddress/addr_0.0.0.0
> === RUN   TestCatalogRegister_Service_InvalidAddress/addr_::
> === RUN   TestCatalogRegister_Service_InvalidAddress/addr_[::]
> --- PASS: TestCatalogRegister_Service_InvalidAddress (0.49s)
>     writer.go:29: 2020-02-23T02:46:50.296Z [WARN]  TestCatalogRegister_Service_InvalidAddress: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.296Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.297Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.307Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:94642aff-db12-177b-988b-66899a65e1c8 Address:127.0.0.1:17074}]"
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.serf.wan: serf: EventMemberJoin: Node-94642aff-db12-177b-988b-66899a65e1c8.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.serf.lan: serf: EventMemberJoin: Node-94642aff-db12-177b-988b-66899a65e1c8 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Started DNS server: address=127.0.0.1:17069 network=udp
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.raft: entering follower state: follower="Node at 127.0.0.1:17074 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server: Adding LAN server: server="Node-94642aff-db12-177b-988b-66899a65e1c8 (Addr: tcp/127.0.0.1:17074) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server: Handled event for server in area: event=member-join server=Node-94642aff-db12-177b-988b-66899a65e1c8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.308Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Started DNS server: address=127.0.0.1:17069 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.309Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Started HTTP server: address=127.0.0.1:17070 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.309Z [INFO]  TestCatalogRegister_Service_InvalidAddress: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.377Z [WARN]  TestCatalogRegister_Service_InvalidAddress.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.377Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.raft: entering candidate state: node="Node at 127.0.0.1:17074 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.381Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.381Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.server.raft: vote granted: from=94642aff-db12-177b-988b-66899a65e1c8 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.381Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.381Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.raft: entering leader state: leader="Node at 127.0.0.1:17074 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.381Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.381Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server: New leader elected: payload=Node-94642aff-db12-177b-988b-66899a65e1c8
>     writer.go:29: 2020-02-23T02:46:50.396Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.405Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.406Z [INFO]  TestCatalogRegister_Service_InvalidAddress.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.406Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.server: Skipping self join check for node since the cluster is too small: node=Node-94642aff-db12-177b-988b-66899a65e1c8
>     writer.go:29: 2020-02-23T02:46:50.406Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server: member joined, marking health alive: member=Node-94642aff-db12-177b-988b-66899a65e1c8
>     writer.go:29: 2020-02-23T02:46:50.448Z [DEBUG] TestCatalogRegister_Service_InvalidAddress: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.451Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Synced node info
>     writer.go:29: 2020-02-23T02:46:50.517Z [DEBUG] TestCatalogRegister_Service_InvalidAddress: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.517Z [DEBUG] TestCatalogRegister_Service_InvalidAddress: Node info in sync
>     writer.go:29: 2020-02-23T02:46:50.518Z [DEBUG] TestCatalogRegister_Service_InvalidAddress: Node info in sync
>     --- PASS: TestCatalogRegister_Service_InvalidAddress/addr_0.0.0.0 (0.00s)
>     --- PASS: TestCatalogRegister_Service_InvalidAddress/addr_:: (0.00s)
>     --- PASS: TestCatalogRegister_Service_InvalidAddress/addr_[::] (0.00s)
>     writer.go:29: 2020-02-23T02:46:50.711Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.711Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.711Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.711Z [WARN]  TestCatalogRegister_Service_InvalidAddress.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.711Z [DEBUG] TestCatalogRegister_Service_InvalidAddress.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.751Z [WARN]  TestCatalogRegister_Service_InvalidAddress.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.776Z [INFO]  TestCatalogRegister_Service_InvalidAddress.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: consul server down
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Stopping server: protocol=DNS address=127.0.0.1:17069 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Stopping server: protocol=DNS address=127.0.0.1:17069 network=udp
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Stopping server: protocol=HTTP address=127.0.0.1:17070 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.777Z [INFO]  TestCatalogRegister_Service_InvalidAddress: Endpoints down
> === CONT  TestAgent_ReloadConfigTLSConfigFailure
> --- PASS: TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521 (0.58s)
>     writer.go:29: 2020-02-23T02:46:50.392Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.392Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.393Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.406Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9ce4d4b6-2c8f-50a3-a6de-25f60c63053c Address:127.0.0.1:17068}]"
>     writer.go:29: 2020-02-23T02:46:50.406Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.serf.wan: serf: EventMemberJoin: Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.407Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.serf.lan: serf: EventMemberJoin: Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.407Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Started DNS server: address=127.0.0.1:17063 network=udp
>     writer.go:29: 2020-02-23T02:46:50.407Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: entering follower state: follower="Node at 127.0.0.1:17068 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.407Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: Adding LAN server: server="Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c (Addr: tcp/127.0.0.1:17068) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.407Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: Handled event for server in area: event=member-join server=Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.407Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Started DNS server: address=127.0.0.1:17063 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.408Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Started HTTP server: address=127.0.0.1:17064 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.408Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.466Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.466Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: entering candidate state: node="Node at 127.0.0.1:17068 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.473Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.473Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: vote granted: from=9ce4d4b6-2c8f-50a3-a6de-25f60c63053c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.473Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.473Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.raft: entering leader state: leader="Node at 127.0.0.1:17068 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.473Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.473Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: New leader elected: payload=Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c
>     writer.go:29: 2020-02-23T02:46:50.486Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.494Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.494Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.494Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: Skipping self join check for node since the cluster is too small: node=Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c
>     writer.go:29: 2020-02-23T02:46:50.494Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: member joined, marking health alive: member=Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c
>     writer.go:29: 2020-02-23T02:46:50.501Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.505Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Synced node info
>     writer.go:29: 2020-02-23T02:46:50.505Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Node info in sync
>     writer.go:29: 2020-02-23T02:46:50.839Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.840Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.serf.lan: serf: EventMemberJoin: Node-4bb84b38-b99c-469a-259c-437b530dfc37 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.841Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     [... previous message repeated 55 more times through 02:46:50.843Z ...]
>     writer.go:29: 2020-02-23T02:46:50.843Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.843Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.843Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     [... previous message repeated 11 more times at 02:46:50.843Z ...]
>     writer.go:29: 2020-02-23T02:46:50.843Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.843Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.843Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=discovery-chain:echo error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.843Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     [... previous message repeated 15 more times through 02:46:50.844Z ...]
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     [... previous message repeated 11 more times at 02:46:50.844Z ...]
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=discovery-chain:echo error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.844Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     [... previous message repeated 15 more times through 02:46:50.845Z ...]
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=discovery-chain:echo error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=roots error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=leaf error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=intentions error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.proxycfg: watch error: id=discovery-chain:echo error="error filling agent cache: No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.845Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Started DNS server: address=127.0.0.1:17081 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.845Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Started DNS server: address=127.0.0.1:17081 network=udp
>     writer.go:29: 2020-02-23T02:46:50.846Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Started HTTP server: address=127.0.0.1:17082 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.846Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [ERROR] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     agent_test.go:4077: joining client to server
>     writer.go:29: 2020-02-23T02:46:50.847Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: (LAN) joining: lan_addresses=[127.0.0.1:17084]
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.847Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:50.848Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.memberlist.lan: memberlist: Stream connection from=127.0.0.1:44396
>     writer.go:29: 2020-02-23T02:46:50.848Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:17084
>     writer.go:29: 2020-02-23T02:46:50.848Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.serf.lan: serf: EventMemberJoin: Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.848Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client: adding server: server="Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c (Addr: tcp/127.0.0.1:17068) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.848Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client: New leader elected: payload=Node-9ce4d4b6-2c8f-50a3-a6de-25f60c63053c
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.serf.lan: serf: EventMemberJoin: Node-4bb84b38-b99c-469a-259c-437b530dfc37 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:50.849Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: systemd notify failed: error="No socket"
>     agent_test.go:4085: joined client to server
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:50.849Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: member joined, marking health alive: member=Node-4bb84b38-b99c-469a-259c-437b530dfc37
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: consul client down
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Stopping server: protocol=DNS address=127.0.0.1:17081 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Stopping server: protocol=DNS address=127.0.0.1:17081 network=udp
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Stopping server: protocol=HTTP address=127.0.0.1:17082 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a2: Endpoints down
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.851Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.851Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.851Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.851Z [DEBUG] TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.920Z [WARN]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.963Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: consul server down
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Stopping server: protocol=DNS address=127.0.0.1:17063 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Stopping server: protocol=DNS address=127.0.0.1:17063 network=udp
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Stopping server: protocol=HTTP address=127.0.0.1:17064 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.964Z [INFO]  TestAgentCache_serviceInConfigFile_initialFetchErrors_Issue6521-a1: Endpoints down
> === CONT  TestAgent_ReloadConfigIncomingRPCConfig
> --- PASS: TestAgent_consulConfig_RaftTrailingLogs (0.47s)
>     writer.go:29: 2020-02-23T02:46:50.517Z [WARN]  TestAgent_consulConfig_RaftTrailingLogs: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.517Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.518Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.533Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:58d02076-fdca-1d81-b2b9-8c400943d453 Address:127.0.0.1:17110}]"
>     writer.go:29: 2020-02-23T02:46:50.533Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.serf.wan: serf: EventMemberJoin: Node-58d02076-fdca-1d81-b2b9-8c400943d453.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.533Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.serf.lan: serf: EventMemberJoin: Node-58d02076-fdca-1d81-b2b9-8c400943d453 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.533Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Started DNS server: address=127.0.0.1:17105 network=udp
>     writer.go:29: 2020-02-23T02:46:50.534Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.raft: entering follower state: follower="Node at 127.0.0.1:17110 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.534Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server: Adding LAN server: server="Node-58d02076-fdca-1d81-b2b9-8c400943d453 (Addr: tcp/127.0.0.1:17110) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.534Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server: Handled event for server in area: event=member-join server=Node-58d02076-fdca-1d81-b2b9-8c400943d453.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.534Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Started DNS server: address=127.0.0.1:17105 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.534Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Started HTTP server: address=127.0.0.1:17106 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.534Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.592Z [WARN]  TestAgent_consulConfig_RaftTrailingLogs.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.592Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.raft: entering candidate state: node="Node at 127.0.0.1:17110 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.596Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.597Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.server.raft: vote granted: from=58d02076-fdca-1d81-b2b9-8c400943d453 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.597Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.597Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.raft: entering leader state: leader="Node at 127.0.0.1:17110 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.597Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.597Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server: New leader elected: payload=Node-58d02076-fdca-1d81-b2b9-8c400943d453
>     writer.go:29: 2020-02-23T02:46:50.605Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.613Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.614Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.614Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.server: Skipping self join check for node since the cluster is too small: node=Node-58d02076-fdca-1d81-b2b9-8c400943d453
>     writer.go:29: 2020-02-23T02:46:50.614Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server: member joined, marking health alive: member=Node-58d02076-fdca-1d81-b2b9-8c400943d453
>     writer.go:29: 2020-02-23T02:46:50.735Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:50.779Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Synced node info
>     writer.go:29: 2020-02-23T02:46:50.970Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:50.970Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:50.970Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.970Z [WARN]  TestAgent_consulConfig_RaftTrailingLogs.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.970Z [DEBUG] TestAgent_consulConfig_RaftTrailingLogs.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.972Z [WARN]  TestAgent_consulConfig_RaftTrailingLogs.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: consul server down
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: shutdown complete
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Stopping server: protocol=DNS address=127.0.0.1:17105 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Stopping server: protocol=DNS address=127.0.0.1:17105 network=udp
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Stopping server: protocol=HTTP address=127.0.0.1:17106 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:50.974Z [INFO]  TestAgent_consulConfig_RaftTrailingLogs: Endpoints down
> === CONT  TestAgent_ReloadConfigOutgoingRPCConfig
> --- PASS: TestAgent_consulConfig_AutoEncryptAllowTLS (0.44s)
>     writer.go:29: 2020-02-23T02:46:50.668Z [WARN]  TestAgent_consulConfig_AutoEncryptAllowTLS: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.669Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.669Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.686Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8a352496-0d49-a883-fa61-72be8a786929 Address:127.0.0.1:17104}]"
>     writer.go:29: 2020-02-23T02:46:50.686Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: entering follower state: follower="Node at 127.0.0.1:17104 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.686Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.tlsutil: UpdateAutoEncryptCA: version=2
>     writer.go:29: 2020-02-23T02:46:50.687Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.serf.wan: serf: EventMemberJoin: Node-8a352496-0d49-a883-fa61-72be8a786929.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.687Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.serf.lan: serf: EventMemberJoin: Node-8a352496-0d49-a883-fa61-72be8a786929 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.687Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server: Handled event for server in area: event=member-join server=Node-8a352496-0d49-a883-fa61-72be8a786929.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.687Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server: Adding LAN server: server="Node-8a352496-0d49-a883-fa61-72be8a786929 (Addr: tcp/127.0.0.1:17104) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.688Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Started DNS server: address=127.0.0.1:17099 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.688Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Started DNS server: address=127.0.0.1:17099 network=udp
>     writer.go:29: 2020-02-23T02:46:50.688Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Started HTTP server: address=127.0.0.1:17100 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.688Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.726Z [WARN]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.726Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: entering candidate state: node="Node at 127.0.0.1:17104 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.779Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.779Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: vote granted: from=8a352496-0d49-a883-fa61-72be8a786929 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.779Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.779Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.raft: entering leader state: leader="Node at 127.0.0.1:17104 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.783Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.783Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server: New leader elected: payload=Node-8a352496-0d49-a883-fa61-72be8a786929
>     writer.go:29: 2020-02-23T02:46:50.790Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.800Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.800Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.800Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.server: Skipping self join check for node since the cluster is too small: node=Node-8a352496-0d49-a883-fa61-72be8a786929
>     writer.go:29: 2020-02-23T02:46:50.800Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server: member joined, marking health alive: member=Node-8a352496-0d49-a883-fa61-72be8a786929
>     writer.go:29: 2020-02-23T02:46:50.800Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.tlsutil: UpdateAutoEncryptCA: version=3
>     writer.go:29: 2020-02-23T02:46:50.805Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Synced node info
>     writer.go:29: 2020-02-23T02:46:51.096Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.097Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.097Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.097Z [WARN]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.097Z [DEBUG] TestAgent_consulConfig_AutoEncryptAllowTLS.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.098Z [WARN]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: consul server down
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Stopping server: protocol=DNS address=127.0.0.1:17099 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Stopping server: protocol=DNS address=127.0.0.1:17099 network=udp
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Stopping server: protocol=HTTP address=127.0.0.1:17100 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_consulConfig_AutoEncryptAllowTLS: Endpoints down
> === CONT  TestAgent_loadTokens
> --- PASS: TestAgent_ReloadConfigIncomingRPCConfig (0.20s)
>     writer.go:29: 2020-02-23T02:46:50.971Z [WARN]  TestAgent_ReloadConfigIncomingRPCConfig: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.971Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.972Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.992Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:866f3b9b-55ec-23d6-085d-c7358be393b9 Address:127.0.0.1:17116}]"
>     writer.go:29: 2020-02-23T02:46:50.996Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.serf.wan: serf: EventMemberJoin: Node-866f3b9b-55ec-23d6-085d-c7358be393b9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.998Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.serf.lan: serf: EventMemberJoin: Node-866f3b9b-55ec-23d6-085d-c7358be393b9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.001Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.raft: entering follower state: follower="Node at 127.0.0.1:17116 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.002Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server: Adding LAN server: server="Node-866f3b9b-55ec-23d6-085d-c7358be393b9 (Addr: tcp/127.0.0.1:17116) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.004Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server: Handled event for server in area: event=member-join server=Node-866f3b9b-55ec-23d6-085d-c7358be393b9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.004Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Started DNS server: address=127.0.0.1:17111 network=udp
>     writer.go:29: 2020-02-23T02:46:51.004Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Started DNS server: address=127.0.0.1:17111 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.007Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Started HTTP server: address=127.0.0.1:17112 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.007Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.059Z [WARN]  TestAgent_ReloadConfigIncomingRPCConfig.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.059Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.raft: entering candidate state: node="Node at 127.0.0.1:17116 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.063Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.063Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.server.raft: vote granted: from=866f3b9b-55ec-23d6-085d-c7358be393b9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.063Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.063Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.raft: entering leader state: leader="Node at 127.0.0.1:17116 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.063Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.063Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server: New leader elected: payload=Node-866f3b9b-55ec-23d6-085d-c7358be393b9
>     writer.go:29: 2020-02-23T02:46:51.070Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.078Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.078Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.078Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.server: Skipping self join check for node since the cluster is too small: node=Node-866f3b9b-55ec-23d6-085d-c7358be393b9
>     writer.go:29: 2020-02-23T02:46:51.078Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server: member joined, marking health alive: member=Node-866f3b9b-55ec-23d6-085d-c7358be393b9
>     writer.go:29: 2020-02-23T02:46:51.152Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.tlsutil: IncomingRPCConfig: version=1
>     writer.go:29: 2020-02-23T02:46:51.153Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.tlsutil: IncomingRPCConfig: version=1
>     writer.go:29: 2020-02-23T02:46:51.160Z [WARN]  TestAgent_ReloadConfigIncomingRPCConfig: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.160Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.tlsutil: Update: version=2
>     writer.go:29: 2020-02-23T02:46:51.160Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.tlsutil: IncomingRPCConfig: version=2
>     writer.go:29: 2020-02-23T02:46:51.160Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.160Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.160Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.160Z [WARN]  TestAgent_ReloadConfigIncomingRPCConfig.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.160Z [ERROR] TestAgent_ReloadConfigIncomingRPCConfig.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.161Z [DEBUG] TestAgent_ReloadConfigIncomingRPCConfig.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.164Z [WARN]  TestAgent_ReloadConfigIncomingRPCConfig.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: consul server down
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Stopping server: protocol=DNS address=127.0.0.1:17111 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Stopping server: protocol=DNS address=127.0.0.1:17111 network=udp
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Stopping server: protocol=HTTP address=127.0.0.1:17112 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.166Z [INFO]  TestAgent_ReloadConfigIncomingRPCConfig: Endpoints down
> === CONT  TestAgent_reloadWatchesHTTPS
> 2020-02-23T02:46:51.183Z [WARN]  TestAgent_reloadWatchesHTTPS: bootstrap = true: do not enable unless necessary
> 2020-02-23T02:46:51.184Z [DEBUG] TestAgent_reloadWatchesHTTPS.tlsutil: Update: version=1
> 2020-02-23T02:46:51.184Z [DEBUG] TestAgent_reloadWatchesHTTPS.tlsutil: OutgoingRPCWrapper: version=1
> 2020-02-23T02:46:51.194Z [INFO]  TestAgent_reloadWatchesHTTPS.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:afbae87f-cd48-b04e-699a-f315877a19d9 Address:127.0.0.1:17146}]"
> 2020-02-23T02:46:51.194Z [INFO]  TestAgent_reloadWatchesHTTPS.server.raft: entering follower state: follower="Node at 127.0.0.1:17146 [Follower]" leader=
> 2020-02-23T02:46:51.194Z [INFO]  TestAgent_reloadWatchesHTTPS.server.serf.wan: serf: EventMemberJoin: Node-afbae87f-cd48-b04e-699a-f315877a19d9.dc1 127.0.0.1
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS.server.serf.lan: serf: EventMemberJoin: Node-afbae87f-cd48-b04e-699a-f315877a19d9 127.0.0.1
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS.server: Adding LAN server: server="Node-afbae87f-cd48-b04e-699a-f315877a19d9 (Addr: tcp/127.0.0.1:17146) (DC: dc1)"
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS: Started DNS server: address=127.0.0.1:17141 network=udp
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS.server: Handled event for server in area: event=member-join server=Node-afbae87f-cd48-b04e-699a-f315877a19d9.dc1 area=wan
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS: Started DNS server: address=127.0.0.1:17141 network=tcp
> 2020-02-23T02:46:51.195Z [DEBUG] TestAgent_reloadWatchesHTTPS.tlsutil: IncomingHTTPSConfig: version=1
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS: Started HTTPS server: address=127.0.0.1:17143 network=tcp
> 2020-02-23T02:46:51.195Z [INFO]  TestAgent_reloadWatchesHTTPS: started state syncer
> --- PASS: TestAgent_ReloadConfigTLSConfigFailure (0.43s)
>     writer.go:29: 2020-02-23T02:46:50.786Z [WARN]  TestAgent_ReloadConfigTLSConfigFailure: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:50.787Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:50.787Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:50.800Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:64237070-7c03-4604-42db-806c987781e6 Address:127.0.0.1:17092}]"
>     writer.go:29: 2020-02-23T02:46:50.801Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.raft: entering follower state: follower="Node at 127.0.0.1:17092 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:50.802Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.serf.wan: serf: EventMemberJoin: Node-64237070-7c03-4604-42db-806c987781e6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.802Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.serf.lan: serf: EventMemberJoin: Node-64237070-7c03-4604-42db-806c987781e6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:50.803Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server: Handled event for server in area: event=member-join server=Node-64237070-7c03-4604-42db-806c987781e6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:50.803Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server: Adding LAN server: server="Node-64237070-7c03-4604-42db-806c987781e6 (Addr: tcp/127.0.0.1:17092) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:50.803Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Started DNS server: address=127.0.0.1:17087 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.803Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Started DNS server: address=127.0.0.1:17087 network=udp
>     writer.go:29: 2020-02-23T02:46:50.804Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Started HTTP server: address=127.0.0.1:17088 network=tcp
>     writer.go:29: 2020-02-23T02:46:50.804Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: started state syncer
>     writer.go:29: 2020-02-23T02:46:50.845Z [WARN]  TestAgent_ReloadConfigTLSConfigFailure.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:50.845Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.raft: entering candidate state: node="Node at 127.0.0.1:17092 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:50.849Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:50.849Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.server.raft: vote granted: from=64237070-7c03-4604-42db-806c987781e6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.raft: entering leader state: leader="Node at 127.0.0.1:17092 [Leader]"
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:50.849Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server: New leader elected: payload=Node-64237070-7c03-4604-42db-806c987781e6
>     writer.go:29: 2020-02-23T02:46:50.967Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:50.986Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:50.986Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:50.986Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.server: Skipping self join check for node since the cluster is too small: node=Node-64237070-7c03-4604-42db-806c987781e6
>     writer.go:29: 2020-02-23T02:46:50.987Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server: member joined, marking health alive: member=Node-64237070-7c03-4604-42db-806c987781e6
>     writer.go:29: 2020-02-23T02:46:51.050Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:51.053Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Synced node info
>     writer.go:29: 2020-02-23T02:46:51.197Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.tlsutil: IncomingRPCConfig: version=1
>     writer.go:29: 2020-02-23T02:46:51.204Z [WARN]  TestAgent_ReloadConfigTLSConfigFailure: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.204Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.204Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.tlsutil: IncomingRPCConfig: version=1
>     writer.go:29: 2020-02-23T02:46:51.204Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.204Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.204Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.204Z [WARN]  TestAgent_ReloadConfigTLSConfigFailure.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.204Z [DEBUG] TestAgent_ReloadConfigTLSConfigFailure.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.207Z [WARN]  TestAgent_ReloadConfigTLSConfigFailure.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.209Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.209Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: consul server down
>     writer.go:29: 2020-02-23T02:46:51.209Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.210Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Stopping server: protocol=DNS address=127.0.0.1:17087 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.210Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Stopping server: protocol=DNS address=127.0.0.1:17087 network=udp
>     writer.go:29: 2020-02-23T02:46:51.210Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Stopping server: protocol=HTTP address=127.0.0.1:17088 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.210Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.210Z [INFO]  TestAgent_ReloadConfigTLSConfigFailure: Endpoints down
> === CONT  TestAgent_reloadWatches
> 2020-02-23T02:46:51.263Z [WARN]  TestAgent_reloadWatchesHTTPS.server.raft: heartbeat timeout reached, starting election: last-leader=
> 2020-02-23T02:46:51.263Z [INFO]  TestAgent_reloadWatchesHTTPS.server.raft: entering candidate state: node="Node at 127.0.0.1:17146 [Candidate]" term=2
> === RUN   TestAgent_loadTokens/original-configuration
> === RUN   TestAgent_loadTokens/updated-configuration
> === RUN   TestAgent_loadTokens/persisted-tokens
> === RUN   TestAgent_loadTokens/persisted-tokens-override
> === RUN   TestAgent_loadTokens/partial-persisted
> 2020-02-23T02:46:51.349Z [DEBUG] TestAgent_reloadWatchesHTTPS.server.raft: votes: needed=1
> 2020-02-23T02:46:51.349Z [DEBUG] TestAgent_reloadWatchesHTTPS.server.raft: vote granted: from=afbae87f-cd48-b04e-699a-f315877a19d9 term=2 tally=1
> 2020-02-23T02:46:51.349Z [INFO]  TestAgent_reloadWatchesHTTPS.server.raft: election won: tally=1
> 2020-02-23T02:46:51.349Z [INFO]  TestAgent_reloadWatchesHTTPS.server.raft: entering leader state: leader="Node at 127.0.0.1:17146 [Leader]"
> 2020-02-23T02:46:51.350Z [INFO]  TestAgent_reloadWatchesHTTPS.server: cluster leadership acquired
> 2020-02-23T02:46:51.350Z [INFO]  TestAgent_reloadWatchesHTTPS.server: New leader elected: payload=Node-afbae87f-cd48-b04e-699a-f315877a19d9
> --- PASS: TestAgent_ReloadConfigOutgoingRPCConfig (0.38s)
>     writer.go:29: 2020-02-23T02:46:51.025Z [WARN]  TestAgent_ReloadConfigOutgoingRPCConfig: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.026Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.026Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.035Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:44c35d3a-2eb7-ca8f-acda-2edb17731c17 Address:127.0.0.1:17122}]"
>     writer.go:29: 2020-02-23T02:46:51.035Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: entering follower state: follower="Node at 127.0.0.1:17122 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.035Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.serf.wan: serf: EventMemberJoin: Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.serf.lan: serf: EventMemberJoin: Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Started DNS server: address=127.0.0.1:17117 network=udp
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server: Adding LAN server: server="Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17 (Addr: tcp/127.0.0.1:17122) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server: Handled event for server in area: event=member-join server=Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Started DNS server: address=127.0.0.1:17117 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Started HTTP server: address=127.0.0.1:17118 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.036Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.081Z [WARN]  TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.082Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: entering candidate state: node="Node at 127.0.0.1:17122 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.085Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.085Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: vote granted: from=44c35d3a-2eb7-ca8f-acda-2edb17731c17 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.085Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.085Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.raft: entering leader state: leader="Node at 127.0.0.1:17122 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.085Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.085Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server: New leader elected: payload=Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17
>     writer.go:29: 2020-02-23T02:46:51.092Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.100Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.server: Skipping self join check for node since the cluster is too small: node=Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17
>     writer.go:29: 2020-02-23T02:46:51.100Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server: member joined, marking health alive: member=Node-44c35d3a-2eb7-ca8f-acda-2edb17731c17
>     writer.go:29: 2020-02-23T02:46:51.134Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:51.136Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Synced node info
>     writer.go:29: 2020-02-23T02:46:51.303Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.tlsutil: OutgoingRPCConfig: version=1
>     writer.go:29: 2020-02-23T02:46:51.310Z [WARN]  TestAgent_ReloadConfigOutgoingRPCConfig: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.311Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.tlsutil: Update: version=2
>     writer.go:29: 2020-02-23T02:46:51.311Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.tlsutil: OutgoingRPCConfig: version=2
>     writer.go:29: 2020-02-23T02:46:51.311Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.311Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.311Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.311Z [WARN]  TestAgent_ReloadConfigOutgoingRPCConfig.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.311Z [DEBUG] TestAgent_ReloadConfigOutgoingRPCConfig.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.332Z [WARN]  TestAgent_ReloadConfigOutgoingRPCConfig.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.349Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: consul server down
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Stopping server: protocol=DNS address=127.0.0.1:17117 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Stopping server: protocol=DNS address=127.0.0.1:17117 network=udp
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Stopping server: protocol=HTTP address=127.0.0.1:17118 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.350Z [INFO]  TestAgent_ReloadConfigOutgoingRPCConfig: Endpoints down
> === CONT  TestAgent_GetCoordinate
> === RUN   TestAgent_loadTokens/persistence-error-not-json
> === RUN   TestAgent_loadTokens/persistence-error-wrong-top-level
> --- PASS: TestAgent_loadTokens (0.32s)
>     writer.go:29: 2020-02-23T02:46:51.108Z [WARN]  TestAgent_loadTokens: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.108Z [DEBUG] TestAgent_loadTokens.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.108Z [DEBUG] TestAgent_loadTokens.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.123Z [INFO]  TestAgent_loadTokens.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f496c99d-1c05-65e6-b158-9f8fbc18e099 Address:127.0.0.1:17140}]"
>     writer.go:29: 2020-02-23T02:46:51.123Z [INFO]  TestAgent_loadTokens.server.serf.wan: serf: EventMemberJoin: Node-f496c99d-1c05-65e6-b158-9f8fbc18e099.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.124Z [INFO]  TestAgent_loadTokens.server.serf.lan: serf: EventMemberJoin: Node-f496c99d-1c05-65e6-b158-9f8fbc18e099 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.124Z [INFO]  TestAgent_loadTokens: Started DNS server: address=127.0.0.1:17135 network=udp
>     writer.go:29: 2020-02-23T02:46:51.124Z [INFO]  TestAgent_loadTokens.server.raft: entering follower state: follower="Node at 127.0.0.1:17140 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.124Z [INFO]  TestAgent_loadTokens.server: Adding LAN server: server="Node-f496c99d-1c05-65e6-b158-9f8fbc18e099 (Addr: tcp/127.0.0.1:17140) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.124Z [INFO]  TestAgent_loadTokens.server: Handled event for server in area: event=member-join server=Node-f496c99d-1c05-65e6-b158-9f8fbc18e099.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.124Z [INFO]  TestAgent_loadTokens: Started DNS server: address=127.0.0.1:17135 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.125Z [INFO]  TestAgent_loadTokens: Started HTTP server: address=127.0.0.1:17136 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.125Z [INFO]  TestAgent_loadTokens: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.183Z [WARN]  TestAgent_loadTokens.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.184Z [INFO]  TestAgent_loadTokens.server.raft: entering candidate state: node="Node at 127.0.0.1:17140 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.188Z [DEBUG] TestAgent_loadTokens.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.188Z [DEBUG] TestAgent_loadTokens.server.raft: vote granted: from=f496c99d-1c05-65e6-b158-9f8fbc18e099 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.188Z [INFO]  TestAgent_loadTokens.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.188Z [INFO]  TestAgent_loadTokens.server.raft: entering leader state: leader="Node at 127.0.0.1:17140 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.188Z [INFO]  TestAgent_loadTokens.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.188Z [INFO]  TestAgent_loadTokens.server: New leader elected: payload=Node-f496c99d-1c05-65e6-b158-9f8fbc18e099
>     writer.go:29: 2020-02-23T02:46:51.191Z [INFO]  TestAgent_loadTokens.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:51.192Z [INFO]  TestAgent_loadTokens.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:51.196Z [INFO]  TestAgent_loadTokens.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:51.196Z [INFO]  TestAgent_loadTokens.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:51.196Z [INFO]  TestAgent_loadTokens.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:51.196Z [INFO]  TestAgent_loadTokens.server.serf.lan: serf: EventMemberUpdate: Node-f496c99d-1c05-65e6-b158-9f8fbc18e099
>     writer.go:29: 2020-02-23T02:46:51.197Z [INFO]  TestAgent_loadTokens.server.serf.wan: serf: EventMemberUpdate: Node-f496c99d-1c05-65e6-b158-9f8fbc18e099.dc1
>     writer.go:29: 2020-02-23T02:46:51.197Z [INFO]  TestAgent_loadTokens.server: Handled event for server in area: event=member-update server=Node-f496c99d-1c05-65e6-b158-9f8fbc18e099.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.203Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.209Z [INFO]  TestAgent_loadTokens.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.209Z [INFO]  TestAgent_loadTokens.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.210Z [DEBUG] TestAgent_loadTokens.server: Skipping self join check for node since the cluster is too small: node=Node-f496c99d-1c05-65e6-b158-9f8fbc18e099
>     writer.go:29: 2020-02-23T02:46:51.210Z [INFO]  TestAgent_loadTokens.server: member joined, marking health alive: member=Node-f496c99d-1c05-65e6-b158-9f8fbc18e099
>     writer.go:29: 2020-02-23T02:46:51.212Z [DEBUG] TestAgent_loadTokens.server: Skipping self join check for node since the cluster is too small: node=Node-f496c99d-1c05-65e6-b158-9f8fbc18e099
>     --- PASS: TestAgent_loadTokens/original-configuration (0.00s)
>     --- PASS: TestAgent_loadTokens/updated-configuration (0.00s)
>     writer.go:29: 2020-02-23T02:46:51.346Z [WARN]  TestAgent_loadTokens: "default" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.346Z [WARN]  TestAgent_loadTokens: "agent" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.346Z [WARN]  TestAgent_loadTokens: "agent_master" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.346Z [WARN]  TestAgent_loadTokens: "replication" token present in both the configuration and persisted token store, using the persisted token
>     --- PASS: TestAgent_loadTokens/persisted-tokens (0.00s)
>     writer.go:29: 2020-02-23T02:46:51.349Z [WARN]  TestAgent_loadTokens: "default" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.349Z [WARN]  TestAgent_loadTokens: "agent" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.349Z [WARN]  TestAgent_loadTokens: "agent_master" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.349Z [WARN]  TestAgent_loadTokens: "replication" token present in both the configuration and persisted token store, using the persisted token
>     --- PASS: TestAgent_loadTokens/persisted-tokens-override (0.00s)
>     writer.go:29: 2020-02-23T02:46:51.357Z [WARN]  TestAgent_loadTokens: "agent" token present in both the configuration and persisted token store, using the persisted token
>     writer.go:29: 2020-02-23T02:46:51.357Z [WARN]  TestAgent_loadTokens: "agent_master" token present in both the configuration and persisted token store, using the persisted token
>     --- PASS: TestAgent_loadTokens/partial-persisted (0.01s)
>     writer.go:29: 2020-02-23T02:46:51.367Z [WARN]  TestAgent_loadTokens: unable to load persisted tokens: error="failed to decode tokens file "/tmp/TestAgent_loadTokens-agent297614237/acl-tokens.json": invalid character '\x01' looking for beginning of value"
>     --- PASS: TestAgent_loadTokens/persistence-error-not-json (0.01s)
>     writer.go:29: 2020-02-23T02:46:51.378Z [WARN]  TestAgent_loadTokens: unable to load persisted tokens: error="failed to decode tokens file "/tmp/TestAgent_loadTokens-agent297614237/acl-tokens.json": json: cannot unmarshal array into Go value of type agent.persistedTokens"
>     --- PASS: TestAgent_loadTokens/persistence-error-wrong-top-level (0.01s)
>     writer.go:29: 2020-02-23T02:46:51.378Z [INFO]  TestAgent_loadTokens: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.378Z [INFO]  TestAgent_loadTokens.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.378Z [DEBUG] TestAgent_loadTokens.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:51.378Z [DEBUG] TestAgent_loadTokens.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:51.378Z [DEBUG] TestAgent_loadTokens.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.378Z [WARN]  TestAgent_loadTokens.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.378Z [ERROR] TestAgent_loadTokens.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.378Z [DEBUG] TestAgent_loadTokens.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:51.378Z [DEBUG] TestAgent_loadTokens.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:51.378Z [DEBUG] TestAgent_loadTokens.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.397Z [WARN]  TestAgent_loadTokens.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: consul server down
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: Stopping server: protocol=DNS address=127.0.0.1:17135 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: Stopping server: protocol=DNS address=127.0.0.1:17135 network=udp
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: Stopping server: protocol=HTTP address=127.0.0.1:17136 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_loadTokens: Endpoints down
> === CONT  TestAgent_purgeCheckState
> --- PASS: TestAgent_reloadWatches (0.21s)
>     writer.go:29: 2020-02-23T02:46:51.221Z [WARN]  TestAgent_reloadWatches: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.221Z [DEBUG] TestAgent_reloadWatches.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.221Z [DEBUG] TestAgent_reloadWatches.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.231Z [INFO]  TestAgent_reloadWatches.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5b732a09-383e-96bc-8aa9-9c59f22153c8 Address:127.0.0.1:17134}]"
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches.server.serf.wan: serf: EventMemberJoin: Node-5b732a09-383e-96bc-8aa9-9c59f22153c8.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches.server.serf.lan: serf: EventMemberJoin: Node-5b732a09-383e-96bc-8aa9-9c59f22153c8 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches: Started DNS server: address=127.0.0.1:17129 network=udp
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches.server.raft: entering follower state: follower="Node at 127.0.0.1:17134 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches.server: Adding LAN server: server="Node-5b732a09-383e-96bc-8aa9-9c59f22153c8 (Addr: tcp/127.0.0.1:17134) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches.server: Handled event for server in area: event=member-join server=Node-5b732a09-383e-96bc-8aa9-9c59f22153c8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.232Z [INFO]  TestAgent_reloadWatches: Started DNS server: address=127.0.0.1:17129 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.233Z [INFO]  TestAgent_reloadWatches: Started HTTP server: address=127.0.0.1:17130 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.233Z [INFO]  TestAgent_reloadWatches: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.273Z [WARN]  TestAgent_reloadWatches.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.273Z [INFO]  TestAgent_reloadWatches.server.raft: entering candidate state: node="Node at 127.0.0.1:17134 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.345Z [DEBUG] TestAgent_reloadWatches.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.345Z [DEBUG] TestAgent_reloadWatches.server.raft: vote granted: from=5b732a09-383e-96bc-8aa9-9c59f22153c8 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.345Z [INFO]  TestAgent_reloadWatches.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.345Z [INFO]  TestAgent_reloadWatches.server.raft: entering leader state: leader="Node at 127.0.0.1:17134 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.345Z [INFO]  TestAgent_reloadWatches.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.345Z [INFO]  TestAgent_reloadWatches.server: New leader elected: payload=Node-5b732a09-383e-96bc-8aa9-9c59f22153c8
>     writer.go:29: 2020-02-23T02:46:51.373Z [INFO]  TestAgent_reloadWatches: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.373Z [INFO]  TestAgent_reloadWatches.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.373Z [WARN]  TestAgent_reloadWatches.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.373Z [ERROR] TestAgent_reloadWatches.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.397Z [WARN]  TestAgent_reloadWatches.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.420Z [INFO]  TestAgent_reloadWatches.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: consul server down
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: Stopping server: protocol=DNS address=127.0.0.1:17129 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: Stopping server: protocol=DNS address=127.0.0.1:17129 network=udp
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: Stopping server: protocol=HTTP address=127.0.0.1:17130 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.421Z [INFO]  TestAgent_reloadWatches: Endpoints down
> === CONT  TestAgent_loadCheckState
> 2020-02-23T02:46:51.426Z [INFO]  TestAgent_reloadWatchesHTTPS: Requesting shutdown
> 2020-02-23T02:46:51.426Z [INFO]  TestAgent_reloadWatchesHTTPS.server: shutting down server
> 2020-02-23T02:46:51.426Z [WARN]  TestAgent_reloadWatchesHTTPS.server.serf.lan: serf: Shutdown without a Leave
> 2020-02-23T02:46:51.427Z [ERROR] TestAgent_reloadWatchesHTTPS.anti_entropy: failed to sync remote state: error="No cluster leader"
> 2020-02-23T02:46:51.427Z [DEBUG] TestAgent_reloadWatchesHTTPS.tlsutil: IncomingHTTPSConfig: version=1
> 2020/02/23 02:46:51 http: TLS handshake error from 127.0.0.1:50110: tls: no certificates configured
> 2020-02-23T02:46:51.427Z [ERROR] watch.watch: Watch errored: type=key error="Get https://127.0.0.1:17143/v1/kv/asdf: remote error: tls: internal error" retry=5s
> 2020-02-23T02:46:51.431Z [WARN]  TestAgent_reloadWatchesHTTPS.server.serf.wan: serf: Shutdown without a Leave
> 2020-02-23T02:46:51.431Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
> 2020-02-23T02:46:51.432Z [INFO]  TestAgent_reloadWatchesHTTPS.server.router.manager: shutting down
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: consul server down
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: shutdown complete
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: Stopping server: protocol=DNS address=127.0.0.1:17141 network=tcp
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: Stopping server: protocol=DNS address=127.0.0.1:17141 network=udp
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: Stopping server: protocol=HTTPS address=127.0.0.1:17143 network=tcp
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: Waiting for endpoints to shut down
> 2020-02-23T02:46:51.435Z [INFO]  TestAgent_reloadWatchesHTTPS: Endpoints down
> --- PASS: TestAgent_reloadWatchesHTTPS (0.27s)
> === CONT  TestAgent_persistCheckState
> 2020-02-23T02:46:51.439Z [ERROR] TestAgent_reloadWatchesHTTPS.server: failed to establish leadership: error="error generating CA root certificate: error computing next serial number: leadership lost while committing log"
> --- PASS: TestAgent_loadCheckState (0.18s)
>     writer.go:29: 2020-02-23T02:46:51.434Z [WARN]  TestAgent_loadCheckState: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.434Z [DEBUG] TestAgent_loadCheckState.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.434Z [DEBUG] TestAgent_loadCheckState.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.449Z [INFO]  TestAgent_loadCheckState.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9132c26e-81aa-5092-2200-f8a07f227942 Address:127.0.0.1:16132}]"
>     writer.go:29: 2020-02-23T02:46:51.449Z [INFO]  TestAgent_loadCheckState.server.serf.wan: serf: EventMemberJoin: Node-9132c26e-81aa-5092-2200-f8a07f227942.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState.server.serf.lan: serf: EventMemberJoin: Node-9132c26e-81aa-5092-2200-f8a07f227942 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState: Started DNS server: address=127.0.0.1:16127 network=udp
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState.server.raft: entering follower state: follower="Node at 127.0.0.1:16132 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState.server: Adding LAN server: server="Node-9132c26e-81aa-5092-2200-f8a07f227942 (Addr: tcp/127.0.0.1:16132) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState.server: Handled event for server in area: event=member-join server=Node-9132c26e-81aa-5092-2200-f8a07f227942.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState: Started DNS server: address=127.0.0.1:16127 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState: Started HTTP server: address=127.0.0.1:16128 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.450Z [INFO]  TestAgent_loadCheckState: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.501Z [WARN]  TestAgent_loadCheckState.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.501Z [INFO]  TestAgent_loadCheckState.server.raft: entering candidate state: node="Node at 127.0.0.1:16132 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.505Z [DEBUG] TestAgent_loadCheckState.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.505Z [DEBUG] TestAgent_loadCheckState.server.raft: vote granted: from=9132c26e-81aa-5092-2200-f8a07f227942 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_loadCheckState.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_loadCheckState.server.raft: entering leader state: leader="Node at 127.0.0.1:16132 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_loadCheckState.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_loadCheckState.server: New leader elected: payload=Node-9132c26e-81aa-5092-2200-f8a07f227942
>     writer.go:29: 2020-02-23T02:46:51.513Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.522Z [INFO]  TestAgent_loadCheckState: Synced node info
>     writer.go:29: 2020-02-23T02:46:51.522Z [DEBUG] TestAgent_loadCheckState: Node info in sync
>     writer.go:29: 2020-02-23T02:46:51.524Z [INFO]  TestAgent_loadCheckState.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.524Z [INFO]  TestAgent_loadCheckState.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.524Z [DEBUG] TestAgent_loadCheckState.server: Skipping self join check for node since the cluster is too small: node=Node-9132c26e-81aa-5092-2200-f8a07f227942
>     writer.go:29: 2020-02-23T02:46:51.524Z [INFO]  TestAgent_loadCheckState.server: member joined, marking health alive: member=Node-9132c26e-81aa-5092-2200-f8a07f227942
>     writer.go:29: 2020-02-23T02:46:51.594Z [DEBUG] TestAgent_loadCheckState: check state expired, not restoring: check=check1
>     writer.go:29: 2020-02-23T02:46:51.594Z [INFO]  TestAgent_loadCheckState: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.594Z [INFO]  TestAgent_loadCheckState.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.594Z [DEBUG] TestAgent_loadCheckState.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.594Z [WARN]  TestAgent_loadCheckState.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.594Z [DEBUG] TestAgent_loadCheckState.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.596Z [WARN]  TestAgent_loadCheckState.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: consul server down
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: Stopping server: protocol=DNS address=127.0.0.1:16127 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: Stopping server: protocol=DNS address=127.0.0.1:16127 network=udp
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: Stopping server: protocol=HTTP address=127.0.0.1:16128 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.598Z [INFO]  TestAgent_loadCheckState: Endpoints down
> === CONT  TestAgent_loadChecks_checkFails
> --- PASS: TestAgent_purgeCheckState (0.23s)
>     writer.go:29: 2020-02-23T02:46:51.431Z [WARN]  TestAgent_purgeCheckState: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.431Z [DEBUG] TestAgent_purgeCheckState.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.432Z [DEBUG] TestAgent_purgeCheckState.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.453Z [INFO]  TestAgent_purgeCheckState.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:49b87c9f-53d0-e0ad-5d8a-3d75c233e666 Address:127.0.0.1:16138}]"
>     writer.go:29: 2020-02-23T02:46:51.454Z [INFO]  TestAgent_purgeCheckState.server.serf.wan: serf: EventMemberJoin: Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.454Z [INFO]  TestAgent_purgeCheckState.server.raft: entering follower state: follower="Node at 127.0.0.1:16138 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.454Z [INFO]  TestAgent_purgeCheckState.server.serf.lan: serf: EventMemberJoin: Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.454Z [INFO]  TestAgent_purgeCheckState.server: Handled event for server in area: event=member-join server=Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.455Z [INFO]  TestAgent_purgeCheckState.server: Adding LAN server: server="Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666 (Addr: tcp/127.0.0.1:16138) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.455Z [INFO]  TestAgent_purgeCheckState: Started DNS server: address=127.0.0.1:16133 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.455Z [INFO]  TestAgent_purgeCheckState: Started DNS server: address=127.0.0.1:16133 network=udp
>     writer.go:29: 2020-02-23T02:46:51.455Z [INFO]  TestAgent_purgeCheckState: Started HTTP server: address=127.0.0.1:16134 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.455Z [INFO]  TestAgent_purgeCheckState: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.499Z [WARN]  TestAgent_purgeCheckState.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.499Z [INFO]  TestAgent_purgeCheckState.server.raft: entering candidate state: node="Node at 127.0.0.1:16138 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.503Z [DEBUG] TestAgent_purgeCheckState.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.503Z [DEBUG] TestAgent_purgeCheckState.server.raft: vote granted: from=49b87c9f-53d0-e0ad-5d8a-3d75c233e666 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.503Z [INFO]  TestAgent_purgeCheckState.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.503Z [INFO]  TestAgent_purgeCheckState.server.raft: entering leader state: leader="Node at 127.0.0.1:16138 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.503Z [INFO]  TestAgent_purgeCheckState.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.503Z [INFO]  TestAgent_purgeCheckState.server: New leader elected: payload=Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666
>     writer.go:29: 2020-02-23T02:46:51.513Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.527Z [INFO]  TestAgent_purgeCheckState.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.527Z [INFO]  TestAgent_purgeCheckState.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.527Z [DEBUG] TestAgent_purgeCheckState.server: Skipping self join check for node since the cluster is too small: node=Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666
>     writer.go:29: 2020-02-23T02:46:51.527Z [INFO]  TestAgent_purgeCheckState.server: member joined, marking health alive: member=Node-49b87c9f-53d0-e0ad-5d8a-3d75c233e666
>     writer.go:29: 2020-02-23T02:46:51.642Z [INFO]  TestAgent_purgeCheckState: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.642Z [INFO]  TestAgent_purgeCheckState.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.642Z [DEBUG] TestAgent_purgeCheckState.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.642Z [WARN]  TestAgent_purgeCheckState.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.642Z [ERROR] TestAgent_purgeCheckState.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.642Z [DEBUG] TestAgent_purgeCheckState.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.644Z [WARN]  TestAgent_purgeCheckState.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.648Z [INFO]  TestAgent_purgeCheckState.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: consul server down
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: Stopping server: protocol=DNS address=127.0.0.1:16133 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: Stopping server: protocol=DNS address=127.0.0.1:16133 network=udp
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: Stopping server: protocol=HTTP address=127.0.0.1:16134 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.649Z [INFO]  TestAgent_purgeCheckState: Endpoints down
> === CONT  TestAgent_checkStateSnapshot
> --- PASS: TestAgent_persistCheckState (0.31s)
>     writer.go:29: 2020-02-23T02:46:51.442Z [WARN]  TestAgent_persistCheckState: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.442Z [DEBUG] TestAgent_persistCheckState.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.443Z [DEBUG] TestAgent_persistCheckState.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.458Z [INFO]  TestAgent_persistCheckState.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc8111bf-65d2-f1d4-98ef-eb3153091345 Address:127.0.0.1:16156}]"
>     writer.go:29: 2020-02-23T02:46:51.459Z [INFO]  TestAgent_persistCheckState.server.serf.wan: serf: EventMemberJoin: Node-bc8111bf-65d2-f1d4-98ef-eb3153091345.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.459Z [INFO]  TestAgent_persistCheckState.server.raft: entering follower state: follower="Node at 127.0.0.1:16156 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.459Z [INFO]  TestAgent_persistCheckState.server.serf.lan: serf: EventMemberJoin: Node-bc8111bf-65d2-f1d4-98ef-eb3153091345 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.459Z [INFO]  TestAgent_persistCheckState: Started DNS server: address=127.0.0.1:16151 network=udp
>     writer.go:29: 2020-02-23T02:46:51.459Z [INFO]  TestAgent_persistCheckState.server: Adding LAN server: server="Node-bc8111bf-65d2-f1d4-98ef-eb3153091345 (Addr: tcp/127.0.0.1:16156) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.459Z [INFO]  TestAgent_persistCheckState.server: Handled event for server in area: event=member-join server=Node-bc8111bf-65d2-f1d4-98ef-eb3153091345.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.460Z [INFO]  TestAgent_persistCheckState: Started DNS server: address=127.0.0.1:16151 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.460Z [INFO]  TestAgent_persistCheckState: Started HTTP server: address=127.0.0.1:16152 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.460Z [INFO]  TestAgent_persistCheckState: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.501Z [WARN]  TestAgent_persistCheckState.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.501Z [INFO]  TestAgent_persistCheckState.server.raft: entering candidate state: node="Node at 127.0.0.1:16156 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.504Z [DEBUG] TestAgent_persistCheckState.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.504Z [DEBUG] TestAgent_persistCheckState.server.raft: vote granted: from=bc8111bf-65d2-f1d4-98ef-eb3153091345 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.504Z [INFO]  TestAgent_persistCheckState.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.504Z [INFO]  TestAgent_persistCheckState.server.raft: entering leader state: leader="Node at 127.0.0.1:16156 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.504Z [INFO]  TestAgent_persistCheckState.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_persistCheckState.server: New leader elected: payload=Node-bc8111bf-65d2-f1d4-98ef-eb3153091345
>     writer.go:29: 2020-02-23T02:46:51.512Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.521Z [INFO]  TestAgent_persistCheckState.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.521Z [INFO]  TestAgent_persistCheckState.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.521Z [DEBUG] TestAgent_persistCheckState.server: Skipping self join check for node since the cluster is too small: node=Node-bc8111bf-65d2-f1d4-98ef-eb3153091345
>     writer.go:29: 2020-02-23T02:46:51.521Z [INFO]  TestAgent_persistCheckState.server: member joined, marking health alive: member=Node-bc8111bf-65d2-f1d4-98ef-eb3153091345
>     writer.go:29: 2020-02-23T02:46:51.743Z [INFO]  TestAgent_persistCheckState: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.743Z [INFO]  TestAgent_persistCheckState.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.743Z [DEBUG] TestAgent_persistCheckState.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.743Z [WARN]  TestAgent_persistCheckState.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.743Z [ERROR] TestAgent_persistCheckState.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.743Z [DEBUG] TestAgent_persistCheckState.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.745Z [WARN]  TestAgent_persistCheckState.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.747Z [INFO]  TestAgent_persistCheckState.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.747Z [INFO]  TestAgent_persistCheckState: consul server down
>     writer.go:29: 2020-02-23T02:46:51.747Z [INFO]  TestAgent_persistCheckState: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.747Z [INFO]  TestAgent_persistCheckState: Stopping server: protocol=DNS address=127.0.0.1:16151 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.747Z [INFO]  TestAgent_persistCheckState: Stopping server: protocol=DNS address=127.0.0.1:16151 network=udp
>     writer.go:29: 2020-02-23T02:46:51.748Z [INFO]  TestAgent_persistCheckState: Stopping server: protocol=HTTP address=127.0.0.1:16152 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.748Z [INFO]  TestAgent_persistCheckState: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.748Z [INFO]  TestAgent_persistCheckState: Endpoints down
> === CONT  TestAgent_NodeMaintenanceMode
> --- PASS: TestAgent_loadChecks_checkFails (0.30s)
>     writer.go:29: 2020-02-23T02:46:51.613Z [WARN]  TestAgent_loadChecks_checkFails: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.613Z [DEBUG] TestAgent_loadChecks_checkFails.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.627Z [DEBUG] TestAgent_loadChecks_checkFails.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.675Z [INFO]  TestAgent_loadChecks_checkFails.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:11fbcd66-f0cf-5e12-ee2d-96320e68fbdb Address:127.0.0.1:17128}]"
>     writer.go:29: 2020-02-23T02:46:51.675Z [INFO]  TestAgent_loadChecks_checkFails.server.serf.wan: serf: EventMemberJoin: Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.676Z [INFO]  TestAgent_loadChecks_checkFails.server.serf.lan: serf: EventMemberJoin: Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.676Z [INFO]  TestAgent_loadChecks_checkFails: Started DNS server: address=127.0.0.1:17123 network=udp
>     writer.go:29: 2020-02-23T02:46:51.676Z [INFO]  TestAgent_loadChecks_checkFails.server.raft: entering follower state: follower="Node at 127.0.0.1:17128 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.677Z [INFO]  TestAgent_loadChecks_checkFails.server: Adding LAN server: server="Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb (Addr: tcp/127.0.0.1:17128) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.677Z [INFO]  TestAgent_loadChecks_checkFails.server: Handled event for server in area: event=member-join server=Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.677Z [INFO]  TestAgent_loadChecks_checkFails: Started DNS server: address=127.0.0.1:17123 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.677Z [INFO]  TestAgent_loadChecks_checkFails: Started HTTP server: address=127.0.0.1:17124 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.677Z [INFO]  TestAgent_loadChecks_checkFails: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.719Z [WARN]  TestAgent_loadChecks_checkFails.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.720Z [INFO]  TestAgent_loadChecks_checkFails.server.raft: entering candidate state: node="Node at 127.0.0.1:17128 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.723Z [DEBUG] TestAgent_loadChecks_checkFails.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.723Z [DEBUG] TestAgent_loadChecks_checkFails.server.raft: vote granted: from=11fbcd66-f0cf-5e12-ee2d-96320e68fbdb term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.723Z [INFO]  TestAgent_loadChecks_checkFails.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.723Z [INFO]  TestAgent_loadChecks_checkFails.server.raft: entering leader state: leader="Node at 127.0.0.1:17128 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.723Z [INFO]  TestAgent_loadChecks_checkFails.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.723Z [INFO]  TestAgent_loadChecks_checkFails.server: New leader elected: payload=Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb
>     writer.go:29: 2020-02-23T02:46:51.730Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.738Z [INFO]  TestAgent_loadChecks_checkFails.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.738Z [INFO]  TestAgent_loadChecks_checkFails.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.739Z [DEBUG] TestAgent_loadChecks_checkFails.server: Skipping self join check for node since the cluster is too small: node=Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb
>     writer.go:29: 2020-02-23T02:46:51.739Z [INFO]  TestAgent_loadChecks_checkFails.server: member joined, marking health alive: member=Node-11fbcd66-f0cf-5e12-ee2d-96320e68fbdb
>     writer.go:29: 2020-02-23T02:46:51.897Z [WARN]  TestAgent_loadChecks_checkFails: Failed to restore check: check=service:redis error="ServiceID "nope" does not exist"
>     writer.go:29: 2020-02-23T02:46:51.897Z [DEBUG] TestAgent_loadChecks_checkFails: restored health check from file: check=service:redis file=/tmp/TestAgent_loadChecks_checkFails-agent829210654/checks/60a2ef12de014a05ecdc850d9aab46da
>     writer.go:29: 2020-02-23T02:46:51.897Z [INFO]  TestAgent_loadChecks_checkFails: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.897Z [INFO]  TestAgent_loadChecks_checkFails.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.897Z [DEBUG] TestAgent_loadChecks_checkFails.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.897Z [WARN]  TestAgent_loadChecks_checkFails.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.897Z [ERROR] TestAgent_loadChecks_checkFails.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.897Z [DEBUG] TestAgent_loadChecks_checkFails.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.899Z [WARN]  TestAgent_loadChecks_checkFails.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: consul server down
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: Stopping server: protocol=DNS address=127.0.0.1:17123 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: Stopping server: protocol=DNS address=127.0.0.1:17123 network=udp
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: Stopping server: protocol=HTTP address=127.0.0.1:17124 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.900Z [INFO]  TestAgent_loadChecks_checkFails: Endpoints down
> === CONT  TestAgent_AddCheck_restoresSnapshot
> --- PASS: TestAgent_NodeMaintenanceMode (0.19s)
>     writer.go:29: 2020-02-23T02:46:51.761Z [WARN]  TestAgent_NodeMaintenanceMode: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.761Z [DEBUG] TestAgent_NodeMaintenanceMode.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.762Z [DEBUG] TestAgent_NodeMaintenanceMode.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.773Z [INFO]  TestAgent_NodeMaintenanceMode.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e7ea5525-87c3-9dcf-8199-7896be577d79 Address:127.0.0.1:16168}]"
>     writer.go:29: 2020-02-23T02:46:51.774Z [INFO]  TestAgent_NodeMaintenanceMode.server.raft: entering follower state: follower="Node at 127.0.0.1:16168 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.774Z [INFO]  TestAgent_NodeMaintenanceMode.server.serf.wan: serf: EventMemberJoin: Node-e7ea5525-87c3-9dcf-8199-7896be577d79.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.779Z [INFO]  TestAgent_NodeMaintenanceMode.server.serf.lan: serf: EventMemberJoin: Node-e7ea5525-87c3-9dcf-8199-7896be577d79 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.779Z [INFO]  TestAgent_NodeMaintenanceMode.server: Adding LAN server: server="Node-e7ea5525-87c3-9dcf-8199-7896be577d79 (Addr: tcp/127.0.0.1:16168) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.779Z [INFO]  TestAgent_NodeMaintenanceMode.server: Handled event for server in area: event=member-join server=Node-e7ea5525-87c3-9dcf-8199-7896be577d79.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.780Z [INFO]  TestAgent_NodeMaintenanceMode: Started DNS server: address=127.0.0.1:16163 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.780Z [INFO]  TestAgent_NodeMaintenanceMode: Started DNS server: address=127.0.0.1:16163 network=udp
>     writer.go:29: 2020-02-23T02:46:51.780Z [INFO]  TestAgent_NodeMaintenanceMode: Started HTTP server: address=127.0.0.1:16164 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.780Z [INFO]  TestAgent_NodeMaintenanceMode: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.809Z [WARN]  TestAgent_NodeMaintenanceMode.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.809Z [INFO]  TestAgent_NodeMaintenanceMode.server.raft: entering candidate state: node="Node at 127.0.0.1:16168 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.812Z [DEBUG] TestAgent_NodeMaintenanceMode.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.813Z [DEBUG] TestAgent_NodeMaintenanceMode.server.raft: vote granted: from=e7ea5525-87c3-9dcf-8199-7896be577d79 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.813Z [INFO]  TestAgent_NodeMaintenanceMode.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.813Z [INFO]  TestAgent_NodeMaintenanceMode.server.raft: entering leader state: leader="Node at 127.0.0.1:16168 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.813Z [INFO]  TestAgent_NodeMaintenanceMode.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.813Z [INFO]  TestAgent_NodeMaintenanceMode.server: New leader elected: payload=Node-e7ea5525-87c3-9dcf-8199-7896be577d79
>     writer.go:29: 2020-02-23T02:46:51.820Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.828Z [INFO]  TestAgent_NodeMaintenanceMode.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.828Z [INFO]  TestAgent_NodeMaintenanceMode.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.828Z [DEBUG] TestAgent_NodeMaintenanceMode.server: Skipping self join check for node since the cluster is too small: node=Node-e7ea5525-87c3-9dcf-8199-7896be577d79
>     writer.go:29: 2020-02-23T02:46:51.828Z [INFO]  TestAgent_NodeMaintenanceMode.server: member joined, marking health alive: member=Node-e7ea5525-87c3-9dcf-8199-7896be577d79
>     writer.go:29: 2020-02-23T02:46:51.927Z [INFO]  TestAgent_NodeMaintenanceMode: Node entered maintenance mode
>     writer.go:29: 2020-02-23T02:46:51.927Z [DEBUG] TestAgent_NodeMaintenanceMode: removed check: check=_node_maintenance
>     writer.go:29: 2020-02-23T02:46:51.927Z [INFO]  TestAgent_NodeMaintenanceMode: Node left maintenance mode
>     writer.go:29: 2020-02-23T02:46:51.929Z [INFO]  TestAgent_NodeMaintenanceMode: Node entered maintenance mode
>     writer.go:29: 2020-02-23T02:46:51.929Z [INFO]  TestAgent_NodeMaintenanceMode: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.929Z [INFO]  TestAgent_NodeMaintenanceMode.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.929Z [DEBUG] TestAgent_NodeMaintenanceMode.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.929Z [WARN]  TestAgent_NodeMaintenanceMode.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.929Z [ERROR] TestAgent_NodeMaintenanceMode.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.929Z [DEBUG] TestAgent_NodeMaintenanceMode.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.931Z [WARN]  TestAgent_NodeMaintenanceMode.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: consul server down
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: Stopping server: protocol=DNS address=127.0.0.1:16163 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: Stopping server: protocol=DNS address=127.0.0.1:16163 network=udp
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: Stopping server: protocol=HTTP address=127.0.0.1:16164 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.933Z [INFO]  TestAgent_NodeMaintenanceMode: Endpoints down
> === CONT  TestAgent_Service_MaintenanceMode
> --- PASS: TestAgent_GetCoordinate (0.62s)
>     writer.go:29: 2020-02-23T02:46:51.357Z [WARN]  TestAgent_GetCoordinate: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.357Z [DEBUG] TestAgent_GetCoordinate.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.358Z [DEBUG] TestAgent_GetCoordinate.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.444Z [INFO]  TestAgent_GetCoordinate.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4465272a-85e6-5580-c903-d75e580e6fbc Address:127.0.0.1:16144}]"
>     writer.go:29: 2020-02-23T02:46:51.445Z [INFO]  TestAgent_GetCoordinate.server.serf.wan: serf: EventMemberJoin: Node-4465272a-85e6-5580-c903-d75e580e6fbc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.446Z [INFO]  TestAgent_GetCoordinate.server.raft: entering follower state: follower="Node at 127.0.0.1:16144 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.446Z [INFO]  TestAgent_GetCoordinate.server.serf.lan: serf: EventMemberJoin: Node-4465272a-85e6-5580-c903-d75e580e6fbc 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.446Z [INFO]  TestAgent_GetCoordinate.server: Adding LAN server: server="Node-4465272a-85e6-5580-c903-d75e580e6fbc (Addr: tcp/127.0.0.1:16144) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.447Z [INFO]  TestAgent_GetCoordinate.server: Handled event for server in area: event=member-join server=Node-4465272a-85e6-5580-c903-d75e580e6fbc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.448Z [INFO]  TestAgent_GetCoordinate: Started DNS server: address=127.0.0.1:16139 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.448Z [INFO]  TestAgent_GetCoordinate: Started DNS server: address=127.0.0.1:16139 network=udp
>     writer.go:29: 2020-02-23T02:46:51.448Z [INFO]  TestAgent_GetCoordinate: Started HTTP server: address=127.0.0.1:16140 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.448Z [INFO]  TestAgent_GetCoordinate: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.501Z [WARN]  TestAgent_GetCoordinate.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.501Z [INFO]  TestAgent_GetCoordinate.server.raft: entering candidate state: node="Node at 127.0.0.1:16144 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.504Z [DEBUG] TestAgent_GetCoordinate.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.504Z [DEBUG] TestAgent_GetCoordinate.server.raft: vote granted: from=4465272a-85e6-5580-c903-d75e580e6fbc term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.504Z [INFO]  TestAgent_GetCoordinate.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.504Z [INFO]  TestAgent_GetCoordinate.server.raft: entering leader state: leader="Node at 127.0.0.1:16144 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_GetCoordinate.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.505Z [INFO]  TestAgent_GetCoordinate.server: New leader elected: payload=Node-4465272a-85e6-5580-c903-d75e580e6fbc
>     writer.go:29: 2020-02-23T02:46:51.514Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.527Z [INFO]  TestAgent_GetCoordinate.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.527Z [INFO]  TestAgent_GetCoordinate.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.527Z [DEBUG] TestAgent_GetCoordinate.server: Skipping self join check for node since the cluster is too small: node=Node-4465272a-85e6-5580-c903-d75e580e6fbc
>     writer.go:29: 2020-02-23T02:46:51.527Z [INFO]  TestAgent_GetCoordinate.server: member joined, marking health alive: member=Node-4465272a-85e6-5580-c903-d75e580e6fbc
>     writer.go:29: 2020-02-23T02:46:51.594Z [INFO]  TestAgent_GetCoordinate: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.595Z [INFO]  TestAgent_GetCoordinate.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.595Z [DEBUG] TestAgent_GetCoordinate.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.595Z [WARN]  TestAgent_GetCoordinate.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.595Z [ERROR] TestAgent_GetCoordinate.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.595Z [DEBUG] TestAgent_GetCoordinate.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.596Z [WARN]  TestAgent_GetCoordinate.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: consul server down
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: Stopping server: protocol=DNS address=127.0.0.1:16139 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: Stopping server: protocol=DNS address=127.0.0.1:16139 network=udp
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: Stopping server: protocol=HTTP address=127.0.0.1:16140 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.599Z [INFO]  TestAgent_GetCoordinate: Endpoints down
>     writer.go:29: 2020-02-23T02:46:51.656Z [WARN]  TestAgent_GetCoordinate: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.656Z [DEBUG] TestAgent_GetCoordinate.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.658Z [DEBUG] TestAgent_GetCoordinate.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.683Z [INFO]  TestAgent_GetCoordinate.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bf95b627-8bb7-1e34-3ac4-a3945501fad7 Address:127.0.0.1:16162}]"
>     writer.go:29: 2020-02-23T02:46:51.683Z [INFO]  TestAgent_GetCoordinate.server.raft: entering follower state: follower="Node at 127.0.0.1:16162 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.683Z [INFO]  TestAgent_GetCoordinate.server.serf.wan: serf: EventMemberJoin: Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.684Z [INFO]  TestAgent_GetCoordinate.server.serf.lan: serf: EventMemberJoin: Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.684Z [INFO]  TestAgent_GetCoordinate.server: Adding LAN server: server="Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7 (Addr: tcp/127.0.0.1:16162) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.684Z [INFO]  TestAgent_GetCoordinate.server: Handled event for server in area: event=member-join server=Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.684Z [INFO]  TestAgent_GetCoordinate: Started DNS server: address=127.0.0.1:16157 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.684Z [INFO]  TestAgent_GetCoordinate: Started DNS server: address=127.0.0.1:16157 network=udp
>     writer.go:29: 2020-02-23T02:46:51.685Z [INFO]  TestAgent_GetCoordinate: Started HTTP server: address=127.0.0.1:16158 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.685Z [INFO]  TestAgent_GetCoordinate: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.731Z [WARN]  TestAgent_GetCoordinate.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.731Z [INFO]  TestAgent_GetCoordinate.server.raft: entering candidate state: node="Node at 127.0.0.1:16162 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.735Z [DEBUG] TestAgent_GetCoordinate.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.735Z [DEBUG] TestAgent_GetCoordinate.server.raft: vote granted: from=bf95b627-8bb7-1e34-3ac4-a3945501fad7 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.735Z [INFO]  TestAgent_GetCoordinate.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.735Z [INFO]  TestAgent_GetCoordinate.server.raft: entering leader state: leader="Node at 127.0.0.1:16162 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.735Z [INFO]  TestAgent_GetCoordinate.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.735Z [INFO]  TestAgent_GetCoordinate.server: New leader elected: payload=Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7
>     writer.go:29: 2020-02-23T02:46:51.745Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.755Z [INFO]  TestAgent_GetCoordinate.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.755Z [INFO]  TestAgent_GetCoordinate.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.755Z [DEBUG] TestAgent_GetCoordinate.server: Skipping self join check for node since the cluster is too small: node=Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7
>     writer.go:29: 2020-02-23T02:46:51.755Z [INFO]  TestAgent_GetCoordinate.server: member joined, marking health alive: member=Node-bf95b627-8bb7-1e34-3ac4-a3945501fad7
>     writer.go:29: 2020-02-23T02:46:51.965Z [INFO]  TestAgent_GetCoordinate: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.965Z [INFO]  TestAgent_GetCoordinate.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.965Z [DEBUG] TestAgent_GetCoordinate.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.965Z [WARN]  TestAgent_GetCoordinate.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.965Z [ERROR] TestAgent_GetCoordinate.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:51.965Z [DEBUG] TestAgent_GetCoordinate.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.966Z [WARN]  TestAgent_GetCoordinate.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: consul server down
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: Stopping server: protocol=DNS address=127.0.0.1:16157 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: Stopping server: protocol=DNS address=127.0.0.1:16157 network=udp
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: Stopping server: protocol=HTTP address=127.0.0.1:16158 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.968Z [INFO]  TestAgent_GetCoordinate: Endpoints down
> === CONT  TestAgent_unloadChecks
> --- PASS: TestAgent_checkStateSnapshot (0.33s)
>     writer.go:29: 2020-02-23T02:46:51.671Z [WARN]  TestAgent_checkStateSnapshot: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.671Z [DEBUG] TestAgent_checkStateSnapshot.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.671Z [DEBUG] TestAgent_checkStateSnapshot.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.681Z [INFO]  TestAgent_checkStateSnapshot.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:57717868-5d37-006f-0c1b-7d17755affd7 Address:127.0.0.1:16150}]"
>     writer.go:29: 2020-02-23T02:46:51.681Z [INFO]  TestAgent_checkStateSnapshot.server.serf.wan: serf: EventMemberJoin: Node-57717868-5d37-006f-0c1b-7d17755affd7.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.682Z [INFO]  TestAgent_checkStateSnapshot.server.serf.lan: serf: EventMemberJoin: Node-57717868-5d37-006f-0c1b-7d17755affd7 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.682Z [INFO]  TestAgent_checkStateSnapshot: Started DNS server: address=127.0.0.1:16145 network=udp
>     writer.go:29: 2020-02-23T02:46:51.682Z [INFO]  TestAgent_checkStateSnapshot.server.raft: entering follower state: follower="Node at 127.0.0.1:16150 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.682Z [INFO]  TestAgent_checkStateSnapshot.server: Adding LAN server: server="Node-57717868-5d37-006f-0c1b-7d17755affd7 (Addr: tcp/127.0.0.1:16150) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.682Z [INFO]  TestAgent_checkStateSnapshot.server: Handled event for server in area: event=member-join server=Node-57717868-5d37-006f-0c1b-7d17755affd7.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.682Z [INFO]  TestAgent_checkStateSnapshot: Started DNS server: address=127.0.0.1:16145 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.683Z [INFO]  TestAgent_checkStateSnapshot: Started HTTP server: address=127.0.0.1:16146 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.683Z [INFO]  TestAgent_checkStateSnapshot: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.737Z [WARN]  TestAgent_checkStateSnapshot.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.737Z [INFO]  TestAgent_checkStateSnapshot.server.raft: entering candidate state: node="Node at 127.0.0.1:16150 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.744Z [DEBUG] TestAgent_checkStateSnapshot.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.744Z [DEBUG] TestAgent_checkStateSnapshot.server.raft: vote granted: from=57717868-5d37-006f-0c1b-7d17755affd7 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.744Z [INFO]  TestAgent_checkStateSnapshot.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.744Z [INFO]  TestAgent_checkStateSnapshot.server.raft: entering leader state: leader="Node at 127.0.0.1:16150 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.744Z [INFO]  TestAgent_checkStateSnapshot.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.744Z [INFO]  TestAgent_checkStateSnapshot.server: New leader elected: payload=Node-57717868-5d37-006f-0c1b-7d17755affd7
>     writer.go:29: 2020-02-23T02:46:51.752Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:51.761Z [INFO]  TestAgent_checkStateSnapshot.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:51.761Z [INFO]  TestAgent_checkStateSnapshot.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.761Z [DEBUG] TestAgent_checkStateSnapshot.server: Skipping self join check for node since the cluster is too small: node=Node-57717868-5d37-006f-0c1b-7d17755affd7
>     writer.go:29: 2020-02-23T02:46:51.761Z [INFO]  TestAgent_checkStateSnapshot.server: member joined, marking health alive: member=Node-57717868-5d37-006f-0c1b-7d17755affd7
>     writer.go:29: 2020-02-23T02:46:51.961Z [DEBUG] TestAgent_checkStateSnapshot: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:51.965Z [INFO]  TestAgent_checkStateSnapshot: Synced node info
>     writer.go:29: 2020-02-23T02:46:51.978Z [DEBUG] TestAgent_checkStateSnapshot: removed check: check=service:redis
>     writer.go:29: 2020-02-23T02:46:51.978Z [DEBUG] TestAgent_checkStateSnapshot: restored health check from file: check=service:redis file=/tmp/TestAgent_checkStateSnapshot-agent155482368/checks/60a2ef12de014a05ecdc850d9aab46da
>     writer.go:29: 2020-02-23T02:46:51.978Z [INFO]  TestAgent_checkStateSnapshot: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:51.978Z [INFO]  TestAgent_checkStateSnapshot.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:51.978Z [DEBUG] TestAgent_checkStateSnapshot.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.978Z [WARN]  TestAgent_checkStateSnapshot.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.978Z [DEBUG] TestAgent_checkStateSnapshot.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:51.980Z [WARN]  TestAgent_checkStateSnapshot.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: consul server down
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: shutdown complete
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: Stopping server: protocol=DNS address=127.0.0.1:16145 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: Stopping server: protocol=DNS address=127.0.0.1:16145 network=udp
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: Stopping server: protocol=HTTP address=127.0.0.1:16146 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:51.982Z [INFO]  TestAgent_checkStateSnapshot: Endpoints down
> === CONT  TestAgent_loadChecks_token
> --- PASS: TestAgent_Service_MaintenanceMode (0.11s)
>     writer.go:29: 2020-02-23T02:46:51.943Z [WARN]  TestAgent_Service_MaintenanceMode: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.943Z [DEBUG] TestAgent_Service_MaintenanceMode.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.943Z [DEBUG] TestAgent_Service_MaintenanceMode.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.957Z [INFO]  TestAgent_Service_MaintenanceMode.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:305b3b85-70d5-51bb-c824-2c9cd83e9a20 Address:127.0.0.1:16210}]"
>     writer.go:29: 2020-02-23T02:46:51.957Z [INFO]  TestAgent_Service_MaintenanceMode.server.serf.wan: serf: EventMemberJoin: Node-305b3b85-70d5-51bb-c824-2c9cd83e9a20.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.957Z [INFO]  TestAgent_Service_MaintenanceMode.server.serf.lan: serf: EventMemberJoin: Node-305b3b85-70d5-51bb-c824-2c9cd83e9a20 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.957Z [INFO]  TestAgent_Service_MaintenanceMode: Started DNS server: address=127.0.0.1:16205 network=udp
>     writer.go:29: 2020-02-23T02:46:51.958Z [INFO]  TestAgent_Service_MaintenanceMode.server.raft: entering follower state: follower="Node at 127.0.0.1:16210 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.958Z [INFO]  TestAgent_Service_MaintenanceMode.server: Adding LAN server: server="Node-305b3b85-70d5-51bb-c824-2c9cd83e9a20 (Addr: tcp/127.0.0.1:16210) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.958Z [INFO]  TestAgent_Service_MaintenanceMode.server: Handled event for server in area: event=member-join server=Node-305b3b85-70d5-51bb-c824-2c9cd83e9a20.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.958Z [INFO]  TestAgent_Service_MaintenanceMode: Started DNS server: address=127.0.0.1:16205 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.958Z [INFO]  TestAgent_Service_MaintenanceMode: Started HTTP server: address=127.0.0.1:16206 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.958Z [INFO]  TestAgent_Service_MaintenanceMode: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.026Z [WARN]  TestAgent_Service_MaintenanceMode.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.026Z [INFO]  TestAgent_Service_MaintenanceMode.server.raft: entering candidate state: node="Node at 127.0.0.1:16210 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.029Z [DEBUG] TestAgent_Service_MaintenanceMode.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.029Z [DEBUG] TestAgent_Service_MaintenanceMode.server.raft: vote granted: from=305b3b85-70d5-51bb-c824-2c9cd83e9a20 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.029Z [INFO]  TestAgent_Service_MaintenanceMode.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.029Z [INFO]  TestAgent_Service_MaintenanceMode.server.raft: entering leader state: leader="Node at 127.0.0.1:16210 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.029Z [INFO]  TestAgent_Service_MaintenanceMode.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.029Z [INFO]  TestAgent_Service_MaintenanceMode.server: New leader elected: payload=Node-305b3b85-70d5-51bb-c824-2c9cd83e9a20
>     writer.go:29: 2020-02-23T02:46:52.033Z [INFO]  TestAgent_Service_MaintenanceMode: Service entered maintenance mode: service=redis
>     writer.go:29: 2020-02-23T02:46:52.033Z [DEBUG] TestAgent_Service_MaintenanceMode: removed check: check=_service_maintenance:redis
>     writer.go:29: 2020-02-23T02:46:52.033Z [INFO]  TestAgent_Service_MaintenanceMode: Service left maintenance mode: service=redis
>     writer.go:29: 2020-02-23T02:46:52.035Z [INFO]  TestAgent_Service_MaintenanceMode: Service entered maintenance mode: service=redis
>     writer.go:29: 2020-02-23T02:46:52.035Z [INFO]  TestAgent_Service_MaintenanceMode: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.035Z [INFO]  TestAgent_Service_MaintenanceMode.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.035Z [WARN]  TestAgent_Service_MaintenanceMode.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.035Z [ERROR] TestAgent_Service_MaintenanceMode.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.037Z [WARN]  TestAgent_Service_MaintenanceMode.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.037Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.038Z [INFO]  TestAgent_Service_MaintenanceMode.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.041Z [ERROR] TestAgent_Service_MaintenanceMode.server: failed to establish leadership: error="error generating CA root certificate: error computing next serial number: leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: consul server down
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.041Z [ERROR] TestAgent_Service_MaintenanceMode.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="raft is already shutdown"
>     writer.go:29: 2020-02-23T02:46:52.041Z [ERROR] TestAgent_Service_MaintenanceMode.server: failed to transfer leadership attempt, will retry: attempt=1 retry_limit=3 error="raft is already shutdown"
>     writer.go:29: 2020-02-23T02:46:52.041Z [ERROR] TestAgent_Service_MaintenanceMode.server: failed to transfer leadership attempt, will retry: attempt=2 retry_limit=3 error="raft is already shutdown"
>     writer.go:29: 2020-02-23T02:46:52.041Z [ERROR] TestAgent_Service_MaintenanceMode.server: failed to transfer leadership: error="failed to transfer leadership in 3 attempts"
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: Stopping server: protocol=DNS address=127.0.0.1:16205 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: Stopping server: protocol=DNS address=127.0.0.1:16205 network=udp
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: Stopping server: protocol=HTTP address=127.0.0.1:16206 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.041Z [INFO]  TestAgent_Service_MaintenanceMode: Endpoints down
> === CONT  TestAgent_PurgeCheckOnDuplicate
> --- PASS: TestAgent_PurgeCheckOnDuplicate (0.02s)
>     writer.go:29: 2020-02-23T02:46:52.048Z [WARN]  TestAgent_PurgeCheckOnDuplicate: Node name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: node_name="Node 139dec21-1cf6-bf31-8b6f-61bc2892e8f5"
>     writer.go:29: 2020-02-23T02:46:52.048Z [DEBUG] TestAgent_PurgeCheckOnDuplicate.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.049Z [INFO]  TestAgent_PurgeCheckOnDuplicate.client.serf.lan: serf: EventMemberJoin: Node 139dec21-1cf6-bf31-8b6f-61bc2892e8f5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.049Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Started DNS server: address=127.0.0.1:16187 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.049Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Started DNS server: address=127.0.0.1:16187 network=udp
>     writer.go:29: 2020-02-23T02:46:52.050Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Started HTTP server: address=127.0.0.1:16188 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.050Z [INFO]  TestAgent_PurgeCheckOnDuplicate: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.050Z [WARN]  TestAgent_PurgeCheckOnDuplicate.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:52.050Z [ERROR] TestAgent_PurgeCheckOnDuplicate.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:52.052Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.052Z [INFO]  TestAgent_PurgeCheckOnDuplicate.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:52.052Z [WARN]  TestAgent_PurgeCheckOnDuplicate.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.052Z [INFO]  TestAgent_PurgeCheckOnDuplicate.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: consul client down
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Stopping server: protocol=DNS address=127.0.0.1:16187 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Stopping server: protocol=DNS address=127.0.0.1:16187 network=udp
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Stopping server: protocol=HTTP address=127.0.0.1:16188 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.053Z [INFO]  TestAgent_PurgeCheckOnDuplicate: Endpoints down
>     writer.go:29: 2020-02-23T02:46:52.059Z [WARN]  TestAgent_PurgeCheckOnDuplicate-a2: Node name will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.: node_name="Node 139dec21-1cf6-bf31-8b6f-61bc2892e8f5"
>     writer.go:29: 2020-02-23T02:46:52.060Z [DEBUG] TestAgent_PurgeCheckOnDuplicate-a2.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2.client.serf.lan: serf: EventMemberJoin: Node 139dec21-1cf6-bf31-8b6f-61bc2892e8f5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.060Z [WARN]  TestAgent_PurgeCheckOnDuplicate-a2.client.serf.lan: serf: Failed to re-join any previously known node
>     writer.go:29: 2020-02-23T02:46:52.060Z [DEBUG] TestAgent_PurgeCheckOnDuplicate-a2: check exists, not restoring from file: check=mem file=/tmp/consul-test/TestAgent_PurgeCheckOnDuplicate-agent503568774/checks/afc4fc7e48a0710a1dc94ef3e8bc5764
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Started DNS server: address=127.0.0.1:16193 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Started DNS server: address=127.0.0.1:16193 network=udp
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Started HTTP server: address=127.0.0.1:16194 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.061Z [WARN]  TestAgent_PurgeCheckOnDuplicate-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:52.061Z [ERROR] TestAgent_PurgeCheckOnDuplicate-a2.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:52.061Z [WARN]  TestAgent_PurgeCheckOnDuplicate-a2.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: consul client down
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Stopping server: protocol=DNS address=127.0.0.1:16193 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Stopping server: protocol=DNS address=127.0.0.1:16193 network=udp
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Stopping server: protocol=HTTP address=127.0.0.1:16194 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.063Z [INFO]  TestAgent_PurgeCheckOnDuplicate-a2: Endpoints down
> === CONT  TestAgent_PersistCheck
> --- PASS: TestAgent_PersistCheck (0.03s)
>     writer.go:29: 2020-02-23T02:46:52.069Z [DEBUG] TestAgent_PersistCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.070Z [INFO]  TestAgent_PersistCheck.client.serf.lan: serf: EventMemberJoin: Node-333b1b39-5dde-00f2-e794-05841c8e1dd5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.070Z [INFO]  TestAgent_PersistCheck: Started DNS server: address=127.0.0.1:16217 network=udp
>     writer.go:29: 2020-02-23T02:46:52.070Z [INFO]  TestAgent_PersistCheck: Started DNS server: address=127.0.0.1:16217 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.071Z [INFO]  TestAgent_PersistCheck: Started HTTP server: address=127.0.0.1:16218 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.071Z [INFO]  TestAgent_PersistCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.071Z [WARN]  TestAgent_PersistCheck.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:52.071Z [ERROR] TestAgent_PersistCheck.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:52.075Z [INFO]  TestAgent_PersistCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.075Z [INFO]  TestAgent_PersistCheck.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:52.075Z [WARN]  TestAgent_PersistCheck.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.075Z [INFO]  TestAgent_PersistCheck.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: consul client down
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: Stopping server: protocol=DNS address=127.0.0.1:16217 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: Stopping server: protocol=DNS address=127.0.0.1:16217 network=udp
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: Stopping server: protocol=HTTP address=127.0.0.1:16218 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_PersistCheck: Endpoints down
>     writer.go:29: 2020-02-23T02:46:52.085Z [DEBUG] TestAgent_PersistCheck-a2.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.085Z [INFO]  TestAgent_PersistCheck-a2.client.serf.lan: serf: EventMemberJoin: Node-5731e040-9b4f-2763-9f76-14be3e295877 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.085Z [INFO]  TestAgent_PersistCheck-a2.client.serf.lan: serf: Attempting re-join to previously known node: Node-333b1b39-5dde-00f2-e794-05841c8e1dd5: 127.0.0.1:16220
>     writer.go:29: 2020-02-23T02:46:52.086Z [DEBUG] TestAgent_PersistCheck-a2: restored health check from file: check=mem file=/tmp/consul-test/TestAgent_PersistCheck-agent513461037/checks/afc4fc7e48a0710a1dc94ef3e8bc5764
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2: Started DNS server: address=127.0.0.1:16235 network=udp
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2: Started DNS server: address=127.0.0.1:16235 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.086Z [DEBUG] TestAgent_PersistCheck-a2.client.memberlist.lan: memberlist: Failed to join 127.0.0.1: dial tcp 127.0.0.1:16220: connect: connection refused
>     writer.go:29: 2020-02-23T02:46:52.086Z [WARN]  TestAgent_PersistCheck-a2.client.serf.lan: serf: Failed to re-join any previously known node
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2: Started HTTP server: address=127.0.0.1:16236 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.086Z [WARN]  TestAgent_PersistCheck-a2.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:46:52.086Z [ERROR] TestAgent_PersistCheck-a2.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2.client: shutting down client
>     writer.go:29: 2020-02-23T02:46:52.086Z [WARN]  TestAgent_PersistCheck-a2.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.086Z [INFO]  TestAgent_PersistCheck-a2.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.088Z [INFO]  TestAgent_PersistCheck-a2: consul client down
>     writer.go:29: 2020-02-23T02:46:52.088Z [INFO]  TestAgent_PersistCheck-a2: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.088Z [INFO]  TestAgent_PersistCheck-a2: Stopping server: protocol=DNS address=127.0.0.1:16235 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.088Z [INFO]  TestAgent_PersistCheck-a2: Stopping server: protocol=DNS address=127.0.0.1:16235 network=udp
>     writer.go:29: 2020-02-23T02:46:52.088Z [INFO]  TestAgent_PersistCheck-a2: Stopping server: protocol=HTTP address=127.0.0.1:16236 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.088Z [INFO]  TestAgent_PersistCheck-a2: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.089Z [INFO]  TestAgent_PersistCheck-a2: Endpoints down
> === CONT  TestAgent_updateTTLCheck
> --- PASS: TestAgent_unloadChecks (0.17s)
>     writer.go:29: 2020-02-23T02:46:51.977Z [WARN]  TestAgent_unloadChecks: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.978Z [DEBUG] TestAgent_unloadChecks.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.978Z [DEBUG] TestAgent_unloadChecks.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.994Z [INFO]  TestAgent_unloadChecks.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d948c8d6-4309-5e9f-7eaf-ef2868a2be50 Address:127.0.0.1:16180}]"
>     writer.go:29: 2020-02-23T02:46:51.994Z [INFO]  TestAgent_unloadChecks.server.raft: entering follower state: follower="Node at 127.0.0.1:16180 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.995Z [INFO]  TestAgent_unloadChecks.server.serf.wan: serf: EventMemberJoin: Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.995Z [INFO]  TestAgent_unloadChecks.server.serf.lan: serf: EventMemberJoin: Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.995Z [INFO]  TestAgent_unloadChecks.server: Handled event for server in area: event=member-join server=Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.995Z [INFO]  TestAgent_unloadChecks.server: Adding LAN server: server="Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50 (Addr: tcp/127.0.0.1:16180) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.996Z [INFO]  TestAgent_unloadChecks: Started DNS server: address=127.0.0.1:16175 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.996Z [INFO]  TestAgent_unloadChecks: Started DNS server: address=127.0.0.1:16175 network=udp
>     writer.go:29: 2020-02-23T02:46:51.996Z [INFO]  TestAgent_unloadChecks: Started HTTP server: address=127.0.0.1:16176 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.996Z [INFO]  TestAgent_unloadChecks: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.040Z [WARN]  TestAgent_unloadChecks.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.040Z [INFO]  TestAgent_unloadChecks.server.raft: entering candidate state: node="Node at 127.0.0.1:16180 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.044Z [DEBUG] TestAgent_unloadChecks.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.044Z [DEBUG] TestAgent_unloadChecks.server.raft: vote granted: from=d948c8d6-4309-5e9f-7eaf-ef2868a2be50 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.044Z [INFO]  TestAgent_unloadChecks.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.044Z [INFO]  TestAgent_unloadChecks.server.raft: entering leader state: leader="Node at 127.0.0.1:16180 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.044Z [INFO]  TestAgent_unloadChecks.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.044Z [INFO]  TestAgent_unloadChecks.server: New leader elected: payload=Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50
>     writer.go:29: 2020-02-23T02:46:52.052Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_unloadChecks.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_unloadChecks.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.061Z [DEBUG] TestAgent_unloadChecks.server: Skipping self join check for node since the cluster is too small: node=Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50
>     writer.go:29: 2020-02-23T02:46:52.061Z [INFO]  TestAgent_unloadChecks.server: member joined, marking health alive: member=Node-d948c8d6-4309-5e9f-7eaf-ef2868a2be50
>     writer.go:29: 2020-02-23T02:46:52.123Z [DEBUG] TestAgent_unloadChecks: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.126Z [INFO]  TestAgent_unloadChecks: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.126Z [DEBUG] TestAgent_unloadChecks: Node info in sync
>     writer.go:29: 2020-02-23T02:46:52.134Z [DEBUG] TestAgent_unloadChecks: removed check: check=service:redis
>     writer.go:29: 2020-02-23T02:46:52.134Z [INFO]  TestAgent_unloadChecks: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.134Z [INFO]  TestAgent_unloadChecks.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.134Z [DEBUG] TestAgent_unloadChecks.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.134Z [WARN]  TestAgent_unloadChecks.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.134Z [DEBUG] TestAgent_unloadChecks.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.135Z [WARN]  TestAgent_unloadChecks.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: consul server down
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: Stopping server: protocol=DNS address=127.0.0.1:16175 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: Stopping server: protocol=DNS address=127.0.0.1:16175 network=udp
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: Stopping server: protocol=HTTP address=127.0.0.1:16176 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.137Z [INFO]  TestAgent_unloadChecks: Endpoints down
> === CONT  TestAgent_HTTPCheck_TLSSkipVerify
> --- PASS: TestAgent_AddCheck_restoresSnapshot (0.28s)
>     writer.go:29: 2020-02-23T02:46:51.914Z [WARN]  TestAgent_AddCheck_restoresSnapshot: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.914Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.914Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:51.941Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:65aa9cc3-ea87-419c-456d-a42af3cd78a2 Address:127.0.0.1:16174}]"
>     writer.go:29: 2020-02-23T02:46:51.941Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.raft: entering follower state: follower="Node at 127.0.0.1:16174 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:51.941Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.serf.wan: serf: EventMemberJoin: Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.942Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.serf.lan: serf: EventMemberJoin: Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:51.942Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server: Handled event for server in area: event=member-join server=Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:51.942Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server: Adding LAN server: server="Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2 (Addr: tcp/127.0.0.1:16174) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:51.942Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Started DNS server: address=127.0.0.1:16169 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.942Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Started DNS server: address=127.0.0.1:16169 network=udp
>     writer.go:29: 2020-02-23T02:46:51.943Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Started HTTP server: address=127.0.0.1:16170 network=tcp
>     writer.go:29: 2020-02-23T02:46:51.943Z [INFO]  TestAgent_AddCheck_restoresSnapshot: started state syncer
>     writer.go:29: 2020-02-23T02:46:51.981Z [WARN]  TestAgent_AddCheck_restoresSnapshot.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:51.981Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.raft: entering candidate state: node="Node at 127.0.0.1:16174 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:51.990Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:51.990Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.server.raft: vote granted: from=65aa9cc3-ea87-419c-456d-a42af3cd78a2 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:51.990Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:51.990Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.raft: entering leader state: leader="Node at 127.0.0.1:16174 [Leader]"
>     writer.go:29: 2020-02-23T02:46:51.990Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:51.990Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server: New leader elected: payload=Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2
>     writer.go:29: 2020-02-23T02:46:51.998Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.007Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.007Z [INFO]  TestAgent_AddCheck_restoresSnapshot.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.007Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.server: Skipping self join check for node since the cluster is too small: node=Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2
>     writer.go:29: 2020-02-23T02:46:52.007Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server: member joined, marking health alive: member=Node-65aa9cc3-ea87-419c-456d-a42af3cd78a2
>     writer.go:29: 2020-02-23T02:46:52.173Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.173Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.173Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.173Z [WARN]  TestAgent_AddCheck_restoresSnapshot.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.174Z [ERROR] TestAgent_AddCheck_restoresSnapshot.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.174Z [DEBUG] TestAgent_AddCheck_restoresSnapshot.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.178Z [WARN]  TestAgent_AddCheck_restoresSnapshot.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: consul server down
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Stopping server: protocol=DNS address=127.0.0.1:16169 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Stopping server: protocol=DNS address=127.0.0.1:16169 network=udp
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Stopping server: protocol=HTTP address=127.0.0.1:16170 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.180Z [INFO]  TestAgent_AddCheck_restoresSnapshot: Endpoints down
> === CONT  TestAgent_RemoveCheck
> --- PASS: TestAgent_updateTTLCheck (0.14s)
>     writer.go:29: 2020-02-23T02:46:52.095Z [WARN]  TestAgent_updateTTLCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.095Z [DEBUG] TestAgent_updateTTLCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.095Z [DEBUG] TestAgent_updateTTLCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.106Z [INFO]  TestAgent_updateTTLCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:787b5c69-87cb-6588-3924-8177032e0ac5 Address:127.0.0.1:16234}]"
>     writer.go:29: 2020-02-23T02:46:52.106Z [INFO]  TestAgent_updateTTLCheck.server.serf.wan: serf: EventMemberJoin: Node-787b5c69-87cb-6588-3924-8177032e0ac5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.106Z [INFO]  TestAgent_updateTTLCheck.server.serf.lan: serf: EventMemberJoin: Node-787b5c69-87cb-6588-3924-8177032e0ac5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck: Started DNS server: address=127.0.0.1:16229 network=udp
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16234 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck.server: Adding LAN server: server="Node-787b5c69-87cb-6588-3924-8177032e0ac5 (Addr: tcp/127.0.0.1:16234) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck.server: Handled event for server in area: event=member-join server=Node-787b5c69-87cb-6588-3924-8177032e0ac5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck: Started DNS server: address=127.0.0.1:16229 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck: Started HTTP server: address=127.0.0.1:16230 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.107Z [INFO]  TestAgent_updateTTLCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.155Z [WARN]  TestAgent_updateTTLCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.155Z [INFO]  TestAgent_updateTTLCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16234 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.158Z [DEBUG] TestAgent_updateTTLCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.158Z [DEBUG] TestAgent_updateTTLCheck.server.raft: vote granted: from=787b5c69-87cb-6588-3924-8177032e0ac5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.158Z [INFO]  TestAgent_updateTTLCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.158Z [INFO]  TestAgent_updateTTLCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16234 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.159Z [INFO]  TestAgent_updateTTLCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.159Z [INFO]  TestAgent_updateTTLCheck.server: New leader elected: payload=Node-787b5c69-87cb-6588-3924-8177032e0ac5
>     writer.go:29: 2020-02-23T02:46:52.165Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.179Z [INFO]  TestAgent_updateTTLCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.179Z [INFO]  TestAgent_updateTTLCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.179Z [DEBUG] TestAgent_updateTTLCheck.server: Skipping self join check for node since the cluster is too small: node=Node-787b5c69-87cb-6588-3924-8177032e0ac5
>     writer.go:29: 2020-02-23T02:46:52.179Z [INFO]  TestAgent_updateTTLCheck.server: member joined, marking health alive: member=Node-787b5c69-87cb-6588-3924-8177032e0ac5
>     writer.go:29: 2020-02-23T02:46:52.220Z [DEBUG] TestAgent_updateTTLCheck: Check status updated: check=mem status=passing
>     writer.go:29: 2020-02-23T02:46:52.220Z [DEBUG] TestAgent_updateTTLCheck: Check status updated: check=mem status=critical
>     writer.go:29: 2020-02-23T02:46:52.220Z [INFO]  TestAgent_updateTTLCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.220Z [INFO]  TestAgent_updateTTLCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.220Z [DEBUG] TestAgent_updateTTLCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.220Z [WARN]  TestAgent_updateTTLCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.220Z [ERROR] TestAgent_updateTTLCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.220Z [DEBUG] TestAgent_updateTTLCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.222Z [WARN]  TestAgent_updateTTLCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: Stopping server: protocol=DNS address=127.0.0.1:16229 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: Stopping server: protocol=DNS address=127.0.0.1:16229 network=udp
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: Stopping server: protocol=HTTP address=127.0.0.1:16230 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.224Z [INFO]  TestAgent_updateTTLCheck: Endpoints down
> === CONT  TestAgent_AddCheck_Alias_userAndSetToken
> --- PASS: TestAgent_loadChecks_token (0.42s)
>     writer.go:29: 2020-02-23T02:46:51.990Z [WARN]  TestAgent_loadChecks_token: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:51.990Z [DEBUG] TestAgent_loadChecks_token.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:51.990Z [DEBUG] TestAgent_loadChecks_token.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.005Z [INFO]  TestAgent_loadChecks_token.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:59454cb6-761f-51d3-31e1-4f3dd00eaecc Address:127.0.0.1:16204}]"
>     writer.go:29: 2020-02-23T02:46:52.005Z [INFO]  TestAgent_loadChecks_token.server.raft: entering follower state: follower="Node at 127.0.0.1:16204 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.006Z [INFO]  TestAgent_loadChecks_token.server.serf.wan: serf: EventMemberJoin: Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.006Z [INFO]  TestAgent_loadChecks_token.server.serf.lan: serf: EventMemberJoin: Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.006Z [INFO]  TestAgent_loadChecks_token.server: Adding LAN server: server="Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc (Addr: tcp/127.0.0.1:16204) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.007Z [INFO]  TestAgent_loadChecks_token.server: Handled event for server in area: event=member-join server=Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.007Z [INFO]  TestAgent_loadChecks_token: Started DNS server: address=127.0.0.1:16199 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.007Z [INFO]  TestAgent_loadChecks_token: Started DNS server: address=127.0.0.1:16199 network=udp
>     writer.go:29: 2020-02-23T02:46:52.008Z [INFO]  TestAgent_loadChecks_token: Started HTTP server: address=127.0.0.1:16200 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.008Z [INFO]  TestAgent_loadChecks_token: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.055Z [WARN]  TestAgent_loadChecks_token.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.055Z [INFO]  TestAgent_loadChecks_token.server.raft: entering candidate state: node="Node at 127.0.0.1:16204 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.060Z [DEBUG] TestAgent_loadChecks_token.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.060Z [DEBUG] TestAgent_loadChecks_token.server.raft: vote granted: from=59454cb6-761f-51d3-31e1-4f3dd00eaecc term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_loadChecks_token.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_loadChecks_token.server.raft: entering leader state: leader="Node at 127.0.0.1:16204 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_loadChecks_token.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.060Z [INFO]  TestAgent_loadChecks_token.server: New leader elected: payload=Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc
>     writer.go:29: 2020-02-23T02:46:52.068Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_loadChecks_token.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_loadChecks_token.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.077Z [DEBUG] TestAgent_loadChecks_token.server: Skipping self join check for node since the cluster is too small: node=Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc
>     writer.go:29: 2020-02-23T02:46:52.077Z [INFO]  TestAgent_loadChecks_token.server: member joined, marking health alive: member=Node-59454cb6-761f-51d3-31e1-4f3dd00eaecc
>     writer.go:29: 2020-02-23T02:46:52.339Z [DEBUG] TestAgent_loadChecks_token: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.342Z [INFO]  TestAgent_loadChecks_token: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.343Z [INFO]  TestAgent_loadChecks_token: Synced check: check=rabbitmq
>     writer.go:29: 2020-02-23T02:46:52.343Z [DEBUG] TestAgent_loadChecks_token: Node info in sync
>     writer.go:29: 2020-02-23T02:46:52.343Z [DEBUG] TestAgent_loadChecks_token: Check in sync: check=rabbitmq
>     writer.go:29: 2020-02-23T02:46:52.399Z [INFO]  TestAgent_loadChecks_token: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.399Z [INFO]  TestAgent_loadChecks_token.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.399Z [DEBUG] TestAgent_loadChecks_token.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.399Z [WARN]  TestAgent_loadChecks_token.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.399Z [DEBUG] TestAgent_loadChecks_token.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.401Z [WARN]  TestAgent_loadChecks_token.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: consul server down
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: Stopping server: protocol=DNS address=127.0.0.1:16199 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: Stopping server: protocol=DNS address=127.0.0.1:16199 network=udp
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: Stopping server: protocol=HTTP address=127.0.0.1:16200 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.406Z [INFO]  TestAgent_loadChecks_token: Endpoints down
> === CONT  TestAgent_AddCheck_Alias_userToken
> --- PASS: TestAgent_RemoveCheck (0.24s)
>     writer.go:29: 2020-02-23T02:46:52.188Z [WARN]  TestAgent_RemoveCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.188Z [DEBUG] TestAgent_RemoveCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.188Z [DEBUG] TestAgent_RemoveCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.198Z [INFO]  TestAgent_RemoveCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a058886b-09e5-1961-578c-0650ae10b500 Address:127.0.0.1:16252}]"
>     writer.go:29: 2020-02-23T02:46:52.198Z [INFO]  TestAgent_RemoveCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16252 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.198Z [INFO]  TestAgent_RemoveCheck.server.serf.wan: serf: EventMemberJoin: Node-a058886b-09e5-1961-578c-0650ae10b500.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.198Z [INFO]  TestAgent_RemoveCheck.server.serf.lan: serf: EventMemberJoin: Node-a058886b-09e5-1961-578c-0650ae10b500 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.199Z [INFO]  TestAgent_RemoveCheck.server: Adding LAN server: server="Node-a058886b-09e5-1961-578c-0650ae10b500 (Addr: tcp/127.0.0.1:16252) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.199Z [INFO]  TestAgent_RemoveCheck: Started DNS server: address=127.0.0.1:16247 network=udp
>     writer.go:29: 2020-02-23T02:46:52.199Z [INFO]  TestAgent_RemoveCheck.server: Handled event for server in area: event=member-join server=Node-a058886b-09e5-1961-578c-0650ae10b500.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.199Z [INFO]  TestAgent_RemoveCheck: Started DNS server: address=127.0.0.1:16247 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.199Z [INFO]  TestAgent_RemoveCheck: Started HTTP server: address=127.0.0.1:16248 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.199Z [INFO]  TestAgent_RemoveCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.250Z [WARN]  TestAgent_RemoveCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.250Z [INFO]  TestAgent_RemoveCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16252 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.254Z [DEBUG] TestAgent_RemoveCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.254Z [DEBUG] TestAgent_RemoveCheck.server.raft: vote granted: from=a058886b-09e5-1961-578c-0650ae10b500 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.254Z [INFO]  TestAgent_RemoveCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.254Z [INFO]  TestAgent_RemoveCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16252 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.254Z [INFO]  TestAgent_RemoveCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.254Z [INFO]  TestAgent_RemoveCheck.server: New leader elected: payload=Node-a058886b-09e5-1961-578c-0650ae10b500
>     writer.go:29: 2020-02-23T02:46:52.261Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.269Z [INFO]  TestAgent_RemoveCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.269Z [INFO]  TestAgent_RemoveCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.269Z [DEBUG] TestAgent_RemoveCheck.server: Skipping self join check for node since the cluster is too small: node=Node-a058886b-09e5-1961-578c-0650ae10b500
>     writer.go:29: 2020-02-23T02:46:52.269Z [INFO]  TestAgent_RemoveCheck.server: member joined, marking health alive: member=Node-a058886b-09e5-1961-578c-0650ae10b500
>     writer.go:29: 2020-02-23T02:46:52.414Z [DEBUG] TestAgent_RemoveCheck: removed check: check=mem
>     writer.go:29: 2020-02-23T02:46:52.414Z [DEBUG] TestAgent_RemoveCheck: removed check: check=mem
>     writer.go:29: 2020-02-23T02:46:52.414Z [INFO]  TestAgent_RemoveCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.414Z [INFO]  TestAgent_RemoveCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.414Z [DEBUG] TestAgent_RemoveCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.414Z [WARN]  TestAgent_RemoveCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.414Z [ERROR] TestAgent_RemoveCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.414Z [DEBUG] TestAgent_RemoveCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.416Z [WARN]  TestAgent_RemoveCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: Stopping server: protocol=DNS address=127.0.0.1:16247 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: Stopping server: protocol=DNS address=127.0.0.1:16247 network=udp
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: Stopping server: protocol=HTTP address=127.0.0.1:16248 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.418Z [INFO]  TestAgent_RemoveCheck: Endpoints down
> === CONT  TestAgent_AddCheck_Alias_setToken
> --- PASS: TestAgent_AddCheck_Alias_userAndSetToken (0.22s)
>     writer.go:29: 2020-02-23T02:46:52.231Z [WARN]  TestAgent_AddCheck_Alias_userAndSetToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.232Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.232Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.254Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f25282a4-1d87-4408-ad29-ce3224004ceb Address:127.0.0.1:16246}]"
>     writer.go:29: 2020-02-23T02:46:52.254Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16246 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.255Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.serf.wan: serf: EventMemberJoin: Node-f25282a4-1d87-4408-ad29-ce3224004ceb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.255Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.serf.lan: serf: EventMemberJoin: Node-f25282a4-1d87-4408-ad29-ce3224004ceb 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.256Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server: Handled event for server in area: event=member-join server=Node-f25282a4-1d87-4408-ad29-ce3224004ceb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.256Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server: Adding LAN server: server="Node-f25282a4-1d87-4408-ad29-ce3224004ceb (Addr: tcp/127.0.0.1:16246) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.256Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Started DNS server: address=127.0.0.1:16241 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.256Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Started DNS server: address=127.0.0.1:16241 network=udp
>     writer.go:29: 2020-02-23T02:46:52.256Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Started HTTP server: address=127.0.0.1:16242 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.256Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.301Z [WARN]  TestAgent_AddCheck_Alias_userAndSetToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.301Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16246 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.305Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.305Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.server.raft: vote granted: from=f25282a4-1d87-4408-ad29-ce3224004ceb term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.305Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.305Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16246 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.305Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.305Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server: New leader elected: payload=Node-f25282a4-1d87-4408-ad29-ce3224004ceb
>     writer.go:29: 2020-02-23T02:46:52.312Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.323Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.323Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.323Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.server: Skipping self join check for node since the cluster is too small: node=Node-f25282a4-1d87-4408-ad29-ce3224004ceb
>     writer.go:29: 2020-02-23T02:46:52.323Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server: member joined, marking health alive: member=Node-f25282a4-1d87-4408-ad29-ce3224004ceb
>     writer.go:29: 2020-02-23T02:46:52.436Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.436Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.436Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.436Z [WARN]  TestAgent_AddCheck_Alias_userAndSetToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.436Z [ERROR] TestAgent_AddCheck_Alias_userAndSetToken.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.436Z [DEBUG] TestAgent_AddCheck_Alias_userAndSetToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.438Z [WARN]  TestAgent_AddCheck_Alias_userAndSetToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: consul server down
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Stopping server: protocol=DNS address=127.0.0.1:16241 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Stopping server: protocol=DNS address=127.0.0.1:16241 network=udp
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Stopping server: protocol=HTTP address=127.0.0.1:16242 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_userAndSetToken: Endpoints down
> === CONT  TestAgent_AddCheck_Alias
> --- PASS: TestAgent_HTTPCheck_TLSSkipVerify (0.44s)
>     writer.go:29: 2020-02-23T02:46:52.152Z [WARN]  TestAgent_HTTPCheck_TLSSkipVerify: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.152Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.153Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.170Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1 Address:127.0.0.1:16216}]"
>     writer.go:29: 2020-02-23T02:46:52.171Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.serf.wan: serf: EventMemberJoin: Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.171Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.serf.lan: serf: EventMemberJoin: Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.172Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Started DNS server: address=127.0.0.1:16211 network=udp
>     writer.go:29: 2020-02-23T02:46:52.172Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.raft: entering follower state: follower="Node at 127.0.0.1:16216 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.172Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server: Adding LAN server: server="Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1 (Addr: tcp/127.0.0.1:16216) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.172Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server: Handled event for server in area: event=member-join server=Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.172Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Started DNS server: address=127.0.0.1:16211 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.173Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Started HTTP server: address=127.0.0.1:16212 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.173Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.214Z [WARN]  TestAgent_HTTPCheck_TLSSkipVerify.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.214Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.raft: entering candidate state: node="Node at 127.0.0.1:16216 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.217Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.217Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.server.raft: vote granted: from=4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.217Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.217Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.raft: entering leader state: leader="Node at 127.0.0.1:16216 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.217Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.217Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server: New leader elected: payload=Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1
>     writer.go:29: 2020-02-23T02:46:52.227Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.236Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.236Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.236Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.server: Skipping self join check for node since the cluster is too small: node=Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1
>     writer.go:29: 2020-02-23T02:46:52.236Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server: member joined, marking health alive: member=Node-4cc3b55a-d7a9-a633-4a61-4d5f6376b1a1
>     writer.go:29: 2020-02-23T02:46:52.245Z [WARN]  TestAgent_HTTPCheck_TLSSkipVerify: check has interval below minimum: check=tls minimum_interval=1s
>     writer.go:29: 2020-02-23T02:46:52.245Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.tlsutil: OutgoingTLSConfigForCheck: version=1
>     writer.go:29: 2020-02-23T02:46:52.271Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.273Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Synced check: check=tls
>     writer.go:29: 2020-02-23T02:46:52.569Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify: Check status updated: check=tls status=passing
>     writer.go:29: 2020-02-23T02:46:52.574Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.574Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.574Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.574Z [WARN]  TestAgent_HTTPCheck_TLSSkipVerify.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.574Z [DEBUG] TestAgent_HTTPCheck_TLSSkipVerify.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.576Z [WARN]  TestAgent_HTTPCheck_TLSSkipVerify.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: consul server down
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Stopping server: protocol=DNS address=127.0.0.1:16211 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Stopping server: protocol=DNS address=127.0.0.1:16211 network=udp
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Stopping server: protocol=HTTP address=127.0.0.1:16212 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.578Z [INFO]  TestAgent_HTTPCheck_TLSSkipVerify: Endpoints down
> === CONT  TestAgent_AddCheck_GRPC
> --- PASS: TestAgent_AddCheck_Alias_setToken (0.25s)
>     writer.go:29: 2020-02-23T02:46:52.426Z [WARN]  TestAgent_AddCheck_Alias_setToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.426Z [DEBUG] TestAgent_AddCheck_Alias_setToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.426Z [DEBUG] TestAgent_AddCheck_Alias_setToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.439Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:83eb04a6-25a4-bbb2-cde6-b60eea427510 Address:127.0.0.1:16264}]"
>     writer.go:29: 2020-02-23T02:46:52.439Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16264 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.440Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.serf.wan: serf: EventMemberJoin: Node-83eb04a6-25a4-bbb2-cde6-b60eea427510.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.440Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.serf.lan: serf: EventMemberJoin: Node-83eb04a6-25a4-bbb2-cde6-b60eea427510 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.440Z [INFO]  TestAgent_AddCheck_Alias_setToken.server: Adding LAN server: server="Node-83eb04a6-25a4-bbb2-cde6-b60eea427510 (Addr: tcp/127.0.0.1:16264) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.440Z [INFO]  TestAgent_AddCheck_Alias_setToken: Started DNS server: address=127.0.0.1:16259 network=udp
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_setToken.server: Handled event for server in area: event=member-join server=Node-83eb04a6-25a4-bbb2-cde6-b60eea427510.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_setToken: Started DNS server: address=127.0.0.1:16259 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_setToken: Started HTTP server: address=127.0.0.1:16260 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.441Z [INFO]  TestAgent_AddCheck_Alias_setToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.498Z [WARN]  TestAgent_AddCheck_Alias_setToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.499Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16264 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.502Z [DEBUG] TestAgent_AddCheck_Alias_setToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.502Z [DEBUG] TestAgent_AddCheck_Alias_setToken.server.raft: vote granted: from=83eb04a6-25a4-bbb2-cde6-b60eea427510 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.503Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.503Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16264 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.503Z [INFO]  TestAgent_AddCheck_Alias_setToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.503Z [INFO]  TestAgent_AddCheck_Alias_setToken.server: New leader elected: payload=Node-83eb04a6-25a4-bbb2-cde6-b60eea427510
>     writer.go:29: 2020-02-23T02:46:52.514Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.525Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.525Z [INFO]  TestAgent_AddCheck_Alias_setToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.525Z [DEBUG] TestAgent_AddCheck_Alias_setToken.server: Skipping self join check for node since the cluster is too small: node=Node-83eb04a6-25a4-bbb2-cde6-b60eea427510
>     writer.go:29: 2020-02-23T02:46:52.525Z [INFO]  TestAgent_AddCheck_Alias_setToken.server: member joined, marking health alive: member=Node-83eb04a6-25a4-bbb2-cde6-b60eea427510
>     writer.go:29: 2020-02-23T02:46:52.537Z [DEBUG] TestAgent_AddCheck_Alias_setToken: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.540Z [INFO]  TestAgent_AddCheck_Alias_setToken: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.540Z [DEBUG] TestAgent_AddCheck_Alias_setToken: Node info in sync
>     writer.go:29: 2020-02-23T02:46:52.663Z [INFO]  TestAgent_AddCheck_Alias_setToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.664Z [INFO]  TestAgent_AddCheck_Alias_setToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.664Z [DEBUG] TestAgent_AddCheck_Alias_setToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.664Z [WARN]  TestAgent_AddCheck_Alias_setToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.664Z [DEBUG] TestAgent_AddCheck_Alias_setToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.666Z [WARN]  TestAgent_AddCheck_Alias_setToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken: consul server down
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken: Stopping server: protocol=DNS address=127.0.0.1:16259 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken: Stopping server: protocol=DNS address=127.0.0.1:16259 network=udp
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken: Stopping server: protocol=HTTP address=127.0.0.1:16260 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.667Z [INFO]  TestAgent_AddCheck_Alias_setToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.668Z [INFO]  TestAgent_AddCheck_Alias_setToken: Endpoints down
> === CONT  TestAgent_AddCheck_ExecRemoteDisable
> --- PASS: TestAgent_AddCheck_Alias_userToken (0.43s)
>     writer.go:29: 2020-02-23T02:46:52.414Z [WARN]  TestAgent_AddCheck_Alias_userToken: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.414Z [DEBUG] TestAgent_AddCheck_Alias_userToken.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.415Z [DEBUG] TestAgent_AddCheck_Alias_userToken.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.430Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f1277684-1f77-1c0e-2734-40e9706779bd Address:127.0.0.1:16258}]"
>     writer.go:29: 2020-02-23T02:46:52.430Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.serf.wan: serf: EventMemberJoin: Node-f1277684-1f77-1c0e-2734-40e9706779bd.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.431Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.serf.lan: serf: EventMemberJoin: Node-f1277684-1f77-1c0e-2734-40e9706779bd 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.431Z [INFO]  TestAgent_AddCheck_Alias_userToken: Started DNS server: address=127.0.0.1:16253 network=udp
>     writer.go:29: 2020-02-23T02:46:52.431Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.raft: entering follower state: follower="Node at 127.0.0.1:16258 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.431Z [INFO]  TestAgent_AddCheck_Alias_userToken.server: Handled event for server in area: event=member-join server=Node-f1277684-1f77-1c0e-2734-40e9706779bd.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.431Z [INFO]  TestAgent_AddCheck_Alias_userToken.server: Adding LAN server: server="Node-f1277684-1f77-1c0e-2734-40e9706779bd (Addr: tcp/127.0.0.1:16258) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.431Z [INFO]  TestAgent_AddCheck_Alias_userToken: Started DNS server: address=127.0.0.1:16253 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.432Z [INFO]  TestAgent_AddCheck_Alias_userToken: Started HTTP server: address=127.0.0.1:16254 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.432Z [INFO]  TestAgent_AddCheck_Alias_userToken: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.499Z [WARN]  TestAgent_AddCheck_Alias_userToken.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.499Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.raft: entering candidate state: node="Node at 127.0.0.1:16258 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.502Z [DEBUG] TestAgent_AddCheck_Alias_userToken.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.502Z [DEBUG] TestAgent_AddCheck_Alias_userToken.server.raft: vote granted: from=f1277684-1f77-1c0e-2734-40e9706779bd term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.502Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.502Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.raft: entering leader state: leader="Node at 127.0.0.1:16258 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.502Z [INFO]  TestAgent_AddCheck_Alias_userToken.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.502Z [INFO]  TestAgent_AddCheck_Alias_userToken.server: New leader elected: payload=Node-f1277684-1f77-1c0e-2734-40e9706779bd
>     writer.go:29: 2020-02-23T02:46:52.515Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.524Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.524Z [INFO]  TestAgent_AddCheck_Alias_userToken.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.524Z [DEBUG] TestAgent_AddCheck_Alias_userToken.server: Skipping self join check for node since the cluster is too small: node=Node-f1277684-1f77-1c0e-2734-40e9706779bd
>     writer.go:29: 2020-02-23T02:46:52.524Z [INFO]  TestAgent_AddCheck_Alias_userToken.server: member joined, marking health alive: member=Node-f1277684-1f77-1c0e-2734-40e9706779bd
>     writer.go:29: 2020-02-23T02:46:52.623Z [DEBUG] TestAgent_AddCheck_Alias_userToken: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.626Z [INFO]  TestAgent_AddCheck_Alias_userToken: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.832Z [INFO]  TestAgent_AddCheck_Alias_userToken: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.832Z [INFO]  TestAgent_AddCheck_Alias_userToken.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.832Z [DEBUG] TestAgent_AddCheck_Alias_userToken.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.832Z [WARN]  TestAgent_AddCheck_Alias_userToken.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.833Z [DEBUG] TestAgent_AddCheck_Alias_userToken.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.834Z [WARN]  TestAgent_AddCheck_Alias_userToken.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: consul server down
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: Stopping server: protocol=DNS address=127.0.0.1:16253 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: Stopping server: protocol=DNS address=127.0.0.1:16253 network=udp
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: Stopping server: protocol=HTTP address=127.0.0.1:16254 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.836Z [INFO]  TestAgent_AddCheck_Alias_userToken: Endpoints down
> === CONT  TestAgent_AddCheck_ExecDisable
> --- PASS: TestAgent_AddCheck_Alias (0.40s)
>     writer.go:29: 2020-02-23T02:46:52.448Z [WARN]  TestAgent_AddCheck_Alias: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.449Z [DEBUG] TestAgent_AddCheck_Alias.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.449Z [DEBUG] TestAgent_AddCheck_Alias.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.459Z [INFO]  TestAgent_AddCheck_Alias.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2997819b-0ef8-ca44-b229-ab0f33037364 Address:127.0.0.1:16276}]"
>     writer.go:29: 2020-02-23T02:46:52.460Z [INFO]  TestAgent_AddCheck_Alias.server.serf.wan: serf: EventMemberJoin: Node-2997819b-0ef8-ca44-b229-ab0f33037364.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.460Z [INFO]  TestAgent_AddCheck_Alias.server.serf.lan: serf: EventMemberJoin: Node-2997819b-0ef8-ca44-b229-ab0f33037364 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.460Z [INFO]  TestAgent_AddCheck_Alias: Started DNS server: address=127.0.0.1:16271 network=udp
>     writer.go:29: 2020-02-23T02:46:52.460Z [INFO]  TestAgent_AddCheck_Alias.server.raft: entering follower state: follower="Node at 127.0.0.1:16276 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.461Z [INFO]  TestAgent_AddCheck_Alias.server: Adding LAN server: server="Node-2997819b-0ef8-ca44-b229-ab0f33037364 (Addr: tcp/127.0.0.1:16276) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.461Z [INFO]  TestAgent_AddCheck_Alias.server: Handled event for server in area: event=member-join server=Node-2997819b-0ef8-ca44-b229-ab0f33037364.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.461Z [INFO]  TestAgent_AddCheck_Alias: Started DNS server: address=127.0.0.1:16271 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.461Z [INFO]  TestAgent_AddCheck_Alias: Started HTTP server: address=127.0.0.1:16272 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.461Z [INFO]  TestAgent_AddCheck_Alias: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.517Z [WARN]  TestAgent_AddCheck_Alias.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.517Z [INFO]  TestAgent_AddCheck_Alias.server.raft: entering candidate state: node="Node at 127.0.0.1:16276 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.523Z [DEBUG] TestAgent_AddCheck_Alias.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.523Z [DEBUG] TestAgent_AddCheck_Alias.server.raft: vote granted: from=2997819b-0ef8-ca44-b229-ab0f33037364 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.523Z [INFO]  TestAgent_AddCheck_Alias.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.523Z [INFO]  TestAgent_AddCheck_Alias.server.raft: entering leader state: leader="Node at 127.0.0.1:16276 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.523Z [INFO]  TestAgent_AddCheck_Alias.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.523Z [INFO]  TestAgent_AddCheck_Alias.server: New leader elected: payload=Node-2997819b-0ef8-ca44-b229-ab0f33037364
>     writer.go:29: 2020-02-23T02:46:52.540Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.547Z [INFO]  TestAgent_AddCheck_Alias.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.548Z [INFO]  TestAgent_AddCheck_Alias.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.548Z [DEBUG] TestAgent_AddCheck_Alias.server: Skipping self join check for node since the cluster is too small: node=Node-2997819b-0ef8-ca44-b229-ab0f33037364
>     writer.go:29: 2020-02-23T02:46:52.548Z [INFO]  TestAgent_AddCheck_Alias.server: member joined, marking health alive: member=Node-2997819b-0ef8-ca44-b229-ab0f33037364
>     writer.go:29: 2020-02-23T02:46:52.560Z [DEBUG] TestAgent_AddCheck_Alias: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.563Z [INFO]  TestAgent_AddCheck_Alias: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.563Z [DEBUG] TestAgent_AddCheck_Alias: Node info in sync
>     writer.go:29: 2020-02-23T02:46:52.841Z [INFO]  TestAgent_AddCheck_Alias: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.841Z [INFO]  TestAgent_AddCheck_Alias.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.841Z [DEBUG] TestAgent_AddCheck_Alias.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.841Z [WARN]  TestAgent_AddCheck_Alias.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.841Z [DEBUG] TestAgent_AddCheck_Alias.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.843Z [WARN]  TestAgent_AddCheck_Alias.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: consul server down
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: Stopping server: protocol=DNS address=127.0.0.1:16271 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: Stopping server: protocol=DNS address=127.0.0.1:16271 network=udp
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: Stopping server: protocol=HTTP address=127.0.0.1:16272 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.845Z [INFO]  TestAgent_AddCheck_Alias: Endpoints down
> === CONT  TestAgent_AddCheck_RestoreState
> --- PASS: TestAgent_AddCheck_ExecRemoteDisable (0.21s)
>     writer.go:29: 2020-02-23T02:46:52.675Z [WARN]  TestAgent_AddCheck_ExecRemoteDisable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.675Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.675Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.684Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2cb561b9-3621-d613-b724-1582840ae341 Address:127.0.0.1:16282}]"
>     writer.go:29: 2020-02-23T02:46:52.684Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.raft: entering follower state: follower="Node at 127.0.0.1:16282 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.685Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.serf.wan: serf: EventMemberJoin: Node-2cb561b9-3621-d613-b724-1582840ae341.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.685Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.serf.lan: serf: EventMemberJoin: Node-2cb561b9-3621-d613-b724-1582840ae341 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.685Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server: Handled event for server in area: event=member-join server=Node-2cb561b9-3621-d613-b724-1582840ae341.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.685Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server: Adding LAN server: server="Node-2cb561b9-3621-d613-b724-1582840ae341 (Addr: tcp/127.0.0.1:16282) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.685Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Started DNS server: address=127.0.0.1:16277 network=udp
>     writer.go:29: 2020-02-23T02:46:52.686Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Started DNS server: address=127.0.0.1:16277 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.686Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Started HTTP server: address=127.0.0.1:16278 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.686Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.743Z [WARN]  TestAgent_AddCheck_ExecRemoteDisable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.743Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.raft: entering candidate state: node="Node at 127.0.0.1:16282 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.746Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.746Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.server.raft: vote granted: from=2cb561b9-3621-d613-b724-1582840ae341 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.746Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.747Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.raft: entering leader state: leader="Node at 127.0.0.1:16282 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.747Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.747Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server: New leader elected: payload=Node-2cb561b9-3621-d613-b724-1582840ae341
>     writer.go:29: 2020-02-23T02:46:52.757Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.767Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.767Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.768Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.server: Skipping self join check for node since the cluster is too small: node=Node-2cb561b9-3621-d613-b724-1582840ae341
>     writer.go:29: 2020-02-23T02:46:52.768Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server: member joined, marking health alive: member=Node-2cb561b9-3621-d613-b724-1582840ae341
>     writer.go:29: 2020-02-23T02:46:52.869Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.870Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.870Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.870Z [WARN]  TestAgent_AddCheck_ExecRemoteDisable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.870Z [ERROR] TestAgent_AddCheck_ExecRemoteDisable.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.870Z [DEBUG] TestAgent_AddCheck_ExecRemoteDisable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.872Z [WARN]  TestAgent_AddCheck_ExecRemoteDisable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: consul server down
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Stopping server: protocol=DNS address=127.0.0.1:16277 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Stopping server: protocol=DNS address=127.0.0.1:16277 network=udp
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Stopping server: protocol=HTTP address=127.0.0.1:16278 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.873Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.874Z [INFO]  TestAgent_AddCheck_ExecRemoteDisable: Endpoints down
> === CONT  TestAgent_AddCheck_MissingService
> --- PASS: TestAgent_AddCheck_GRPC (0.30s)
>     writer.go:29: 2020-02-23T02:46:52.585Z [WARN]  TestAgent_AddCheck_GRPC: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.585Z [DEBUG] TestAgent_AddCheck_GRPC.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.585Z [DEBUG] TestAgent_AddCheck_GRPC.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.596Z [INFO]  TestAgent_AddCheck_GRPC.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:38572186-7a7d-7b32-88b7-c62005b01f85 Address:127.0.0.1:16270}]"
>     writer.go:29: 2020-02-23T02:46:52.596Z [INFO]  TestAgent_AddCheck_GRPC.server.serf.wan: serf: EventMemberJoin: Node-38572186-7a7d-7b32-88b7-c62005b01f85.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.597Z [INFO]  TestAgent_AddCheck_GRPC.server.serf.lan: serf: EventMemberJoin: Node-38572186-7a7d-7b32-88b7-c62005b01f85 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.597Z [INFO]  TestAgent_AddCheck_GRPC: Started DNS server: address=127.0.0.1:16265 network=udp
>     writer.go:29: 2020-02-23T02:46:52.597Z [INFO]  TestAgent_AddCheck_GRPC.server.raft: entering follower state: follower="Node at 127.0.0.1:16270 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.597Z [INFO]  TestAgent_AddCheck_GRPC.server: Adding LAN server: server="Node-38572186-7a7d-7b32-88b7-c62005b01f85 (Addr: tcp/127.0.0.1:16270) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.597Z [INFO]  TestAgent_AddCheck_GRPC.server: Handled event for server in area: event=member-join server=Node-38572186-7a7d-7b32-88b7-c62005b01f85.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.597Z [INFO]  TestAgent_AddCheck_GRPC: Started DNS server: address=127.0.0.1:16265 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.598Z [INFO]  TestAgent_AddCheck_GRPC: Started HTTP server: address=127.0.0.1:16266 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.598Z [INFO]  TestAgent_AddCheck_GRPC: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.650Z [WARN]  TestAgent_AddCheck_GRPC.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.650Z [INFO]  TestAgent_AddCheck_GRPC.server.raft: entering candidate state: node="Node at 127.0.0.1:16270 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.654Z [DEBUG] TestAgent_AddCheck_GRPC.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.654Z [DEBUG] TestAgent_AddCheck_GRPC.server.raft: vote granted: from=38572186-7a7d-7b32-88b7-c62005b01f85 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.654Z [INFO]  TestAgent_AddCheck_GRPC.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.654Z [INFO]  TestAgent_AddCheck_GRPC.server.raft: entering leader state: leader="Node at 127.0.0.1:16270 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.654Z [INFO]  TestAgent_AddCheck_GRPC.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.654Z [INFO]  TestAgent_AddCheck_GRPC.server: New leader elected: payload=Node-38572186-7a7d-7b32-88b7-c62005b01f85
>     writer.go:29: 2020-02-23T02:46:52.661Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.674Z [INFO]  TestAgent_AddCheck_GRPC.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.674Z [INFO]  TestAgent_AddCheck_GRPC.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.674Z [DEBUG] TestAgent_AddCheck_GRPC.server: Skipping self join check for node since the cluster is too small: node=Node-38572186-7a7d-7b32-88b7-c62005b01f85
>     writer.go:29: 2020-02-23T02:46:52.674Z [INFO]  TestAgent_AddCheck_GRPC.server: member joined, marking health alive: member=Node-38572186-7a7d-7b32-88b7-c62005b01f85
>     writer.go:29: 2020-02-23T02:46:52.870Z [INFO]  TestAgent_AddCheck_GRPC: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:52.870Z [INFO]  TestAgent_AddCheck_GRPC.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:52.870Z [DEBUG] TestAgent_AddCheck_GRPC.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.870Z [WARN]  TestAgent_AddCheck_GRPC.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.870Z [ERROR] TestAgent_AddCheck_GRPC.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:52.870Z [DEBUG] TestAgent_AddCheck_GRPC.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.872Z [WARN]  TestAgent_AddCheck_GRPC.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: consul server down
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: shutdown complete
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: Stopping server: protocol=DNS address=127.0.0.1:16265 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: Stopping server: protocol=DNS address=127.0.0.1:16265 network=udp
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: Stopping server: protocol=HTTP address=127.0.0.1:16266 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:52.879Z [INFO]  TestAgent_AddCheck_GRPC: Endpoints down
> === CONT  TestAgent_AddCheck_MinInterval
> --- PASS: TestAgent_AddCheck_MissingService (0.19s)
>     writer.go:29: 2020-02-23T02:46:52.895Z [WARN]  TestAgent_AddCheck_MissingService: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.895Z [DEBUG] TestAgent_AddCheck_MissingService.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.895Z [DEBUG] TestAgent_AddCheck_MissingService.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.906Z [INFO]  TestAgent_AddCheck_MissingService.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2e88d365-7543-8c4e-71e5-7a24ff11c871 Address:127.0.0.1:16312}]"
>     writer.go:29: 2020-02-23T02:46:52.906Z [INFO]  TestAgent_AddCheck_MissingService.server.serf.wan: serf: EventMemberJoin: Node-2e88d365-7543-8c4e-71e5-7a24ff11c871.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.907Z [INFO]  TestAgent_AddCheck_MissingService.server.serf.lan: serf: EventMemberJoin: Node-2e88d365-7543-8c4e-71e5-7a24ff11c871 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.907Z [INFO]  TestAgent_AddCheck_MissingService: Started DNS server: address=127.0.0.1:16307 network=udp
>     writer.go:29: 2020-02-23T02:46:52.907Z [INFO]  TestAgent_AddCheck_MissingService.server.raft: entering follower state: follower="Node at 127.0.0.1:16312 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.907Z [INFO]  TestAgent_AddCheck_MissingService.server: Adding LAN server: server="Node-2e88d365-7543-8c4e-71e5-7a24ff11c871 (Addr: tcp/127.0.0.1:16312) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.907Z [INFO]  TestAgent_AddCheck_MissingService.server: Handled event for server in area: event=member-join server=Node-2e88d365-7543-8c4e-71e5-7a24ff11c871.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.907Z [INFO]  TestAgent_AddCheck_MissingService: Started DNS server: address=127.0.0.1:16307 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.908Z [INFO]  TestAgent_AddCheck_MissingService: Started HTTP server: address=127.0.0.1:16308 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.908Z [INFO]  TestAgent_AddCheck_MissingService: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.972Z [WARN]  TestAgent_AddCheck_MissingService.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.972Z [INFO]  TestAgent_AddCheck_MissingService.server.raft: entering candidate state: node="Node at 127.0.0.1:16312 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.976Z [DEBUG] TestAgent_AddCheck_MissingService.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.976Z [DEBUG] TestAgent_AddCheck_MissingService.server.raft: vote granted: from=2e88d365-7543-8c4e-71e5-7a24ff11c871 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.976Z [INFO]  TestAgent_AddCheck_MissingService.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.976Z [INFO]  TestAgent_AddCheck_MissingService.server.raft: entering leader state: leader="Node at 127.0.0.1:16312 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.976Z [INFO]  TestAgent_AddCheck_MissingService.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.977Z [INFO]  TestAgent_AddCheck_MissingService.server: New leader elected: payload=Node-2e88d365-7543-8c4e-71e5-7a24ff11c871
>     writer.go:29: 2020-02-23T02:46:52.987Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.996Z [INFO]  TestAgent_AddCheck_MissingService.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.996Z [INFO]  TestAgent_AddCheck_MissingService.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.996Z [DEBUG] TestAgent_AddCheck_MissingService.server: Skipping self join check for node since the cluster is too small: node=Node-2e88d365-7543-8c4e-71e5-7a24ff11c871
>     writer.go:29: 2020-02-23T02:46:52.996Z [INFO]  TestAgent_AddCheck_MissingService.server: member joined, marking health alive: member=Node-2e88d365-7543-8c4e-71e5-7a24ff11c871
>     writer.go:29: 2020-02-23T02:46:53.062Z [INFO]  TestAgent_AddCheck_MissingService: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.062Z [INFO]  TestAgent_AddCheck_MissingService.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.062Z [DEBUG] TestAgent_AddCheck_MissingService.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.062Z [WARN]  TestAgent_AddCheck_MissingService.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.062Z [DEBUG] TestAgent_AddCheck_MissingService.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.062Z [ERROR] TestAgent_AddCheck_MissingService.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:53.064Z [WARN]  TestAgent_AddCheck_MissingService.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: consul server down
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: Stopping server: protocol=DNS address=127.0.0.1:16307 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: Stopping server: protocol=DNS address=127.0.0.1:16307 network=udp
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: Stopping server: protocol=HTTP address=127.0.0.1:16308 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.066Z [INFO]  TestAgent_AddCheck_MissingService: Endpoints down
> === CONT  TestAgent_AddCheck_StartPassing
> --- PASS: TestAgent_AddCheck_RestoreState (0.24s)
>     writer.go:29: 2020-02-23T02:46:52.852Z [WARN]  TestAgent_AddCheck_RestoreState: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.853Z [DEBUG] TestAgent_AddCheck_RestoreState.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.854Z [DEBUG] TestAgent_AddCheck_RestoreState.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.867Z [INFO]  TestAgent_AddCheck_RestoreState.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:933f22e2-28e9-5b46-cc28-e803f98cfa51 Address:127.0.0.1:16288}]"
>     writer.go:29: 2020-02-23T02:46:52.867Z [INFO]  TestAgent_AddCheck_RestoreState.server.serf.wan: serf: EventMemberJoin: Node-933f22e2-28e9-5b46-cc28-e803f98cfa51.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.867Z [INFO]  TestAgent_AddCheck_RestoreState.server.serf.lan: serf: EventMemberJoin: Node-933f22e2-28e9-5b46-cc28-e803f98cfa51 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.867Z [INFO]  TestAgent_AddCheck_RestoreState: Started DNS server: address=127.0.0.1:16283 network=udp
>     writer.go:29: 2020-02-23T02:46:52.867Z [INFO]  TestAgent_AddCheck_RestoreState.server.raft: entering follower state: follower="Node at 127.0.0.1:16288 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.868Z [INFO]  TestAgent_AddCheck_RestoreState.server: Adding LAN server: server="Node-933f22e2-28e9-5b46-cc28-e803f98cfa51 (Addr: tcp/127.0.0.1:16288) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.868Z [INFO]  TestAgent_AddCheck_RestoreState.server: Handled event for server in area: event=member-join server=Node-933f22e2-28e9-5b46-cc28-e803f98cfa51.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.868Z [INFO]  TestAgent_AddCheck_RestoreState: Started DNS server: address=127.0.0.1:16283 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.868Z [INFO]  TestAgent_AddCheck_RestoreState: Started HTTP server: address=127.0.0.1:16284 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.868Z [INFO]  TestAgent_AddCheck_RestoreState: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.914Z [WARN]  TestAgent_AddCheck_RestoreState.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.914Z [INFO]  TestAgent_AddCheck_RestoreState.server.raft: entering candidate state: node="Node at 127.0.0.1:16288 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.920Z [DEBUG] TestAgent_AddCheck_RestoreState.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.920Z [DEBUG] TestAgent_AddCheck_RestoreState.server.raft: vote granted: from=933f22e2-28e9-5b46-cc28-e803f98cfa51 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.920Z [INFO]  TestAgent_AddCheck_RestoreState.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.920Z [INFO]  TestAgent_AddCheck_RestoreState.server.raft: entering leader state: leader="Node at 127.0.0.1:16288 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.920Z [INFO]  TestAgent_AddCheck_RestoreState.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.920Z [INFO]  TestAgent_AddCheck_RestoreState.server: New leader elected: payload=Node-933f22e2-28e9-5b46-cc28-e803f98cfa51
>     writer.go:29: 2020-02-23T02:46:52.965Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.977Z [INFO]  TestAgent_AddCheck_RestoreState.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.977Z [INFO]  TestAgent_AddCheck_RestoreState.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.977Z [DEBUG] TestAgent_AddCheck_RestoreState.server: Skipping self join check for node since the cluster is too small: node=Node-933f22e2-28e9-5b46-cc28-e803f98cfa51
>     writer.go:29: 2020-02-23T02:46:52.977Z [INFO]  TestAgent_AddCheck_RestoreState.server: member joined, marking health alive: member=Node-933f22e2-28e9-5b46-cc28-e803f98cfa51
>     writer.go:29: 2020-02-23T02:46:52.985Z [DEBUG] TestAgent_AddCheck_RestoreState: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.988Z [INFO]  TestAgent_AddCheck_RestoreState: Synced node info
>     writer.go:29: 2020-02-23T02:46:52.988Z [DEBUG] TestAgent_AddCheck_RestoreState: Node info in sync
>     writer.go:29: 2020-02-23T02:46:53.072Z [INFO]  TestAgent_AddCheck_RestoreState: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.072Z [INFO]  TestAgent_AddCheck_RestoreState.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.072Z [DEBUG] TestAgent_AddCheck_RestoreState.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.072Z [WARN]  TestAgent_AddCheck_RestoreState.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.072Z [DEBUG] TestAgent_AddCheck_RestoreState.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.079Z [WARN]  TestAgent_AddCheck_RestoreState.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: consul server down
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: Stopping server: protocol=DNS address=127.0.0.1:16283 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: Stopping server: protocol=DNS address=127.0.0.1:16283 network=udp
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: Stopping server: protocol=HTTP address=127.0.0.1:16284 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.081Z [INFO]  TestAgent_AddCheck_RestoreState: Endpoints down
> === CONT  TestAgent_AddCheck
> --- PASS: TestAgent_AddCheck_ExecDisable (0.38s)
>     writer.go:29: 2020-02-23T02:46:52.845Z [WARN]  TestAgent_AddCheck_ExecDisable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.853Z [DEBUG] TestAgent_AddCheck_ExecDisable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.853Z [DEBUG] TestAgent_AddCheck_ExecDisable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.864Z [INFO]  TestAgent_AddCheck_ExecDisable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:91b35aa0-4131-70b5-05e2-9885eae0260f Address:127.0.0.1:16294}]"
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable.server.serf.wan: serf: EventMemberJoin: Node-91b35aa0-4131-70b5-05e2-9885eae0260f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable.server.serf.lan: serf: EventMemberJoin: Node-91b35aa0-4131-70b5-05e2-9885eae0260f 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable: Started DNS server: address=127.0.0.1:16289 network=udp
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable.server.raft: entering follower state: follower="Node at 127.0.0.1:16294 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable.server: Adding LAN server: server="Node-91b35aa0-4131-70b5-05e2-9885eae0260f (Addr: tcp/127.0.0.1:16294) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable.server: Handled event for server in area: event=member-join server=Node-91b35aa0-4131-70b5-05e2-9885eae0260f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.865Z [INFO]  TestAgent_AddCheck_ExecDisable: Started DNS server: address=127.0.0.1:16289 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.866Z [INFO]  TestAgent_AddCheck_ExecDisable: Started HTTP server: address=127.0.0.1:16290 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.866Z [INFO]  TestAgent_AddCheck_ExecDisable: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.902Z [WARN]  TestAgent_AddCheck_ExecDisable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.902Z [INFO]  TestAgent_AddCheck_ExecDisable.server.raft: entering candidate state: node="Node at 127.0.0.1:16294 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.908Z [DEBUG] TestAgent_AddCheck_ExecDisable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.908Z [DEBUG] TestAgent_AddCheck_ExecDisable.server.raft: vote granted: from=91b35aa0-4131-70b5-05e2-9885eae0260f term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.908Z [INFO]  TestAgent_AddCheck_ExecDisable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.908Z [INFO]  TestAgent_AddCheck_ExecDisable.server.raft: entering leader state: leader="Node at 127.0.0.1:16294 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.908Z [INFO]  TestAgent_AddCheck_ExecDisable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.908Z [INFO]  TestAgent_AddCheck_ExecDisable.server: New leader elected: payload=Node-91b35aa0-4131-70b5-05e2-9885eae0260f
>     writer.go:29: 2020-02-23T02:46:52.915Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.935Z [INFO]  TestAgent_AddCheck_ExecDisable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.936Z [INFO]  TestAgent_AddCheck_ExecDisable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.936Z [DEBUG] TestAgent_AddCheck_ExecDisable.server: Skipping self join check for node since the cluster is too small: node=Node-91b35aa0-4131-70b5-05e2-9885eae0260f
>     writer.go:29: 2020-02-23T02:46:52.936Z [INFO]  TestAgent_AddCheck_ExecDisable.server: member joined, marking health alive: member=Node-91b35aa0-4131-70b5-05e2-9885eae0260f
>     writer.go:29: 2020-02-23T02:46:52.977Z [DEBUG] TestAgent_AddCheck_ExecDisable: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:52.981Z [INFO]  TestAgent_AddCheck_ExecDisable: Synced node info
>     writer.go:29: 2020-02-23T02:46:53.215Z [INFO]  TestAgent_AddCheck_ExecDisable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.215Z [INFO]  TestAgent_AddCheck_ExecDisable.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.215Z [DEBUG] TestAgent_AddCheck_ExecDisable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.215Z [WARN]  TestAgent_AddCheck_ExecDisable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.215Z [DEBUG] TestAgent_AddCheck_ExecDisable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.217Z [WARN]  TestAgent_AddCheck_ExecDisable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.218Z [INFO]  TestAgent_AddCheck_ExecDisable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.218Z [INFO]  TestAgent_AddCheck_ExecDisable: consul server down
>     writer.go:29: 2020-02-23T02:46:53.219Z [INFO]  TestAgent_AddCheck_ExecDisable: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.219Z [INFO]  TestAgent_AddCheck_ExecDisable: Stopping server: protocol=DNS address=127.0.0.1:16289 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.219Z [INFO]  TestAgent_AddCheck_ExecDisable: Stopping server: protocol=DNS address=127.0.0.1:16289 network=udp
>     writer.go:29: 2020-02-23T02:46:53.219Z [INFO]  TestAgent_AddCheck_ExecDisable: Stopping server: protocol=HTTP address=127.0.0.1:16290 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.219Z [INFO]  TestAgent_AddCheck_ExecDisable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.219Z [INFO]  TestAgent_AddCheck_ExecDisable: Endpoints down
> === CONT  TestAgent_IndexChurn
> === RUN   TestAgent_IndexChurn/no_tags
> --- PASS: TestAgent_AddCheck_MinInterval (0.38s)
>     writer.go:29: 2020-02-23T02:46:52.888Z [WARN]  TestAgent_AddCheck_MinInterval: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:52.888Z [DEBUG] TestAgent_AddCheck_MinInterval.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:52.889Z [DEBUG] TestAgent_AddCheck_MinInterval.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:52.904Z [INFO]  TestAgent_AddCheck_MinInterval.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:19ffdc7b-a6e3-9e47-2de9-9372207fe215 Address:127.0.0.1:16300}]"
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval.server.serf.wan: serf: EventMemberJoin: Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval.server.serf.lan: serf: EventMemberJoin: Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval: Started DNS server: address=127.0.0.1:16295 network=udp
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval.server.raft: entering follower state: follower="Node at 127.0.0.1:16300 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval.server: Adding LAN server: server="Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215 (Addr: tcp/127.0.0.1:16300) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval.server: Handled event for server in area: event=member-join server=Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:52.905Z [INFO]  TestAgent_AddCheck_MinInterval: Started DNS server: address=127.0.0.1:16295 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.906Z [INFO]  TestAgent_AddCheck_MinInterval: Started HTTP server: address=127.0.0.1:16296 network=tcp
>     writer.go:29: 2020-02-23T02:46:52.906Z [INFO]  TestAgent_AddCheck_MinInterval: started state syncer
>     writer.go:29: 2020-02-23T02:46:52.959Z [WARN]  TestAgent_AddCheck_MinInterval.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:52.959Z [INFO]  TestAgent_AddCheck_MinInterval.server.raft: entering candidate state: node="Node at 127.0.0.1:16300 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:52.964Z [DEBUG] TestAgent_AddCheck_MinInterval.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:52.964Z [DEBUG] TestAgent_AddCheck_MinInterval.server.raft: vote granted: from=19ffdc7b-a6e3-9e47-2de9-9372207fe215 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:52.964Z [INFO]  TestAgent_AddCheck_MinInterval.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:52.964Z [INFO]  TestAgent_AddCheck_MinInterval.server.raft: entering leader state: leader="Node at 127.0.0.1:16300 [Leader]"
>     writer.go:29: 2020-02-23T02:46:52.964Z [INFO]  TestAgent_AddCheck_MinInterval.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:52.964Z [INFO]  TestAgent_AddCheck_MinInterval.server: New leader elected: payload=Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215
>     writer.go:29: 2020-02-23T02:46:52.984Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:52.993Z [INFO]  TestAgent_AddCheck_MinInterval.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:52.993Z [INFO]  TestAgent_AddCheck_MinInterval.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:52.993Z [DEBUG] TestAgent_AddCheck_MinInterval.server: Skipping self join check for node since the cluster is too small: node=Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215
>     writer.go:29: 2020-02-23T02:46:52.993Z [INFO]  TestAgent_AddCheck_MinInterval.server: member joined, marking health alive: member=Node-19ffdc7b-a6e3-9e47-2de9-9372207fe215
>     writer.go:29: 2020-02-23T02:46:53.258Z [WARN]  TestAgent_AddCheck_MinInterval: check has interval below minimum: check=mem minimum_interval=1s
>     writer.go:29: 2020-02-23T02:46:53.258Z [INFO]  TestAgent_AddCheck_MinInterval: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.258Z [INFO]  TestAgent_AddCheck_MinInterval.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.258Z [DEBUG] TestAgent_AddCheck_MinInterval.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.258Z [WARN]  TestAgent_AddCheck_MinInterval.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.258Z [ERROR] TestAgent_AddCheck_MinInterval.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:53.258Z [DEBUG] TestAgent_AddCheck_MinInterval.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.260Z [WARN]  TestAgent_AddCheck_MinInterval.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.261Z [INFO]  TestAgent_AddCheck_MinInterval.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.261Z [INFO]  TestAgent_AddCheck_MinInterval: consul server down
>     writer.go:29: 2020-02-23T02:46:53.261Z [INFO]  TestAgent_AddCheck_MinInterval: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.262Z [INFO]  TestAgent_AddCheck_MinInterval: Stopping server: protocol=DNS address=127.0.0.1:16295 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.262Z [INFO]  TestAgent_AddCheck_MinInterval: Stopping server: protocol=DNS address=127.0.0.1:16295 network=udp
>     writer.go:29: 2020-02-23T02:46:53.262Z [INFO]  TestAgent_AddCheck_MinInterval: Stopping server: protocol=HTTP address=127.0.0.1:16296 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.262Z [INFO]  TestAgent_AddCheck_MinInterval: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.262Z [INFO]  TestAgent_AddCheck_MinInterval: Endpoints down
> === CONT  TestAgent_makeNodeID
> --- PASS: TestAgent_AddCheck_StartPassing (0.21s)
>     writer.go:29: 2020-02-23T02:46:53.082Z [WARN]  TestAgent_AddCheck_StartPassing: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.082Z [DEBUG] TestAgent_AddCheck_StartPassing.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.083Z [DEBUG] TestAgent_AddCheck_StartPassing.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.126Z [INFO]  TestAgent_AddCheck_StartPassing.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:48cf610d-bc35-b1bb-b7f5-246f685ea658 Address:127.0.0.1:16306}]"
>     writer.go:29: 2020-02-23T02:46:53.126Z [INFO]  TestAgent_AddCheck_StartPassing.server.raft: entering follower state: follower="Node at 127.0.0.1:16306 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.127Z [INFO]  TestAgent_AddCheck_StartPassing.server.serf.wan: serf: EventMemberJoin: Node-48cf610d-bc35-b1bb-b7f5-246f685ea658.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.127Z [INFO]  TestAgent_AddCheck_StartPassing.server.serf.lan: serf: EventMemberJoin: Node-48cf610d-bc35-b1bb-b7f5-246f685ea658 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.131Z [INFO]  TestAgent_AddCheck_StartPassing.server: Adding LAN server: server="Node-48cf610d-bc35-b1bb-b7f5-246f685ea658 (Addr: tcp/127.0.0.1:16306) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.131Z [INFO]  TestAgent_AddCheck_StartPassing.server: Handled event for server in area: event=member-join server=Node-48cf610d-bc35-b1bb-b7f5-246f685ea658.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.131Z [INFO]  TestAgent_AddCheck_StartPassing: Started DNS server: address=127.0.0.1:16301 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.131Z [INFO]  TestAgent_AddCheck_StartPassing: Started DNS server: address=127.0.0.1:16301 network=udp
>     writer.go:29: 2020-02-23T02:46:53.132Z [INFO]  TestAgent_AddCheck_StartPassing: Started HTTP server: address=127.0.0.1:16302 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.132Z [INFO]  TestAgent_AddCheck_StartPassing: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.182Z [WARN]  TestAgent_AddCheck_StartPassing.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.182Z [INFO]  TestAgent_AddCheck_StartPassing.server.raft: entering candidate state: node="Node at 127.0.0.1:16306 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.185Z [DEBUG] TestAgent_AddCheck_StartPassing.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.185Z [DEBUG] TestAgent_AddCheck_StartPassing.server.raft: vote granted: from=48cf610d-bc35-b1bb-b7f5-246f685ea658 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.185Z [INFO]  TestAgent_AddCheck_StartPassing.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.185Z [INFO]  TestAgent_AddCheck_StartPassing.server.raft: entering leader state: leader="Node at 127.0.0.1:16306 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.185Z [INFO]  TestAgent_AddCheck_StartPassing.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.185Z [INFO]  TestAgent_AddCheck_StartPassing.server: New leader elected: payload=Node-48cf610d-bc35-b1bb-b7f5-246f685ea658
>     writer.go:29: 2020-02-23T02:46:53.192Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.199Z [INFO]  TestAgent_AddCheck_StartPassing.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.199Z [INFO]  TestAgent_AddCheck_StartPassing.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.199Z [DEBUG] TestAgent_AddCheck_StartPassing.server: Skipping self join check for node since the cluster is too small: node=Node-48cf610d-bc35-b1bb-b7f5-246f685ea658
>     writer.go:29: 2020-02-23T02:46:53.199Z [INFO]  TestAgent_AddCheck_StartPassing.server: member joined, marking health alive: member=Node-48cf610d-bc35-b1bb-b7f5-246f685ea658
>     writer.go:29: 2020-02-23T02:46:53.276Z [INFO]  TestAgent_AddCheck_StartPassing: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.276Z [INFO]  TestAgent_AddCheck_StartPassing.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.276Z [DEBUG] TestAgent_AddCheck_StartPassing.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.276Z [WARN]  TestAgent_AddCheck_StartPassing.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.276Z [ERROR] TestAgent_AddCheck_StartPassing.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:53.276Z [DEBUG] TestAgent_AddCheck_StartPassing.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.277Z [WARN]  TestAgent_AddCheck_StartPassing.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: consul server down
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: Stopping server: protocol=DNS address=127.0.0.1:16301 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: Stopping server: protocol=DNS address=127.0.0.1:16301 network=udp
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: Stopping server: protocol=HTTP address=127.0.0.1:16302 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.279Z [INFO]  TestAgent_AddCheck_StartPassing: Endpoints down
> === CONT  TestAgent_setupNodeID
> --- PASS: TestAgent_AddCheck (0.40s)
>     writer.go:29: 2020-02-23T02:46:53.088Z [WARN]  TestAgent_AddCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.088Z [DEBUG] TestAgent_AddCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.089Z [DEBUG] TestAgent_AddCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.101Z [INFO]  TestAgent_AddCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:91bdd9d0-9ec0-43c2-4189-22458e7d401d Address:127.0.0.1:16318}]"
>     writer.go:29: 2020-02-23T02:46:53.101Z [INFO]  TestAgent_AddCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16318 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.102Z [INFO]  TestAgent_AddCheck.server.serf.wan: serf: EventMemberJoin: Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.102Z [INFO]  TestAgent_AddCheck.server.serf.lan: serf: EventMemberJoin: Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.102Z [INFO]  TestAgent_AddCheck.server: Handled event for server in area: event=member-join server=Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.102Z [INFO]  TestAgent_AddCheck.server: Adding LAN server: server="Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d (Addr: tcp/127.0.0.1:16318) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.102Z [INFO]  TestAgent_AddCheck: Started DNS server: address=127.0.0.1:16313 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.102Z [INFO]  TestAgent_AddCheck: Started DNS server: address=127.0.0.1:16313 network=udp
>     writer.go:29: 2020-02-23T02:46:53.103Z [INFO]  TestAgent_AddCheck: Started HTTP server: address=127.0.0.1:16314 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.103Z [INFO]  TestAgent_AddCheck: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.157Z [WARN]  TestAgent_AddCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.157Z [INFO]  TestAgent_AddCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16318 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.160Z [DEBUG] TestAgent_AddCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.160Z [DEBUG] TestAgent_AddCheck.server.raft: vote granted: from=91bdd9d0-9ec0-43c2-4189-22458e7d401d term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.160Z [INFO]  TestAgent_AddCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.160Z [INFO]  TestAgent_AddCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16318 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.161Z [INFO]  TestAgent_AddCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.161Z [INFO]  TestAgent_AddCheck.server: New leader elected: payload=Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d
>     writer.go:29: 2020-02-23T02:46:53.168Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.175Z [INFO]  TestAgent_AddCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.175Z [INFO]  TestAgent_AddCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.175Z [DEBUG] TestAgent_AddCheck.server: Skipping self join check for node since the cluster is too small: node=Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d
>     writer.go:29: 2020-02-23T02:46:53.175Z [INFO]  TestAgent_AddCheck.server: member joined, marking health alive: member=Node-91bdd9d0-9ec0-43c2-4189-22458e7d401d
>     writer.go:29: 2020-02-23T02:46:53.284Z [DEBUG] TestAgent_AddCheck: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:53.286Z [INFO]  TestAgent_AddCheck: Synced node info
>     writer.go:29: 2020-02-23T02:46:53.439Z [INFO]  TestAgent_AddCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.439Z [INFO]  TestAgent_AddCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.439Z [DEBUG] TestAgent_AddCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.439Z [WARN]  TestAgent_AddCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.439Z [DEBUG] TestAgent_AddCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.462Z [WARN]  TestAgent_AddCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: consul server down
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: Stopping server: protocol=DNS address=127.0.0.1:16313 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: Stopping server: protocol=DNS address=127.0.0.1:16313 network=udp
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: Stopping server: protocol=HTTP address=127.0.0.1:16314 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_AddCheck: Endpoints down
> === CONT  TestAgent_ReconnectConfigWanDisabled
> --- PASS: TestAgent_makeNodeID (0.23s)
>     writer.go:29: 2020-02-23T02:46:53.269Z [WARN]  TestAgent_makeNodeID: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.269Z [DEBUG] TestAgent_makeNodeID: Using random ID as node ID: id=492f27d5-79b6-4217-6cc3-f6ceace9b8e5
>     writer.go:29: 2020-02-23T02:46:53.269Z [DEBUG] TestAgent_makeNodeID.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.269Z [DEBUG] TestAgent_makeNodeID.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.289Z [INFO]  TestAgent_makeNodeID.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:492f27d5-79b6-4217-6cc3-f6ceace9b8e5 Address:127.0.0.1:16324}]"
>     writer.go:29: 2020-02-23T02:46:53.290Z [INFO]  TestAgent_makeNodeID.server.serf.wan: serf: EventMemberJoin: Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.290Z [INFO]  TestAgent_makeNodeID.server.serf.lan: serf: EventMemberJoin: Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.290Z [INFO]  TestAgent_makeNodeID: Started DNS server: address=127.0.0.1:16319 network=udp
>     writer.go:29: 2020-02-23T02:46:53.290Z [INFO]  TestAgent_makeNodeID.server.raft: entering follower state: follower="Node at 127.0.0.1:16324 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.291Z [INFO]  TestAgent_makeNodeID.server: Adding LAN server: server="Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff (Addr: tcp/127.0.0.1:16324) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.291Z [INFO]  TestAgent_makeNodeID.server: Handled event for server in area: event=member-join server=Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.291Z [INFO]  TestAgent_makeNodeID: Started DNS server: address=127.0.0.1:16319 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.291Z [INFO]  TestAgent_makeNodeID: Started HTTP server: address=127.0.0.1:16320 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.291Z [INFO]  TestAgent_makeNodeID: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.340Z [WARN]  TestAgent_makeNodeID.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.341Z [INFO]  TestAgent_makeNodeID.server.raft: entering candidate state: node="Node at 127.0.0.1:16324 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.344Z [DEBUG] TestAgent_makeNodeID.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.344Z [DEBUG] TestAgent_makeNodeID.server.raft: vote granted: from=492f27d5-79b6-4217-6cc3-f6ceace9b8e5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.344Z [INFO]  TestAgent_makeNodeID.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.344Z [INFO]  TestAgent_makeNodeID.server.raft: entering leader state: leader="Node at 127.0.0.1:16324 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.344Z [INFO]  TestAgent_makeNodeID.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.344Z [INFO]  TestAgent_makeNodeID.server: New leader elected: payload=Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff
>     writer.go:29: 2020-02-23T02:46:53.351Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.359Z [INFO]  TestAgent_makeNodeID.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.359Z [INFO]  TestAgent_makeNodeID.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.359Z [DEBUG] TestAgent_makeNodeID.server: Skipping self join check for node since the cluster is too small: node=Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff
>     writer.go:29: 2020-02-23T02:46:53.359Z [INFO]  TestAgent_makeNodeID.server: member joined, marking health alive: member=Node-d7c8ca2d-e090-79c9-5dcf-90d0c5793fff
>     writer.go:29: 2020-02-23T02:46:53.429Z [DEBUG] TestAgent_makeNodeID: Using random ID as node ID: id=fc9e9a28-2c04-a927-9b93-016b137ce5cd
>     writer.go:29: 2020-02-23T02:46:53.429Z [DEBUG] TestAgent_makeNodeID: Using random ID as node ID: id=01527d51-2800-b0d0-31f6-06a2b6d7e1f8
>     writer.go:29: 2020-02-23T02:46:53.446Z [DEBUG] TestAgent_makeNodeID: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:53.448Z [DEBUG] TestAgent_makeNodeID: Using unique ID from host as node ID: id=1539dec6-1297-15c5-84bf-9261de287980
>     writer.go:29: 2020-02-23T02:46:53.448Z [DEBUG] TestAgent_makeNodeID: Using unique ID from host as node ID: id=1539dec6-1297-15c5-84bf-9261de287980
>     writer.go:29: 2020-02-23T02:46:53.448Z [INFO]  TestAgent_makeNodeID: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.449Z [INFO]  TestAgent_makeNodeID.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.449Z [DEBUG] TestAgent_makeNodeID.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.449Z [WARN]  TestAgent_makeNodeID.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.449Z [DEBUG] TestAgent_makeNodeID.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.484Z [WARN]  TestAgent_makeNodeID.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.485Z [INFO]  TestAgent_makeNodeID: Synced node info
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: consul server down
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: Stopping server: protocol=DNS address=127.0.0.1:16319 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: Stopping server: protocol=DNS address=127.0.0.1:16319 network=udp
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: Stopping server: protocol=HTTP address=127.0.0.1:16320 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.493Z [INFO]  TestAgent_makeNodeID: Endpoints down
> === CONT  TestAgent_ReconnectConfigSettings
> --- PASS: TestAgent_setupNodeID (0.40s)
>     writer.go:29: 2020-02-23T02:46:53.288Z [WARN]  TestAgent_setupNodeID: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.288Z [DEBUG] TestAgent_setupNodeID: Using random ID as node ID: id=b13ffbc0-e66f-9d88-b157-4be98d09178c
>     writer.go:29: 2020-02-23T02:46:53.288Z [DEBUG] TestAgent_setupNodeID.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.288Z [DEBUG] TestAgent_setupNodeID.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.300Z [INFO]  TestAgent_setupNodeID.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b13ffbc0-e66f-9d88-b157-4be98d09178c Address:127.0.0.1:16336}]"
>     writer.go:29: 2020-02-23T02:46:53.300Z [INFO]  TestAgent_setupNodeID.server.raft: entering follower state: follower="Node at 127.0.0.1:16336 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.301Z [INFO]  TestAgent_setupNodeID.server.serf.wan: serf: EventMemberJoin: Node-73b0627c-54cc-a926-7410-f077cb2853ff.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.302Z [INFO]  TestAgent_setupNodeID.server.serf.lan: serf: EventMemberJoin: Node-73b0627c-54cc-a926-7410-f077cb2853ff 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.302Z [INFO]  TestAgent_setupNodeID.server: Handled event for server in area: event=member-join server=Node-73b0627c-54cc-a926-7410-f077cb2853ff.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.302Z [INFO]  TestAgent_setupNodeID.server: Adding LAN server: server="Node-73b0627c-54cc-a926-7410-f077cb2853ff (Addr: tcp/127.0.0.1:16336) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.302Z [INFO]  TestAgent_setupNodeID: Started DNS server: address=127.0.0.1:16331 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.303Z [INFO]  TestAgent_setupNodeID: Started DNS server: address=127.0.0.1:16331 network=udp
>     writer.go:29: 2020-02-23T02:46:53.303Z [INFO]  TestAgent_setupNodeID: Started HTTP server: address=127.0.0.1:16332 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.303Z [INFO]  TestAgent_setupNodeID: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.345Z [WARN]  TestAgent_setupNodeID.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.345Z [INFO]  TestAgent_setupNodeID.server.raft: entering candidate state: node="Node at 127.0.0.1:16336 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.348Z [DEBUG] TestAgent_setupNodeID.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.348Z [DEBUG] TestAgent_setupNodeID.server.raft: vote granted: from=b13ffbc0-e66f-9d88-b157-4be98d09178c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.348Z [INFO]  TestAgent_setupNodeID.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.348Z [INFO]  TestAgent_setupNodeID.server.raft: entering leader state: leader="Node at 127.0.0.1:16336 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.348Z [INFO]  TestAgent_setupNodeID.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.348Z [INFO]  TestAgent_setupNodeID.server: New leader elected: payload=Node-73b0627c-54cc-a926-7410-f077cb2853ff
>     writer.go:29: 2020-02-23T02:46:53.357Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.365Z [INFO]  TestAgent_setupNodeID.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.365Z [INFO]  TestAgent_setupNodeID.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.365Z [DEBUG] TestAgent_setupNodeID.server: Skipping self join check for node since the cluster is too small: node=Node-73b0627c-54cc-a926-7410-f077cb2853ff
>     writer.go:29: 2020-02-23T02:46:53.365Z [INFO]  TestAgent_setupNodeID.server: member joined, marking health alive: member=Node-73b0627c-54cc-a926-7410-f077cb2853ff
>     writer.go:29: 2020-02-23T02:46:53.426Z [DEBUG] TestAgent_setupNodeID: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:53.483Z [INFO]  TestAgent_setupNodeID: Synced node info
>     writer.go:29: 2020-02-23T02:46:53.483Z [DEBUG] TestAgent_setupNodeID: Node info in sync
>     writer.go:29: 2020-02-23T02:46:53.651Z [INFO]  TestAgent_setupNodeID: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.651Z [INFO]  TestAgent_setupNodeID.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.651Z [DEBUG] TestAgent_setupNodeID.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.651Z [WARN]  TestAgent_setupNodeID.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.651Z [DEBUG] TestAgent_setupNodeID.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.674Z [WARN]  TestAgent_setupNodeID.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: consul server down
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: Stopping server: protocol=DNS address=127.0.0.1:16331 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: Stopping server: protocol=DNS address=127.0.0.1:16331 network=udp
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: Stopping server: protocol=HTTP address=127.0.0.1:16332 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.677Z [INFO]  TestAgent_setupNodeID: Endpoints down
> === CONT  TestAgent_TokenStore
> --- PASS: TestAgent_ReconnectConfigSettings (0.31s)
>     writer.go:29: 2020-02-23T02:46:53.506Z [WARN]  TestAgent_ReconnectConfigSettings: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.506Z [DEBUG] TestAgent_ReconnectConfigSettings.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.507Z [DEBUG] TestAgent_ReconnectConfigSettings.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.518Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:936760b2-e3f9-b91f-9cde-790cea6d7ef4 Address:127.0.0.1:16348}]"
>     writer.go:29: 2020-02-23T02:46:53.518Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: entering follower state: follower="Node at 127.0.0.1:16348 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.519Z [INFO]  TestAgent_ReconnectConfigSettings.server.serf.wan: serf: EventMemberJoin: Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.520Z [INFO]  TestAgent_ReconnectConfigSettings.server.serf.lan: serf: EventMemberJoin: Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.520Z [INFO]  TestAgent_ReconnectConfigSettings.server: Handled event for server in area: event=member-join server=Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.520Z [INFO]  TestAgent_ReconnectConfigSettings.server: Adding LAN server: server="Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4 (Addr: tcp/127.0.0.1:16348) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.520Z [INFO]  TestAgent_ReconnectConfigSettings: Started DNS server: address=127.0.0.1:16343 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.520Z [INFO]  TestAgent_ReconnectConfigSettings: Started DNS server: address=127.0.0.1:16343 network=udp
>     writer.go:29: 2020-02-23T02:46:53.521Z [INFO]  TestAgent_ReconnectConfigSettings: Started HTTP server: address=127.0.0.1:16344 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.521Z [INFO]  TestAgent_ReconnectConfigSettings: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.572Z [WARN]  TestAgent_ReconnectConfigSettings.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.572Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: entering candidate state: node="Node at 127.0.0.1:16348 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.576Z [DEBUG] TestAgent_ReconnectConfigSettings.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.576Z [DEBUG] TestAgent_ReconnectConfigSettings.server.raft: vote granted: from=936760b2-e3f9-b91f-9cde-790cea6d7ef4 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.576Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.576Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: entering leader state: leader="Node at 127.0.0.1:16348 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.576Z [INFO]  TestAgent_ReconnectConfigSettings.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.576Z [INFO]  TestAgent_ReconnectConfigSettings.server: New leader elected: payload=Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4
>     writer.go:29: 2020-02-23T02:46:53.584Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.665Z [INFO]  TestAgent_ReconnectConfigSettings.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.665Z [INFO]  TestAgent_ReconnectConfigSettings.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.666Z [DEBUG] TestAgent_ReconnectConfigSettings.server: Skipping self join check for node since the cluster is too small: node=Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4
>     writer.go:29: 2020-02-23T02:46:53.666Z [INFO]  TestAgent_ReconnectConfigSettings.server: member joined, marking health alive: member=Node-936760b2-e3f9-b91f-9cde-790cea6d7ef4
>     writer.go:29: 2020-02-23T02:46:53.676Z [INFO]  TestAgent_ReconnectConfigSettings: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.676Z [INFO]  TestAgent_ReconnectConfigSettings.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.676Z [DEBUG] TestAgent_ReconnectConfigSettings.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.676Z [WARN]  TestAgent_ReconnectConfigSettings.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.676Z [ERROR] TestAgent_ReconnectConfigSettings.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:53.676Z [DEBUG] TestAgent_ReconnectConfigSettings.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.691Z [WARN]  TestAgent_ReconnectConfigSettings.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: consul server down
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: Stopping server: protocol=DNS address=127.0.0.1:16343 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: Stopping server: protocol=DNS address=127.0.0.1:16343 network=udp
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: Stopping server: protocol=HTTP address=127.0.0.1:16344 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.693Z [INFO]  TestAgent_ReconnectConfigSettings: Endpoints down
>     writer.go:29: 2020-02-23T02:46:53.702Z [WARN]  TestAgent_ReconnectConfigSettings: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.702Z [DEBUG] TestAgent_ReconnectConfigSettings.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.735Z [DEBUG] TestAgent_ReconnectConfigSettings.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.748Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fb49eb66-41f1-fbaa-70b4-632caff715ee Address:127.0.0.1:16372}]"
>     writer.go:29: 2020-02-23T02:46:53.748Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: entering follower state: follower="Node at 127.0.0.1:16372 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.748Z [INFO]  TestAgent_ReconnectConfigSettings.server.serf.wan: serf: EventMemberJoin: Node-fb49eb66-41f1-fbaa-70b4-632caff715ee.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings.server.serf.lan: serf: EventMemberJoin: Node-fb49eb66-41f1-fbaa-70b4-632caff715ee 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings.server: Adding LAN server: server="Node-fb49eb66-41f1-fbaa-70b4-632caff715ee (Addr: tcp/127.0.0.1:16372) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings.server: Handled event for server in area: event=member-join server=Node-fb49eb66-41f1-fbaa-70b4-632caff715ee.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings: Started DNS server: address=127.0.0.1:16367 network=udp
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings: Started DNS server: address=127.0.0.1:16367 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings: Started HTTP server: address=127.0.0.1:16368 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.749Z [INFO]  TestAgent_ReconnectConfigSettings: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.790Z [WARN]  TestAgent_ReconnectConfigSettings.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.790Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: entering candidate state: node="Node at 127.0.0.1:16372 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.793Z [DEBUG] TestAgent_ReconnectConfigSettings.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.793Z [DEBUG] TestAgent_ReconnectConfigSettings.server.raft: vote granted: from=fb49eb66-41f1-fbaa-70b4-632caff715ee term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.793Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.793Z [INFO]  TestAgent_ReconnectConfigSettings.server.raft: entering leader state: leader="Node at 127.0.0.1:16372 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.793Z [INFO]  TestAgent_ReconnectConfigSettings.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.793Z [INFO]  TestAgent_ReconnectConfigSettings.server: New leader elected: payload=Node-fb49eb66-41f1-fbaa-70b4-632caff715ee
>     writer.go:29: 2020-02-23T02:46:53.794Z [INFO]  TestAgent_ReconnectConfigSettings: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.794Z [INFO]  TestAgent_ReconnectConfigSettings.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.794Z [WARN]  TestAgent_ReconnectConfigSettings.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.794Z [ERROR] TestAgent_ReconnectConfigSettings.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:53.796Z [WARN]  TestAgent_ReconnectConfigSettings.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: consul server down
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: Stopping server: protocol=DNS address=127.0.0.1:16367 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: Stopping server: protocol=DNS address=127.0.0.1:16367 network=udp
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: Stopping server: protocol=HTTP address=127.0.0.1:16368 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.800Z [INFO]  TestAgent_ReconnectConfigSettings: Endpoints down
> === CONT  TestAgent_RPCPing
> --- PASS: TestAgent_ReconnectConfigWanDisabled (0.42s)
>     writer.go:29: 2020-02-23T02:46:53.507Z [WARN]  TestAgent_ReconnectConfigWanDisabled: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.508Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.508Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.522Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae Address:127.0.0.1:16342}]"
>     writer.go:29: 2020-02-23T02:46:53.522Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.raft: entering follower state: follower="Node at 127.0.0.1:16342 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.522Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.serf.lan: serf: EventMemberJoin: Node-6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.522Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server: Adding LAN server: server="Node-6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae (Addr: tcp/127.0.0.1:16342) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.523Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Started DNS server: address=127.0.0.1:16337 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.523Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Started DNS server: address=127.0.0.1:16337 network=udp
>     writer.go:29: 2020-02-23T02:46:53.523Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Started HTTP server: address=127.0.0.1:16338 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.523Z [INFO]  TestAgent_ReconnectConfigWanDisabled: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.573Z [WARN]  TestAgent_ReconnectConfigWanDisabled.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.573Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.raft: entering candidate state: node="Node at 127.0.0.1:16342 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.576Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.576Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.server.raft: vote granted: from=6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.576Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.576Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.raft: entering leader state: leader="Node at 127.0.0.1:16342 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.577Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.577Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server: New leader elected: payload=Node-6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae
>     writer.go:29: 2020-02-23T02:46:53.584Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.665Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.665Z [INFO]  TestAgent_ReconnectConfigWanDisabled.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.666Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.server: Skipping self join check for node since the cluster is too small: node=Node-6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae
>     writer.go:29: 2020-02-23T02:46:53.666Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server: member joined, marking health alive: member=Node-6a0c8e8f-1cd9-8567-8c0a-9d4a1cf388ae
>     writer.go:29: 2020-02-23T02:46:53.754Z [DEBUG] TestAgent_ReconnectConfigWanDisabled: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:53.757Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Synced node info
>     writer.go:29: 2020-02-23T02:46:53.898Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.898Z [INFO]  TestAgent_ReconnectConfigWanDisabled.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.898Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.898Z [WARN]  TestAgent_ReconnectConfigWanDisabled.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.898Z [DEBUG] TestAgent_ReconnectConfigWanDisabled.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: consul server down
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: shutdown complete
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Stopping server: protocol=DNS address=127.0.0.1:16337 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Stopping server: protocol=DNS address=127.0.0.1:16337 network=udp
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Stopping server: protocol=HTTP address=127.0.0.1:16338 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:53.900Z [INFO]  TestAgent_ReconnectConfigWanDisabled: Endpoints down
> === CONT  TestAgent_StartStop
> === RUN   TestAgent_IndexChurn/with_tags
> --- PASS: TestAgent_TokenStore (0.43s)
>     writer.go:29: 2020-02-23T02:46:53.684Z [WARN]  TestAgent_TokenStore: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.684Z [DEBUG] TestAgent_TokenStore.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.684Z [DEBUG] TestAgent_TokenStore.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.745Z [INFO]  TestAgent_TokenStore.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a2262fc3-10ac-63ac-aaec-3747480d648b Address:127.0.0.1:16366}]"
>     writer.go:29: 2020-02-23T02:46:53.745Z [INFO]  TestAgent_TokenStore.server.raft: entering follower state: follower="Node at 127.0.0.1:16366 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.746Z [INFO]  TestAgent_TokenStore.server.serf.wan: serf: EventMemberJoin: Node-a2262fc3-10ac-63ac-aaec-3747480d648b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.746Z [INFO]  TestAgent_TokenStore.server.serf.lan: serf: EventMemberJoin: Node-a2262fc3-10ac-63ac-aaec-3747480d648b 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.746Z [INFO]  TestAgent_TokenStore.server: Adding LAN server: server="Node-a2262fc3-10ac-63ac-aaec-3747480d648b (Addr: tcp/127.0.0.1:16366) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.746Z [INFO]  TestAgent_TokenStore: Started DNS server: address=127.0.0.1:16361 network=udp
>     writer.go:29: 2020-02-23T02:46:53.746Z [INFO]  TestAgent_TokenStore.server: Handled event for server in area: event=member-join server=Node-a2262fc3-10ac-63ac-aaec-3747480d648b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.746Z [INFO]  TestAgent_TokenStore: Started DNS server: address=127.0.0.1:16361 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.747Z [INFO]  TestAgent_TokenStore: Started HTTP server: address=127.0.0.1:16362 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.747Z [INFO]  TestAgent_TokenStore: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.787Z [WARN]  TestAgent_TokenStore.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.787Z [INFO]  TestAgent_TokenStore.server.raft: entering candidate state: node="Node at 127.0.0.1:16366 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.790Z [DEBUG] TestAgent_TokenStore.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.790Z [DEBUG] TestAgent_TokenStore.server.raft: vote granted: from=a2262fc3-10ac-63ac-aaec-3747480d648b term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.790Z [INFO]  TestAgent_TokenStore.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.790Z [INFO]  TestAgent_TokenStore.server.raft: entering leader state: leader="Node at 127.0.0.1:16366 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.790Z [INFO]  TestAgent_TokenStore.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.790Z [INFO]  TestAgent_TokenStore.server: New leader elected: payload=Node-a2262fc3-10ac-63ac-aaec-3747480d648b
>     writer.go:29: 2020-02-23T02:46:53.799Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.814Z [INFO]  TestAgent_TokenStore.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.814Z [INFO]  TestAgent_TokenStore.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.814Z [DEBUG] TestAgent_TokenStore.server: Skipping self join check for node since the cluster is too small: node=Node-a2262fc3-10ac-63ac-aaec-3747480d648b
>     writer.go:29: 2020-02-23T02:46:53.814Z [INFO]  TestAgent_TokenStore.server: member joined, marking health alive: member=Node-a2262fc3-10ac-63ac-aaec-3747480d648b
>     writer.go:29: 2020-02-23T02:46:53.832Z [DEBUG] TestAgent_TokenStore: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:53.835Z [INFO]  TestAgent_TokenStore: Synced node info
>     writer.go:29: 2020-02-23T02:46:53.835Z [DEBUG] TestAgent_TokenStore: Node info in sync
>     writer.go:29: 2020-02-23T02:46:53.982Z [INFO]  TestAgent_TokenStore: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:53.982Z [INFO]  TestAgent_TokenStore.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:53.982Z [DEBUG] TestAgent_TokenStore.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.982Z [WARN]  TestAgent_TokenStore.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:53.982Z [DEBUG] TestAgent_TokenStore.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.047Z [WARN]  TestAgent_TokenStore.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: consul server down
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: Stopping server: protocol=DNS address=127.0.0.1:16361 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: Stopping server: protocol=DNS address=127.0.0.1:16361 network=udp
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: Stopping server: protocol=HTTP address=127.0.0.1:16362 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.104Z [INFO]  TestAgent_TokenStore: Endpoints down
> === CONT  TestAgent_Services_ExposeConfig
> --- PASS: TestAgent_RPCPing (0.34s)
>     writer.go:29: 2020-02-23T02:46:53.807Z [WARN]  TestAgent_RPCPing: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.807Z [DEBUG] TestAgent_RPCPing.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.808Z [DEBUG] TestAgent_RPCPing.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.818Z [INFO]  TestAgent_RPCPing.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb Address:127.0.0.1:16228}]"
>     writer.go:29: 2020-02-23T02:46:53.819Z [INFO]  TestAgent_RPCPing.server.raft: entering follower state: follower="Node at 127.0.0.1:16228 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.819Z [INFO]  TestAgent_RPCPing.server.serf.wan: serf: EventMemberJoin: Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.819Z [INFO]  TestAgent_RPCPing.server.serf.lan: serf: EventMemberJoin: Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.819Z [INFO]  TestAgent_RPCPing.server: Adding LAN server: server="Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb (Addr: tcp/127.0.0.1:16228) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.819Z [INFO]  TestAgent_RPCPing: Started DNS server: address=127.0.0.1:16223 network=udp
>     writer.go:29: 2020-02-23T02:46:53.820Z [INFO]  TestAgent_RPCPing.server: Handled event for server in area: event=member-join server=Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.820Z [INFO]  TestAgent_RPCPing: Started DNS server: address=127.0.0.1:16223 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.820Z [INFO]  TestAgent_RPCPing: Started HTTP server: address=127.0.0.1:16224 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.820Z [INFO]  TestAgent_RPCPing: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.879Z [WARN]  TestAgent_RPCPing.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.879Z [INFO]  TestAgent_RPCPing.server.raft: entering candidate state: node="Node at 127.0.0.1:16228 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:53.882Z [DEBUG] TestAgent_RPCPing.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:53.882Z [DEBUG] TestAgent_RPCPing.server.raft: vote granted: from=42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:53.882Z [INFO]  TestAgent_RPCPing.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:53.882Z [INFO]  TestAgent_RPCPing.server.raft: entering leader state: leader="Node at 127.0.0.1:16228 [Leader]"
>     writer.go:29: 2020-02-23T02:46:53.882Z [INFO]  TestAgent_RPCPing.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:53.882Z [INFO]  TestAgent_RPCPing.server: New leader elected: payload=Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb
>     writer.go:29: 2020-02-23T02:46:53.889Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:53.898Z [INFO]  TestAgent_RPCPing.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:53.898Z [INFO]  TestAgent_RPCPing.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:53.898Z [DEBUG] TestAgent_RPCPing.server: Skipping self join check for node since the cluster is too small: node=Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb
>     writer.go:29: 2020-02-23T02:46:53.898Z [INFO]  TestAgent_RPCPing.server: member joined, marking health alive: member=Node-42f0f90f-61a2-aacf-c2e3-8d3ff6c8facb
>     writer.go:29: 2020-02-23T02:46:53.983Z [DEBUG] TestAgent_RPCPing: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:54.069Z [INFO]  TestAgent_RPCPing: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:54.069Z [INFO]  TestAgent_RPCPing.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:54.069Z [DEBUG] TestAgent_RPCPing.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.069Z [WARN]  TestAgent_RPCPing.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.069Z [DEBUG] TestAgent_RPCPing.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.076Z [INFO]  TestAgent_RPCPing: Synced node info
>     writer.go:29: 2020-02-23T02:46:54.104Z [WARN]  TestAgent_RPCPing.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: consul server down
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: Stopping server: protocol=DNS address=127.0.0.1:16223 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: Stopping server: protocol=DNS address=127.0.0.1:16223 network=udp
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: Stopping server: protocol=HTTP address=127.0.0.1:16224 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.139Z [INFO]  TestAgent_RPCPing: Endpoints down
> === CONT  TestAgent_HostBadACL
> --- PASS: TestAgent_HostBadACL (0.22s)
>     writer.go:29: 2020-02-23T02:46:54.154Z [WARN]  TestAgent_HostBadACL: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:54.155Z [WARN]  TestAgent_HostBadACL: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.155Z [DEBUG] TestAgent_HostBadACL.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.155Z [DEBUG] TestAgent_HostBadACL.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.180Z [INFO]  TestAgent_HostBadACL.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c56f9b6d-203e-0cd1-b3a4-8af86c23f661 Address:127.0.0.1:16390}]"
>     writer.go:29: 2020-02-23T02:46:54.180Z [INFO]  TestAgent_HostBadACL.server.serf.wan: serf: EventMemberJoin: Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.181Z [INFO]  TestAgent_HostBadACL.server.serf.lan: serf: EventMemberJoin: Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.181Z [INFO]  TestAgent_HostBadACL: Started DNS server: address=127.0.0.1:16385 network=udp
>     writer.go:29: 2020-02-23T02:46:54.181Z [INFO]  TestAgent_HostBadACL.server.raft: entering follower state: follower="Node at 127.0.0.1:16390 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.181Z [INFO]  TestAgent_HostBadACL.server: Adding LAN server: server="Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661 (Addr: tcp/127.0.0.1:16390) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.181Z [INFO]  TestAgent_HostBadACL.server: Handled event for server in area: event=member-join server=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.181Z [INFO]  TestAgent_HostBadACL: Started DNS server: address=127.0.0.1:16385 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.182Z [INFO]  TestAgent_HostBadACL: Started HTTP server: address=127.0.0.1:16386 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.182Z [INFO]  TestAgent_HostBadACL: started state syncer
>     writer.go:29: 2020-02-23T02:46:54.245Z [WARN]  TestAgent_HostBadACL.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:54.245Z [INFO]  TestAgent_HostBadACL.server.raft: entering candidate state: node="Node at 127.0.0.1:16390 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.301Z [DEBUG] TestAgent_HostBadACL.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.301Z [DEBUG] TestAgent_HostBadACL.server.raft: vote granted: from=c56f9b6d-203e-0cd1-b3a4-8af86c23f661 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.301Z [INFO]  TestAgent_HostBadACL.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.301Z [INFO]  TestAgent_HostBadACL.server.raft: entering leader state: leader="Node at 127.0.0.1:16390 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.301Z [INFO]  TestAgent_HostBadACL.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.301Z [INFO]  TestAgent_HostBadACL.server: New leader elected: payload=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661
>     writer.go:29: 2020-02-23T02:46:54.303Z [INFO]  TestAgent_HostBadACL.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:54.305Z [INFO]  TestAgent_HostBadACL.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:54.305Z [WARN]  TestAgent_HostBadACL.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:54.307Z [INFO]  TestAgent_HostBadACL.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:54.318Z [INFO]  TestAgent_HostBadACL.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:54.318Z [INFO]  TestAgent_HostBadACL.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.318Z [INFO]  TestAgent_HostBadACL.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.318Z [INFO]  TestAgent_HostBadACL.server.serf.lan: serf: EventMemberUpdate: Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661
>     writer.go:29: 2020-02-23T02:46:54.318Z [INFO]  TestAgent_HostBadACL.server.serf.wan: serf: EventMemberUpdate: Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661.dc1
>     writer.go:29: 2020-02-23T02:46:54.318Z [INFO]  TestAgent_HostBadACL.server: Handled event for server in area: event=member-update server=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.323Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:54.330Z [INFO]  TestAgent_HostBadACL.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:54.330Z [INFO]  TestAgent_HostBadACL.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.330Z [DEBUG] TestAgent_HostBadACL.server: Skipping self join check for node since the cluster is too small: node=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661
>     writer.go:29: 2020-02-23T02:46:54.330Z [INFO]  TestAgent_HostBadACL.server: member joined, marking health alive: member=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661
>     writer.go:29: 2020-02-23T02:46:54.333Z [DEBUG] TestAgent_HostBadACL.server: Skipping self join check for node since the cluster is too small: node=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661
>     writer.go:29: 2020-02-23T02:46:54.355Z [DEBUG] TestAgent_HostBadACL.acl: dropping node from result due to ACLs: node=Node-c56f9b6d-203e-0cd1-b3a4-8af86c23f661
>     writer.go:29: 2020-02-23T02:46:54.355Z [INFO]  TestAgent_HostBadACL: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:54.355Z [INFO]  TestAgent_HostBadACL.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:54.355Z [DEBUG] TestAgent_HostBadACL.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.355Z [DEBUG] TestAgent_HostBadACL.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.355Z [DEBUG] TestAgent_HostBadACL.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.355Z [WARN]  TestAgent_HostBadACL.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.356Z [ERROR] TestAgent_HostBadACL.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:54.356Z [DEBUG] TestAgent_HostBadACL.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.356Z [DEBUG] TestAgent_HostBadACL.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.356Z [DEBUG] TestAgent_HostBadACL.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.357Z [WARN]  TestAgent_HostBadACL.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: consul server down
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: Stopping server: protocol=DNS address=127.0.0.1:16385 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: Stopping server: protocol=DNS address=127.0.0.1:16385 network=udp
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: Stopping server: protocol=HTTP address=127.0.0.1:16386 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.359Z [INFO]  TestAgent_HostBadACL: Endpoints down
> === CONT  TestAgent_Host
> --- PASS: TestAgent_Services_ExposeConfig (0.52s)
>     writer.go:29: 2020-02-23T02:46:54.119Z [WARN]  TestAgent_Services_ExposeConfig: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.119Z [DEBUG] TestAgent_Services_ExposeConfig.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.119Z [DEBUG] TestAgent_Services_ExposeConfig.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.178Z [INFO]  TestAgent_Services_ExposeConfig.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b690c2d6-2232-8e09-7c36-ea3aa4b7c89a Address:127.0.0.1:16384}]"
>     writer.go:29: 2020-02-23T02:46:54.178Z [INFO]  TestAgent_Services_ExposeConfig.server.serf.wan: serf: EventMemberJoin: Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.179Z [INFO]  TestAgent_Services_ExposeConfig.server.serf.lan: serf: EventMemberJoin: Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.179Z [INFO]  TestAgent_Services_ExposeConfig: Started DNS server: address=127.0.0.1:16379 network=udp
>     writer.go:29: 2020-02-23T02:46:54.179Z [INFO]  TestAgent_Services_ExposeConfig.server.raft: entering follower state: follower="Node at 127.0.0.1:16384 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.179Z [INFO]  TestAgent_Services_ExposeConfig.server: Adding LAN server: server="Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a (Addr: tcp/127.0.0.1:16384) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.179Z [INFO]  TestAgent_Services_ExposeConfig.server: Handled event for server in area: event=member-join server=Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.180Z [INFO]  TestAgent_Services_ExposeConfig: Started DNS server: address=127.0.0.1:16379 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.182Z [INFO]  TestAgent_Services_ExposeConfig: Started HTTP server: address=127.0.0.1:16380 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.182Z [INFO]  TestAgent_Services_ExposeConfig: started state syncer
>     writer.go:29: 2020-02-23T02:46:54.242Z [WARN]  TestAgent_Services_ExposeConfig.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:54.242Z [INFO]  TestAgent_Services_ExposeConfig.server.raft: entering candidate state: node="Node at 127.0.0.1:16384 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.299Z [DEBUG] TestAgent_Services_ExposeConfig.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.299Z [DEBUG] TestAgent_Services_ExposeConfig.server.raft: vote granted: from=b690c2d6-2232-8e09-7c36-ea3aa4b7c89a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.299Z [INFO]  TestAgent_Services_ExposeConfig.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.299Z [INFO]  TestAgent_Services_ExposeConfig.server.raft: entering leader state: leader="Node at 127.0.0.1:16384 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.299Z [INFO]  TestAgent_Services_ExposeConfig.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.299Z [INFO]  TestAgent_Services_ExposeConfig.server: New leader elected: payload=Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a
>     writer.go:29: 2020-02-23T02:46:54.309Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:54.328Z [INFO]  TestAgent_Services_ExposeConfig.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:54.328Z [INFO]  TestAgent_Services_ExposeConfig.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.328Z [DEBUG] TestAgent_Services_ExposeConfig.server: Skipping self join check for node since the cluster is too small: node=Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a
>     writer.go:29: 2020-02-23T02:46:54.328Z [INFO]  TestAgent_Services_ExposeConfig.server: member joined, marking health alive: member=Node-b690c2d6-2232-8e09-7c36-ea3aa4b7c89a
>     writer.go:29: 2020-02-23T02:46:54.408Z [DEBUG] TestAgent_Services_ExposeConfig: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:54.411Z [INFO]  TestAgent_Services_ExposeConfig: Synced node info
>     writer.go:29: 2020-02-23T02:46:54.411Z [DEBUG] TestAgent_Services_ExposeConfig: Node info in sync
>     writer.go:29: 2020-02-23T02:46:54.572Z [INFO]  TestAgent_Services_ExposeConfig: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:54.572Z [INFO]  TestAgent_Services_ExposeConfig.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:54.572Z [DEBUG] TestAgent_Services_ExposeConfig.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.572Z [WARN]  TestAgent_Services_ExposeConfig.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.572Z [DEBUG] TestAgent_Services_ExposeConfig.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.593Z [WARN]  TestAgent_Services_ExposeConfig.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: consul server down
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: Stopping server: protocol=DNS address=127.0.0.1:16379 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: Stopping server: protocol=DNS address=127.0.0.1:16379 network=udp
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: Stopping server: protocol=HTTP address=127.0.0.1:16380 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.623Z [INFO]  TestAgent_Services_ExposeConfig: Endpoints down
> === CONT  TestAgentConnectAuthorize_defaultAllow
> --- PASS: TestAgent_Host (0.27s)
>     writer.go:29: 2020-02-23T02:46:54.367Z [WARN]  TestAgent_Host: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:54.367Z [WARN]  TestAgent_Host: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.368Z [DEBUG] TestAgent_Host.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.368Z [DEBUG] TestAgent_Host.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.377Z [INFO]  TestAgent_Host.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6d86c52b-2aef-74d8-bcf5-598facd1a585 Address:127.0.0.1:16402}]"
>     writer.go:29: 2020-02-23T02:46:54.377Z [INFO]  TestAgent_Host.server.raft: entering follower state: follower="Node at 127.0.0.1:16402 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.378Z [INFO]  TestAgent_Host.server.serf.wan: serf: EventMemberJoin: Node-6d86c52b-2aef-74d8-bcf5-598facd1a585.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.378Z [INFO]  TestAgent_Host.server.serf.lan: serf: EventMemberJoin: Node-6d86c52b-2aef-74d8-bcf5-598facd1a585 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.378Z [INFO]  TestAgent_Host.server: Handled event for server in area: event=member-join server=Node-6d86c52b-2aef-74d8-bcf5-598facd1a585.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.378Z [INFO]  TestAgent_Host.server: Adding LAN server: server="Node-6d86c52b-2aef-74d8-bcf5-598facd1a585 (Addr: tcp/127.0.0.1:16402) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.379Z [INFO]  TestAgent_Host: Started DNS server: address=127.0.0.1:16397 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.379Z [INFO]  TestAgent_Host: Started DNS server: address=127.0.0.1:16397 network=udp
>     writer.go:29: 2020-02-23T02:46:54.379Z [INFO]  TestAgent_Host: Started HTTP server: address=127.0.0.1:16398 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.379Z [INFO]  TestAgent_Host: started state syncer
>     writer.go:29: 2020-02-23T02:46:54.415Z [WARN]  TestAgent_Host.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:54.415Z [INFO]  TestAgent_Host.server.raft: entering candidate state: node="Node at 127.0.0.1:16402 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.418Z [DEBUG] TestAgent_Host.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.418Z [DEBUG] TestAgent_Host.server.raft: vote granted: from=6d86c52b-2aef-74d8-bcf5-598facd1a585 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.418Z [INFO]  TestAgent_Host.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.418Z [INFO]  TestAgent_Host.server.raft: entering leader state: leader="Node at 127.0.0.1:16402 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.418Z [INFO]  TestAgent_Host.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.418Z [INFO]  TestAgent_Host.server: New leader elected: payload=Node-6d86c52b-2aef-74d8-bcf5-598facd1a585
>     writer.go:29: 2020-02-23T02:46:54.420Z [INFO]  TestAgent_Host.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:54.421Z [INFO]  TestAgent_Host.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:54.421Z [WARN]  TestAgent_Host.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:54.424Z [INFO]  TestAgent_Host.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:54.428Z [INFO]  TestAgent_Host.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:54.428Z [INFO]  TestAgent_Host.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.428Z [INFO]  TestAgent_Host.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.428Z [INFO]  TestAgent_Host.server.serf.lan: serf: EventMemberUpdate: Node-6d86c52b-2aef-74d8-bcf5-598facd1a585
>     writer.go:29: 2020-02-23T02:46:54.428Z [INFO]  TestAgent_Host.server.serf.wan: serf: EventMemberUpdate: Node-6d86c52b-2aef-74d8-bcf5-598facd1a585.dc1
>     writer.go:29: 2020-02-23T02:46:54.428Z [INFO]  TestAgent_Host.server: Handled event for server in area: event=member-update server=Node-6d86c52b-2aef-74d8-bcf5-598facd1a585.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.432Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:54.438Z [INFO]  TestAgent_Host.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:54.438Z [INFO]  TestAgent_Host.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.438Z [DEBUG] TestAgent_Host.server: Skipping self join check for node since the cluster is too small: node=Node-6d86c52b-2aef-74d8-bcf5-598facd1a585
>     writer.go:29: 2020-02-23T02:46:54.438Z [INFO]  TestAgent_Host.server: member joined, marking health alive: member=Node-6d86c52b-2aef-74d8-bcf5-598facd1a585
>     writer.go:29: 2020-02-23T02:46:54.441Z [DEBUG] TestAgent_Host.server: Skipping self join check for node since the cluster is too small: node=Node-6d86c52b-2aef-74d8-bcf5-598facd1a585
>     writer.go:29: 2020-02-23T02:46:54.588Z [INFO]  TestAgent_Host: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:54.588Z [INFO]  TestAgent_Host.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:54.588Z [DEBUG] TestAgent_Host.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.588Z [DEBUG] TestAgent_Host.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.588Z [DEBUG] TestAgent_Host.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.588Z [WARN]  TestAgent_Host.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.588Z [ERROR] TestAgent_Host.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:54.588Z [DEBUG] TestAgent_Host.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.588Z [DEBUG] TestAgent_Host.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.588Z [DEBUG] TestAgent_Host.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.611Z [WARN]  TestAgent_Host.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: consul server down
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: Stopping server: protocol=DNS address=127.0.0.1:16397 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: Stopping server: protocol=DNS address=127.0.0.1:16397 network=udp
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: Stopping server: protocol=HTTP address=127.0.0.1:16398 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.625Z [INFO]  TestAgent_Host: Endpoints down
> === CONT  TestAgentConnectAuthorize_defaultDeny
> --- PASS: TestAgentConnectAuthorize_defaultDeny (0.25s)
>     writer.go:29: 2020-02-23T02:46:54.635Z [WARN]  TestAgentConnectAuthorize_defaultDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:54.635Z [WARN]  TestAgentConnectAuthorize_defaultDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.635Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.636Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.660Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:67334278-7575-8253-6679-ebfeac7c9434 Address:127.0.0.1:16408}]"
>     writer.go:29: 2020-02-23T02:46:54.660Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16408 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.661Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.serf.wan: serf: EventMemberJoin: Node-67334278-7575-8253-6679-ebfeac7c9434.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.663Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.serf.lan: serf: EventMemberJoin: Node-67334278-7575-8253-6679-ebfeac7c9434 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.663Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Adding LAN server: server="Node-67334278-7575-8253-6679-ebfeac7c9434 (Addr: tcp/127.0.0.1:16408) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Handled event for server in area: event=member-join server=Node-67334278-7575-8253-6679-ebfeac7c9434.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Started DNS server: address=127.0.0.1:16403 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Started DNS server: address=127.0.0.1:16403 network=udp
>     writer.go:29: 2020-02-23T02:46:54.665Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Started HTTP server: address=127.0.0.1:16404 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.665Z [INFO]  TestAgentConnectAuthorize_defaultDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:54.703Z [WARN]  TestAgentConnectAuthorize_defaultDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:54.703Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16408 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.706Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.706Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.server.raft: vote granted: from=67334278-7575-8253-6679-ebfeac7c9434 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.706Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.706Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16408 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.706Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.706Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: New leader elected: payload=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.708Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:54.710Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:54.710Z [WARN]  TestAgentConnectAuthorize_defaultDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:54.712Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:54.714Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:54.714Z [WARN]  TestAgentConnectAuthorize_defaultDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.717Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.serf.lan: serf: EventMemberUpdate: Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.serf.wan: serf: EventMemberUpdate: Node-67334278-7575-8253-6679-ebfeac7c9434.dc1
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Handled event for server in area: event=member-update server=Node-67334278-7575-8253-6679-ebfeac7c9434.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.serf.lan: serf: EventMemberUpdate: Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.serf.wan: serf: EventMemberUpdate: Node-67334278-7575-8253-6679-ebfeac7c9434.dc1
>     writer.go:29: 2020-02-23T02:46:54.717Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: Handled event for server in area: event=member-update server=Node-67334278-7575-8253-6679-ebfeac7c9434.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.722Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:54.730Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Synced node info
>     writer.go:29: 2020-02-23T02:46:54.730Z [DEBUG] TestAgentConnectAuthorize_defaultDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:54.733Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:54.733Z [INFO]  TestAgentConnectAuthorize_defaultDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.733Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.server: Skipping self join check for node since the cluster is too small: node=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.733Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: member joined, marking health alive: member=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.736Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.server: Skipping self join check for node since the cluster is too small: node=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.736Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.server: Skipping self join check for node since the cluster is too small: node=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.868Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.acl: dropping node from result due to ACLs: node=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.868Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.acl: dropping node from result due to ACLs: node=Node-67334278-7575-8253-6679-ebfeac7c9434
>     writer.go:29: 2020-02-23T02:46:54.868Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:54.868Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:54.868Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.868Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.868Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.868Z [WARN]  TestAgentConnectAuthorize_defaultDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.869Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.869Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.869Z [DEBUG] TestAgentConnectAuthorize_defaultDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.871Z [WARN]  TestAgentConnectAuthorize_defaultDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.872Z [INFO]  TestAgentConnectAuthorize_defaultDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Stopping server: protocol=DNS address=127.0.0.1:16403 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Stopping server: protocol=DNS address=127.0.0.1:16403 network=udp
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Stopping server: protocol=HTTP address=127.0.0.1:16404 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.873Z [INFO]  TestAgentConnectAuthorize_defaultDeny: Endpoints down
> === CONT  TestAgentConnectAuthorize_serviceWrite
> --- PASS: TestAgentConnectAuthorize_defaultAllow (0.34s)
>     writer.go:29: 2020-02-23T02:46:54.635Z [WARN]  TestAgentConnectAuthorize_defaultAllow: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:54.635Z [WARN]  TestAgentConnectAuthorize_defaultAllow: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.636Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.636Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.662Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f70d5075-1b7d-6930-c987-c0efaffc6606 Address:127.0.0.1:16396}]"
>     writer.go:29: 2020-02-23T02:46:54.662Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.raft: entering follower state: follower="Node at 127.0.0.1:16396 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.662Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.serf.wan: serf: EventMemberJoin: Node-f70d5075-1b7d-6930-c987-c0efaffc6606.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.663Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.serf.lan: serf: EventMemberJoin: Node-f70d5075-1b7d-6930-c987-c0efaffc6606 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: Handled event for server in area: event=member-join server=Node-f70d5075-1b7d-6930-c987-c0efaffc6606.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: Adding LAN server: server="Node-f70d5075-1b7d-6930-c987-c0efaffc6606 (Addr: tcp/127.0.0.1:16396) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Started DNS server: address=127.0.0.1:16391 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Started DNS server: address=127.0.0.1:16391 network=udp
>     writer.go:29: 2020-02-23T02:46:54.664Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Started HTTP server: address=127.0.0.1:16392 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.665Z [INFO]  TestAgentConnectAuthorize_defaultAllow: started state syncer
>     writer.go:29: 2020-02-23T02:46:54.716Z [WARN]  TestAgentConnectAuthorize_defaultAllow.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:54.716Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.raft: entering candidate state: node="Node at 127.0.0.1:16396 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.720Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.720Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.server.raft: vote granted: from=f70d5075-1b7d-6930-c987-c0efaffc6606 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.720Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.720Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.raft: entering leader state: leader="Node at 127.0.0.1:16396 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.720Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.720Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: New leader elected: payload=Node-f70d5075-1b7d-6930-c987-c0efaffc6606
>     writer.go:29: 2020-02-23T02:46:54.723Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:54.725Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:54.725Z [WARN]  TestAgentConnectAuthorize_defaultAllow.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:54.728Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:54.732Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:54.732Z [INFO]  TestAgentConnectAuthorize_defaultAllow.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.732Z [INFO]  TestAgentConnectAuthorize_defaultAllow.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.732Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.serf.lan: serf: EventMemberUpdate: Node-f70d5075-1b7d-6930-c987-c0efaffc6606
>     writer.go:29: 2020-02-23T02:46:54.732Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.serf.wan: serf: EventMemberUpdate: Node-f70d5075-1b7d-6930-c987-c0efaffc6606.dc1
>     writer.go:29: 2020-02-23T02:46:54.733Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: Handled event for server in area: event=member-update server=Node-f70d5075-1b7d-6930-c987-c0efaffc6606.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.737Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:54.748Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:54.748Z [INFO]  TestAgentConnectAuthorize_defaultAllow.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.748Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.server: Skipping self join check for node since the cluster is too small: node=Node-f70d5075-1b7d-6930-c987-c0efaffc6606
>     writer.go:29: 2020-02-23T02:46:54.748Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: member joined, marking health alive: member=Node-f70d5075-1b7d-6930-c987-c0efaffc6606
>     writer.go:29: 2020-02-23T02:46:54.756Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.server: Skipping self join check for node since the cluster is too small: node=Node-f70d5075-1b7d-6930-c987-c0efaffc6606
>     writer.go:29: 2020-02-23T02:46:54.774Z [DEBUG] TestAgentConnectAuthorize_defaultAllow: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:54.778Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Synced node info
>     writer.go:29: 2020-02-23T02:46:54.952Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:54.952Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:54.952Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.952Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.952Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.952Z [WARN]  TestAgentConnectAuthorize_defaultAllow.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.952Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.952Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.952Z [DEBUG] TestAgentConnectAuthorize_defaultAllow.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.960Z [WARN]  TestAgentConnectAuthorize_defaultAllow.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: consul server down
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: shutdown complete
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Stopping server: protocol=DNS address=127.0.0.1:16391 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Stopping server: protocol=DNS address=127.0.0.1:16391 network=udp
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Stopping server: protocol=HTTP address=127.0.0.1:16392 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:54.962Z [INFO]  TestAgentConnectAuthorize_defaultAllow: Endpoints down
> === CONT  TestAgentConnectAuthorize_denyWildcard
> --- PASS: TestAgent_IndexChurn (1.92s)
>     --- PASS: TestAgent_IndexChurn/no_tags (0.69s)
>         writer.go:29: 2020-02-23T02:46:53.226Z [WARN]  TestAgent_IndexChurn/no_tags: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:53.226Z [DEBUG] TestAgent_IndexChurn/no_tags.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:53.227Z [DEBUG] TestAgent_IndexChurn/no_tags.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:53.239Z [INFO]  TestAgent_IndexChurn/no_tags.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7282470c-b299-ff2b-6375-732bb45040eb Address:127.0.0.1:16330}]"
>         writer.go:29: 2020-02-23T02:46:53.239Z [INFO]  TestAgent_IndexChurn/no_tags.server.serf.wan: serf: EventMemberJoin: Node-7282470c-b299-ff2b-6375-732bb45040eb.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:53.239Z [INFO]  TestAgent_IndexChurn/no_tags.server.serf.lan: serf: EventMemberJoin: Node-7282470c-b299-ff2b-6375-732bb45040eb 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags: Started DNS server: address=127.0.0.1:16325 network=udp
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags.server.raft: entering follower state: follower="Node at 127.0.0.1:16330 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags.server: Adding LAN server: server="Node-7282470c-b299-ff2b-6375-732bb45040eb (Addr: tcp/127.0.0.1:16330) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags.server: Handled event for server in area: event=member-join server=Node-7282470c-b299-ff2b-6375-732bb45040eb.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags: Started DNS server: address=127.0.0.1:16325 network=tcp
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags: Started HTTP server: address=127.0.0.1:16326 network=tcp
>         writer.go:29: 2020-02-23T02:46:53.240Z [INFO]  TestAgent_IndexChurn/no_tags: started state syncer
>         writer.go:29: 2020-02-23T02:46:53.291Z [WARN]  TestAgent_IndexChurn/no_tags.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:53.291Z [INFO]  TestAgent_IndexChurn/no_tags.server.raft: entering candidate state: node="Node at 127.0.0.1:16330 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:53.295Z [DEBUG] TestAgent_IndexChurn/no_tags.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:53.295Z [DEBUG] TestAgent_IndexChurn/no_tags.server.raft: vote granted: from=7282470c-b299-ff2b-6375-732bb45040eb term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:53.295Z [INFO]  TestAgent_IndexChurn/no_tags.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:53.295Z [INFO]  TestAgent_IndexChurn/no_tags.server.raft: entering leader state: leader="Node at 127.0.0.1:16330 [Leader]"
>         writer.go:29: 2020-02-23T02:46:53.295Z [INFO]  TestAgent_IndexChurn/no_tags.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:53.295Z [INFO]  TestAgent_IndexChurn/no_tags.server: New leader elected: payload=Node-7282470c-b299-ff2b-6375-732bb45040eb
>         writer.go:29: 2020-02-23T02:46:53.303Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:53.311Z [INFO]  TestAgent_IndexChurn/no_tags.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:53.311Z [INFO]  TestAgent_IndexChurn/no_tags.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:53.311Z [DEBUG] TestAgent_IndexChurn/no_tags.server: Skipping self join check for node since the cluster is too small: node=Node-7282470c-b299-ff2b-6375-732bb45040eb
>         writer.go:29: 2020-02-23T02:46:53.311Z [INFO]  TestAgent_IndexChurn/no_tags.server: member joined, marking health alive: member=Node-7282470c-b299-ff2b-6375-732bb45040eb
>         writer.go:29: 2020-02-23T02:46:53.401Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.404Z [INFO]  TestAgent_IndexChurn/no_tags: Synced node info
>         writer.go:29: 2020-02-23T02:46:53.406Z [INFO]  TestAgent_IndexChurn/no_tags: Synced service: service=redis
>         writer.go:29: 2020-02-23T02:46:53.406Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.408Z [INFO]  TestAgent_IndexChurn/no_tags: Synced check: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.410Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.410Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.410Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.410Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.410Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.908Z [DEBUG] TestAgent_IndexChurn/no_tags: Registered node: node="Node-level check"
>         writer.go:29: 2020-02-23T02:46:53.908Z [DEBUG] TestAgent_IndexChurn/no_tags: Registered node: node="Serf Health Status"
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Registered node: node="Service-level check"
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=1
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=2
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=3
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=4
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=5
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=6
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=7
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=8
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=9
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.909Z [INFO]  TestAgent_IndexChurn/no_tags: Sync in progress: iteration=10
>         writer.go:29: 2020-02-23T02:46:53.909Z [DEBUG] TestAgent_IndexChurn/no_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:53.910Z [DEBUG] TestAgent_IndexChurn/no_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:53.910Z [DEBUG] TestAgent_IndexChurn/no_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:53.910Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:53.910Z [DEBUG] TestAgent_IndexChurn/no_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:53.910Z [INFO]  TestAgent_IndexChurn/no_tags: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:53.910Z [INFO]  TestAgent_IndexChurn/no_tags.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:53.910Z [DEBUG] TestAgent_IndexChurn/no_tags.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:53.910Z [WARN]  TestAgent_IndexChurn/no_tags.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:53.910Z [DEBUG] TestAgent_IndexChurn/no_tags.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:53.912Z [WARN]  TestAgent_IndexChurn/no_tags.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: consul server down
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: shutdown complete
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: Stopping server: protocol=DNS address=127.0.0.1:16325 network=tcp
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: Stopping server: protocol=DNS address=127.0.0.1:16325 network=udp
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: Stopping server: protocol=HTTP address=127.0.0.1:16326 network=tcp
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:53.913Z [INFO]  TestAgent_IndexChurn/no_tags: Endpoints down
>     --- PASS: TestAgent_IndexChurn/with_tags (1.22s)
>         writer.go:29: 2020-02-23T02:46:53.958Z [WARN]  TestAgent_IndexChurn/with_tags: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:53.958Z [DEBUG] TestAgent_IndexChurn/with_tags.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:53.958Z [DEBUG] TestAgent_IndexChurn/with_tags.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:54.172Z [INFO]  TestAgent_IndexChurn/with_tags.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0c801871-0493-e570-c302-16d2f9866126 Address:127.0.0.1:16378}]"
>         writer.go:29: 2020-02-23T02:46:54.172Z [INFO]  TestAgent_IndexChurn/with_tags.server.raft: entering follower state: follower="Node at 127.0.0.1:16378 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:54.173Z [INFO]  TestAgent_IndexChurn/with_tags.server.serf.wan: serf: EventMemberJoin: Node-0c801871-0493-e570-c302-16d2f9866126.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:54.174Z [INFO]  TestAgent_IndexChurn/with_tags.server.serf.lan: serf: EventMemberJoin: Node-0c801871-0493-e570-c302-16d2f9866126 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:54.175Z [INFO]  TestAgent_IndexChurn/with_tags.server: Adding LAN server: server="Node-0c801871-0493-e570-c302-16d2f9866126 (Addr: tcp/127.0.0.1:16378) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:54.175Z [INFO]  TestAgent_IndexChurn/with_tags.server: Handled event for server in area: event=member-join server=Node-0c801871-0493-e570-c302-16d2f9866126.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:54.175Z [INFO]  TestAgent_IndexChurn/with_tags: Started DNS server: address=127.0.0.1:16373 network=tcp
>         writer.go:29: 2020-02-23T02:46:54.175Z [INFO]  TestAgent_IndexChurn/with_tags: Started DNS server: address=127.0.0.1:16373 network=udp
>         writer.go:29: 2020-02-23T02:46:54.176Z [INFO]  TestAgent_IndexChurn/with_tags: Started HTTP server: address=127.0.0.1:16374 network=tcp
>         writer.go:29: 2020-02-23T02:46:54.176Z [INFO]  TestAgent_IndexChurn/with_tags: started state syncer
>         writer.go:29: 2020-02-23T02:46:54.236Z [WARN]  TestAgent_IndexChurn/with_tags.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:54.236Z [INFO]  TestAgent_IndexChurn/with_tags.server.raft: entering candidate state: node="Node at 127.0.0.1:16378 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:54.300Z [DEBUG] TestAgent_IndexChurn/with_tags.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:54.300Z [DEBUG] TestAgent_IndexChurn/with_tags.server.raft: vote granted: from=0c801871-0493-e570-c302-16d2f9866126 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:54.300Z [INFO]  TestAgent_IndexChurn/with_tags.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:54.300Z [INFO]  TestAgent_IndexChurn/with_tags.server.raft: entering leader state: leader="Node at 127.0.0.1:16378 [Leader]"
>         writer.go:29: 2020-02-23T02:46:54.300Z [INFO]  TestAgent_IndexChurn/with_tags.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:54.300Z [INFO]  TestAgent_IndexChurn/with_tags.server: New leader elected: payload=Node-0c801871-0493-e570-c302-16d2f9866126
>         writer.go:29: 2020-02-23T02:46:54.309Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:54.324Z [INFO]  TestAgent_IndexChurn/with_tags.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:54.324Z [INFO]  TestAgent_IndexChurn/with_tags.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:54.324Z [DEBUG] TestAgent_IndexChurn/with_tags.server: Skipping self join check for node since the cluster is too small: node=Node-0c801871-0493-e570-c302-16d2f9866126
>         writer.go:29: 2020-02-23T02:46:54.324Z [INFO]  TestAgent_IndexChurn/with_tags.server: member joined, marking health alive: member=Node-0c801871-0493-e570-c302-16d2f9866126
>         writer.go:29: 2020-02-23T02:46:54.478Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:54.504Z [INFO]  TestAgent_IndexChurn/with_tags: Synced node info
>         writer.go:29: 2020-02-23T02:46:54.613Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:54.613Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:54.631Z [INFO]  TestAgent_IndexChurn/with_tags: Synced service: service=redis
>         writer.go:29: 2020-02-23T02:46:54.631Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:54.634Z [INFO]  TestAgent_IndexChurn/with_tags: Synced check: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.134Z [DEBUG] TestAgent_IndexChurn/with_tags: Registered node: node="Node-level check"
>         writer.go:29: 2020-02-23T02:46:55.134Z [DEBUG] TestAgent_IndexChurn/with_tags: Registered node: node="Serf Health Status"
>         writer.go:29: 2020-02-23T02:46:55.134Z [DEBUG] TestAgent_IndexChurn/with_tags: Registered node: node="Service-level check"
>         writer.go:29: 2020-02-23T02:46:55.134Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=1
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=2
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=3
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=4
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=5
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=6
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=7
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=8
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=9
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [INFO]  TestAgent_IndexChurn/with_tags: Sync in progress: iteration=10
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Node info in sync
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Service in sync: service=redis
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=redis-check
>         writer.go:29: 2020-02-23T02:46:55.135Z [DEBUG] TestAgent_IndexChurn/with_tags: Check in sync: check=node-check
>         writer.go:29: 2020-02-23T02:46:55.136Z [INFO]  TestAgent_IndexChurn/with_tags: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:55.136Z [INFO]  TestAgent_IndexChurn/with_tags.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:55.136Z [DEBUG] TestAgent_IndexChurn/with_tags.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:55.136Z [WARN]  TestAgent_IndexChurn/with_tags.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:55.136Z [DEBUG] TestAgent_IndexChurn/with_tags.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:55.137Z [WARN]  TestAgent_IndexChurn/with_tags.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: consul server down
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: shutdown complete
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: Stopping server: protocol=DNS address=127.0.0.1:16373 network=tcp
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: Stopping server: protocol=DNS address=127.0.0.1:16373 network=udp
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: Stopping server: protocol=HTTP address=127.0.0.1:16374 network=tcp
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:55.139Z [INFO]  TestAgent_IndexChurn/with_tags: Endpoints down
> === CONT  TestAgentConnectAuthorize_allowTrustDomain
> --- PASS: TestAgentConnectAuthorize_denyWildcard (0.25s)
>     writer.go:29: 2020-02-23T02:46:54.979Z [WARN]  TestAgentConnectAuthorize_denyWildcard: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.979Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.982Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.997Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f1cfabe6-f431-8fef-0990-8bcb2cfadd1c Address:127.0.0.1:16414}]"
>     writer.go:29: 2020-02-23T02:46:54.997Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.raft: entering follower state: follower="Node at 127.0.0.1:16414 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.998Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.serf.wan: serf: EventMemberJoin: Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.serf.lan: serf: EventMemberJoin: Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Started DNS server: address=127.0.0.1:16409 network=udp
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server: Adding LAN server: server="Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c (Addr: tcp/127.0.0.1:16414) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server: Handled event for server in area: event=member-join server=Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Started DNS server: address=127.0.0.1:16409 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Started HTTP server: address=127.0.0.1:16410 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.999Z [INFO]  TestAgentConnectAuthorize_denyWildcard: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.040Z [WARN]  TestAgentConnectAuthorize_denyWildcard.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.040Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.raft: entering candidate state: node="Node at 127.0.0.1:16414 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.043Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.043Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.server.raft: vote granted: from=f1cfabe6-f431-8fef-0990-8bcb2cfadd1c term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.043Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.043Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.raft: entering leader state: leader="Node at 127.0.0.1:16414 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.044Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.044Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server: New leader elected: payload=Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c
>     writer.go:29: 2020-02-23T02:46:55.052Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.061Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.061Z [INFO]  TestAgentConnectAuthorize_denyWildcard.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.061Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.server: Skipping self join check for node since the cluster is too small: node=Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c
>     writer.go:29: 2020-02-23T02:46:55.061Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server: member joined, marking health alive: member=Node-f1cfabe6-f431-8fef-0990-8bcb2cfadd1c
>     writer.go:29: 2020-02-23T02:46:55.124Z [DEBUG] TestAgentConnectAuthorize_denyWildcard: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.127Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.127Z [DEBUG] TestAgentConnectAuthorize_denyWildcard: Node info in sync
>     writer.go:29: 2020-02-23T02:46:55.212Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.212Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.212Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.212Z [WARN]  TestAgentConnectAuthorize_denyWildcard.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.212Z [DEBUG] TestAgentConnectAuthorize_denyWildcard.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.214Z [WARN]  TestAgentConnectAuthorize_denyWildcard.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.215Z [INFO]  TestAgentConnectAuthorize_denyWildcard.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.215Z [INFO]  TestAgentConnectAuthorize_denyWildcard: consul server down
>     writer.go:29: 2020-02-23T02:46:55.215Z [INFO]  TestAgentConnectAuthorize_denyWildcard: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.215Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Stopping server: protocol=DNS address=127.0.0.1:16409 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.215Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Stopping server: protocol=DNS address=127.0.0.1:16409 network=udp
>     writer.go:29: 2020-02-23T02:46:55.215Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Stopping server: protocol=HTTP address=127.0.0.1:16410 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.216Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.216Z [INFO]  TestAgentConnectAuthorize_denyWildcard: Endpoints down
> === CONT  TestAgentConnectAuthorize_deny
> --- PASS: TestAgentConnectAuthorize_serviceWrite (0.36s)
>     writer.go:29: 2020-02-23T02:46:54.896Z [WARN]  TestAgentConnectAuthorize_serviceWrite: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:54.896Z [WARN]  TestAgentConnectAuthorize_serviceWrite: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:54.896Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:54.897Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:54.907Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b4487afd-50f9-80f0-18ad-f489542ba3b1 Address:127.0.0.1:16426}]"
>     writer.go:29: 2020-02-23T02:46:54.907Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.serf.wan: serf: EventMemberJoin: Node-b4487afd-50f9-80f0-18ad-f489542ba3b1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.908Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.serf.lan: serf: EventMemberJoin: Node-b4487afd-50f9-80f0-18ad-f489542ba3b1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.908Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Started DNS server: address=127.0.0.1:16421 network=udp
>     writer.go:29: 2020-02-23T02:46:54.908Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.raft: entering follower state: follower="Node at 127.0.0.1:16426 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:54.908Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: Adding LAN server: server="Node-b4487afd-50f9-80f0-18ad-f489542ba3b1 (Addr: tcp/127.0.0.1:16426) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:54.908Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: Handled event for server in area: event=member-join server=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.908Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Started DNS server: address=127.0.0.1:16421 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.909Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Started HTTP server: address=127.0.0.1:16422 network=tcp
>     writer.go:29: 2020-02-23T02:46:54.909Z [INFO]  TestAgentConnectAuthorize_serviceWrite: started state syncer
>     writer.go:29: 2020-02-23T02:46:54.973Z [WARN]  TestAgentConnectAuthorize_serviceWrite.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:54.973Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.raft: entering candidate state: node="Node at 127.0.0.1:16426 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.976Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.976Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.server.raft: vote granted: from=b4487afd-50f9-80f0-18ad-f489542ba3b1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.976Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.976Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.raft: entering leader state: leader="Node at 127.0.0.1:16426 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.976Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.976Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: New leader elected: payload=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:54.978Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:54.979Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:54.979Z [WARN]  TestAgentConnectAuthorize_serviceWrite.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:54.982Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:54.990Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:54.990Z [INFO]  TestAgentConnectAuthorize_serviceWrite.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:54.990Z [INFO]  TestAgentConnectAuthorize_serviceWrite.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:54.990Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.serf.lan: serf: EventMemberUpdate: Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:54.990Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.serf.wan: serf: EventMemberUpdate: Node-b4487afd-50f9-80f0-18ad-f489542ba3b1.dc1
>     writer.go:29: 2020-02-23T02:46:54.990Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: Handled event for server in area: event=member-update server=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.991Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Synced node info
>     writer.go:29: 2020-02-23T02:46:54.991Z [DEBUG] TestAgentConnectAuthorize_serviceWrite: Node info in sync
>     writer.go:29: 2020-02-23T02:46:54.995Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.003Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.003Z [INFO]  TestAgentConnectAuthorize_serviceWrite.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.003Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.server: Skipping self join check for node since the cluster is too small: node=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:55.003Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: member joined, marking health alive: member=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:55.006Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.server: Skipping self join check for node since the cluster is too small: node=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:55.220Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.acl: dropping node from result due to ACLs: node=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:55.220Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.acl: dropping node from result due to ACLs: node=Node-b4487afd-50f9-80f0-18ad-f489542ba3b1
>     writer.go:29: 2020-02-23T02:46:55.223Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.223Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.223Z [WARN]  TestAgentConnectAuthorize_serviceWrite.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_serviceWrite.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:55.225Z [WARN]  TestAgentConnectAuthorize_serviceWrite.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: consul server down
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Stopping server: protocol=DNS address=127.0.0.1:16421 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Stopping server: protocol=DNS address=127.0.0.1:16421 network=udp
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Stopping server: protocol=HTTP address=127.0.0.1:16422 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.230Z [INFO]  TestAgentConnectAuthorize_serviceWrite: Endpoints down
> === CONT  TestAgentConnectAuthorize_allow
> --- PASS: TestAgentConnectAuthorize_allowTrustDomain (0.33s)
>     writer.go:29: 2020-02-23T02:46:55.146Z [WARN]  TestAgentConnectAuthorize_allowTrustDomain: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.146Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.147Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.155Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ddbd0d92-a59a-322c-66fc-96faf05603da Address:127.0.0.1:16432}]"
>     writer.go:29: 2020-02-23T02:46:55.155Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.raft: entering follower state: follower="Node at 127.0.0.1:16432 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.156Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.serf.wan: serf: EventMemberJoin: Node-ddbd0d92-a59a-322c-66fc-96faf05603da.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.156Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.serf.lan: serf: EventMemberJoin: Node-ddbd0d92-a59a-322c-66fc-96faf05603da 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.157Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server: Adding LAN server: server="Node-ddbd0d92-a59a-322c-66fc-96faf05603da (Addr: tcp/127.0.0.1:16432) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.157Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Started DNS server: address=127.0.0.1:16427 network=udp
>     writer.go:29: 2020-02-23T02:46:55.157Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server: Handled event for server in area: event=member-join server=Node-ddbd0d92-a59a-322c-66fc-96faf05603da.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.157Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Started DNS server: address=127.0.0.1:16427 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.157Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Started HTTP server: address=127.0.0.1:16428 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.157Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.191Z [WARN]  TestAgentConnectAuthorize_allowTrustDomain.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.191Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.raft: entering candidate state: node="Node at 127.0.0.1:16432 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.194Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.195Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.server.raft: vote granted: from=ddbd0d92-a59a-322c-66fc-96faf05603da term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.195Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.195Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.raft: entering leader state: leader="Node at 127.0.0.1:16432 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.195Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.195Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server: New leader elected: payload=Node-ddbd0d92-a59a-322c-66fc-96faf05603da
>     writer.go:29: 2020-02-23T02:46:55.202Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.210Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.210Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.210Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.server: Skipping self join check for node since the cluster is too small: node=Node-ddbd0d92-a59a-322c-66fc-96faf05603da
>     writer.go:29: 2020-02-23T02:46:55.210Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server: member joined, marking health alive: member=Node-ddbd0d92-a59a-322c-66fc-96faf05603da
>     writer.go:29: 2020-02-23T02:46:55.341Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.345Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.469Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.469Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.469Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.469Z [WARN]  TestAgentConnectAuthorize_allowTrustDomain.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.469Z [DEBUG] TestAgentConnectAuthorize_allowTrustDomain.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.471Z [WARN]  TestAgentConnectAuthorize_allowTrustDomain.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: consul server down
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Stopping server: protocol=DNS address=127.0.0.1:16427 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Stopping server: protocol=DNS address=127.0.0.1:16427 network=udp
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Stopping server: protocol=HTTP address=127.0.0.1:16428 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.473Z [INFO]  TestAgentConnectAuthorize_allowTrustDomain: Endpoints down
> === CONT  TestAgentConnectAuthorize_idNotService
> --- PASS: TestAgentConnectAuthorize_allow (0.30s)
>     writer.go:29: 2020-02-23T02:46:55.247Z [WARN]  TestAgentConnectAuthorize_allow: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.247Z [DEBUG] TestAgentConnectAuthorize_allow.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.248Z [DEBUG] TestAgentConnectAuthorize_allow.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_allow.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:03444c00-1470-2e8d-3850-7424e3342e71 Address:127.0.0.1:16354}]"
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_allow.server.serf.wan: serf: EventMemberJoin: Node-03444c00-1470-2e8d-3850-7424e3342e71.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_allow.server.serf.lan: serf: EventMemberJoin: Node-03444c00-1470-2e8d-3850-7424e3342e71 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_allow: Started DNS server: address=127.0.0.1:16349 network=udp
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_allow.server.raft: entering follower state: follower="Node at 127.0.0.1:16354 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_allow.server: Adding LAN server: server="Node-03444c00-1470-2e8d-3850-7424e3342e71 (Addr: tcp/127.0.0.1:16354) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_allow.server: Handled event for server in area: event=member-join server=Node-03444c00-1470-2e8d-3850-7424e3342e71.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_allow: Started DNS server: address=127.0.0.1:16349 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.261Z [INFO]  TestAgentConnectAuthorize_allow: Started HTTP server: address=127.0.0.1:16350 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.261Z [INFO]  TestAgentConnectAuthorize_allow: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.321Z [WARN]  TestAgentConnectAuthorize_allow.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.321Z [INFO]  TestAgentConnectAuthorize_allow.server.raft: entering candidate state: node="Node at 127.0.0.1:16354 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.324Z [DEBUG] TestAgentConnectAuthorize_allow.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.324Z [DEBUG] TestAgentConnectAuthorize_allow.server.raft: vote granted: from=03444c00-1470-2e8d-3850-7424e3342e71 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.324Z [INFO]  TestAgentConnectAuthorize_allow.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.324Z [INFO]  TestAgentConnectAuthorize_allow.server.raft: entering leader state: leader="Node at 127.0.0.1:16354 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.324Z [INFO]  TestAgentConnectAuthorize_allow.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.324Z [INFO]  TestAgentConnectAuthorize_allow.server: New leader elected: payload=Node-03444c00-1470-2e8d-3850-7424e3342e71
>     writer.go:29: 2020-02-23T02:46:55.332Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.351Z [INFO]  TestAgentConnectAuthorize_allow.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.351Z [INFO]  TestAgentConnectAuthorize_allow.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.351Z [DEBUG] TestAgentConnectAuthorize_allow.server: Skipping self join check for node since the cluster is too small: node=Node-03444c00-1470-2e8d-3850-7424e3342e71
>     writer.go:29: 2020-02-23T02:46:55.352Z [INFO]  TestAgentConnectAuthorize_allow.server: member joined, marking health alive: member=Node-03444c00-1470-2e8d-3850-7424e3342e71
>     writer.go:29: 2020-02-23T02:46:55.417Z [DEBUG] TestAgentConnectAuthorize_allow: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.421Z [INFO]  TestAgentConnectAuthorize_allow: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.523Z [INFO]  TestAgentConnectAuthorize_allow: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.523Z [INFO]  TestAgentConnectAuthorize_allow.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.523Z [DEBUG] TestAgentConnectAuthorize_allow.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.523Z [WARN]  TestAgentConnectAuthorize_allow.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.523Z [DEBUG] TestAgentConnectAuthorize_allow.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.525Z [WARN]  TestAgentConnectAuthorize_allow.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: consul server down
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: Stopping server: protocol=DNS address=127.0.0.1:16349 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: Stopping server: protocol=DNS address=127.0.0.1:16349 network=udp
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: Stopping server: protocol=HTTP address=127.0.0.1:16350 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.527Z [INFO]  TestAgentConnectAuthorize_allow: Endpoints down
> === CONT  TestAgentConnectAuthorize_idInvalidFormat
> --- PASS: TestAgentConnectAuthorize_deny (0.47s)
>     writer.go:29: 2020-02-23T02:46:55.222Z [WARN]  TestAgentConnectAuthorize_deny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_deny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.223Z [DEBUG] TestAgentConnectAuthorize_deny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.258Z [INFO]  TestAgentConnectAuthorize_deny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3fc61303-fda2-538f-5e18-08d83b7d4464 Address:127.0.0.1:16360}]"
>     writer.go:29: 2020-02-23T02:46:55.258Z [INFO]  TestAgentConnectAuthorize_deny.server.raft: entering follower state: follower="Node at 127.0.0.1:16360 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_deny.server.serf.wan: serf: EventMemberJoin: Node-3fc61303-fda2-538f-5e18-08d83b7d4464.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_deny.server.serf.lan: serf: EventMemberJoin: Node-3fc61303-fda2-538f-5e18-08d83b7d4464 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_deny: Started DNS server: address=127.0.0.1:16355 network=udp
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_deny.server: Adding LAN server: server="Node-3fc61303-fda2-538f-5e18-08d83b7d4464 (Addr: tcp/127.0.0.1:16360) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_deny.server: Handled event for server in area: event=member-join server=Node-3fc61303-fda2-538f-5e18-08d83b7d4464.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.259Z [INFO]  TestAgentConnectAuthorize_deny: Started DNS server: address=127.0.0.1:16355 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_deny: Started HTTP server: address=127.0.0.1:16356 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.260Z [INFO]  TestAgentConnectAuthorize_deny: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.307Z [WARN]  TestAgentConnectAuthorize_deny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.307Z [INFO]  TestAgentConnectAuthorize_deny.server.raft: entering candidate state: node="Node at 127.0.0.1:16360 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.320Z [DEBUG] TestAgentConnectAuthorize_deny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.320Z [DEBUG] TestAgentConnectAuthorize_deny.server.raft: vote granted: from=3fc61303-fda2-538f-5e18-08d83b7d4464 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.320Z [INFO]  TestAgentConnectAuthorize_deny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.320Z [INFO]  TestAgentConnectAuthorize_deny.server.raft: entering leader state: leader="Node at 127.0.0.1:16360 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.320Z [INFO]  TestAgentConnectAuthorize_deny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.320Z [INFO]  TestAgentConnectAuthorize_deny.server: New leader elected: payload=Node-3fc61303-fda2-538f-5e18-08d83b7d4464
>     writer.go:29: 2020-02-23T02:46:55.327Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.345Z [INFO]  TestAgentConnectAuthorize_deny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.345Z [INFO]  TestAgentConnectAuthorize_deny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.345Z [DEBUG] TestAgentConnectAuthorize_deny.server: Skipping self join check for node since the cluster is too small: node=Node-3fc61303-fda2-538f-5e18-08d83b7d4464
>     writer.go:29: 2020-02-23T02:46:55.345Z [INFO]  TestAgentConnectAuthorize_deny.server: member joined, marking health alive: member=Node-3fc61303-fda2-538f-5e18-08d83b7d4464
>     writer.go:29: 2020-02-23T02:46:55.487Z [DEBUG] TestAgentConnectAuthorize_deny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.507Z [INFO]  TestAgentConnectAuthorize_deny: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.507Z [DEBUG] TestAgentConnectAuthorize_deny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:55.682Z [INFO]  TestAgentConnectAuthorize_deny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.682Z [INFO]  TestAgentConnectAuthorize_deny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.682Z [DEBUG] TestAgentConnectAuthorize_deny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.682Z [WARN]  TestAgentConnectAuthorize_deny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.682Z [DEBUG] TestAgentConnectAuthorize_deny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.684Z [WARN]  TestAgentConnectAuthorize_deny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: consul server down
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: Stopping server: protocol=DNS address=127.0.0.1:16355 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: Stopping server: protocol=DNS address=127.0.0.1:16355 network=udp
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: Stopping server: protocol=HTTP address=127.0.0.1:16356 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.685Z [INFO]  TestAgentConnectAuthorize_deny: Endpoints down
> === CONT  TestAgentConnectAuthorize_noTarget
> --- PASS: TestAgentConnectAuthorize_idNotService (0.25s)
>     writer.go:29: 2020-02-23T02:46:55.494Z [WARN]  TestAgentConnectAuthorize_idNotService: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.494Z [DEBUG] TestAgentConnectAuthorize_idNotService.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.494Z [DEBUG] TestAgentConnectAuthorize_idNotService.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.513Z [INFO]  TestAgentConnectAuthorize_idNotService.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:233dd9b6-93c7-2079-8013-35bd9fbcf157 Address:127.0.0.1:16420}]"
>     writer.go:29: 2020-02-23T02:46:55.514Z [INFO]  TestAgentConnectAuthorize_idNotService.server.raft: entering follower state: follower="Node at 127.0.0.1:16420 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.514Z [INFO]  TestAgentConnectAuthorize_idNotService.server.serf.wan: serf: EventMemberJoin: Node-233dd9b6-93c7-2079-8013-35bd9fbcf157.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.514Z [INFO]  TestAgentConnectAuthorize_idNotService.server.serf.lan: serf: EventMemberJoin: Node-233dd9b6-93c7-2079-8013-35bd9fbcf157 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.514Z [INFO]  TestAgentConnectAuthorize_idNotService.server: Adding LAN server: server="Node-233dd9b6-93c7-2079-8013-35bd9fbcf157 (Addr: tcp/127.0.0.1:16420) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.514Z [INFO]  TestAgentConnectAuthorize_idNotService: Started DNS server: address=127.0.0.1:16415 network=udp
>     writer.go:29: 2020-02-23T02:46:55.515Z [INFO]  TestAgentConnectAuthorize_idNotService.server: Handled event for server in area: event=member-join server=Node-233dd9b6-93c7-2079-8013-35bd9fbcf157.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.515Z [INFO]  TestAgentConnectAuthorize_idNotService: Started DNS server: address=127.0.0.1:16415 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.515Z [INFO]  TestAgentConnectAuthorize_idNotService: Started HTTP server: address=127.0.0.1:16416 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.515Z [INFO]  TestAgentConnectAuthorize_idNotService: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.578Z [WARN]  TestAgentConnectAuthorize_idNotService.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.578Z [INFO]  TestAgentConnectAuthorize_idNotService.server.raft: entering candidate state: node="Node at 127.0.0.1:16420 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.589Z [DEBUG] TestAgentConnectAuthorize_idNotService.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.589Z [DEBUG] TestAgentConnectAuthorize_idNotService.server.raft: vote granted: from=233dd9b6-93c7-2079-8013-35bd9fbcf157 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.589Z [INFO]  TestAgentConnectAuthorize_idNotService.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.589Z [INFO]  TestAgentConnectAuthorize_idNotService.server.raft: entering leader state: leader="Node at 127.0.0.1:16420 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.589Z [INFO]  TestAgentConnectAuthorize_idNotService.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.589Z [INFO]  TestAgentConnectAuthorize_idNotService.server: New leader elected: payload=Node-233dd9b6-93c7-2079-8013-35bd9fbcf157
>     writer.go:29: 2020-02-23T02:46:55.597Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.604Z [INFO]  TestAgentConnectAuthorize_idNotService.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.604Z [INFO]  TestAgentConnectAuthorize_idNotService.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.604Z [DEBUG] TestAgentConnectAuthorize_idNotService.server: Skipping self join check for node since the cluster is too small: node=Node-233dd9b6-93c7-2079-8013-35bd9fbcf157
>     writer.go:29: 2020-02-23T02:46:55.604Z [INFO]  TestAgentConnectAuthorize_idNotService.server: member joined, marking health alive: member=Node-233dd9b6-93c7-2079-8013-35bd9fbcf157
>     writer.go:29: 2020-02-23T02:46:55.609Z [DEBUG] TestAgentConnectAuthorize_idNotService: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.612Z [INFO]  TestAgentConnectAuthorize_idNotService: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.612Z [DEBUG] TestAgentConnectAuthorize_idNotService: Node info in sync
>     writer.go:29: 2020-02-23T02:46:55.720Z [INFO]  TestAgentConnectAuthorize_idNotService: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.720Z [INFO]  TestAgentConnectAuthorize_idNotService.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.720Z [DEBUG] TestAgentConnectAuthorize_idNotService.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.720Z [WARN]  TestAgentConnectAuthorize_idNotService.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.720Z [DEBUG] TestAgentConnectAuthorize_idNotService.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.722Z [WARN]  TestAgentConnectAuthorize_idNotService.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.723Z [INFO]  TestAgentConnectAuthorize_idNotService.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.723Z [INFO]  TestAgentConnectAuthorize_idNotService: consul server down
>     writer.go:29: 2020-02-23T02:46:55.723Z [INFO]  TestAgentConnectAuthorize_idNotService: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.723Z [INFO]  TestAgentConnectAuthorize_idNotService: Stopping server: protocol=DNS address=127.0.0.1:16415 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.723Z [INFO]  TestAgentConnectAuthorize_idNotService: Stopping server: protocol=DNS address=127.0.0.1:16415 network=udp
>     writer.go:29: 2020-02-23T02:46:55.724Z [INFO]  TestAgentConnectAuthorize_idNotService: Stopping server: protocol=HTTP address=127.0.0.1:16416 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.724Z [INFO]  TestAgentConnectAuthorize_idNotService: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.724Z [INFO]  TestAgentConnectAuthorize_idNotService: Endpoints down
> === CONT  TestAgentConnectAuthorize_badBody
> --- PASS: TestAgentConnectAuthorize_noTarget (0.15s)
>     writer.go:29: 2020-02-23T02:46:55.693Z [WARN]  TestAgentConnectAuthorize_noTarget: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.693Z [DEBUG] TestAgentConnectAuthorize_noTarget.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.694Z [DEBUG] TestAgentConnectAuthorize_noTarget.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.703Z [INFO]  TestAgentConnectAuthorize_noTarget.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:ac853405-6ed0-be07-8a85-9dcc1a46f3e3 Address:127.0.0.1:16456}]"
>     writer.go:29: 2020-02-23T02:46:55.703Z [INFO]  TestAgentConnectAuthorize_noTarget.server.raft: entering follower state: follower="Node at 127.0.0.1:16456 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.704Z [INFO]  TestAgentConnectAuthorize_noTarget.server.serf.wan: serf: EventMemberJoin: Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.704Z [INFO]  TestAgentConnectAuthorize_noTarget.server.serf.lan: serf: EventMemberJoin: Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.704Z [INFO]  TestAgentConnectAuthorize_noTarget.server: Adding LAN server: server="Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3 (Addr: tcp/127.0.0.1:16456) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.704Z [INFO]  TestAgentConnectAuthorize_noTarget: Started DNS server: address=127.0.0.1:16451 network=udp
>     writer.go:29: 2020-02-23T02:46:55.704Z [INFO]  TestAgentConnectAuthorize_noTarget.server: Handled event for server in area: event=member-join server=Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.704Z [INFO]  TestAgentConnectAuthorize_noTarget: Started DNS server: address=127.0.0.1:16451 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.705Z [INFO]  TestAgentConnectAuthorize_noTarget: Started HTTP server: address=127.0.0.1:16452 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.705Z [INFO]  TestAgentConnectAuthorize_noTarget: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.758Z [WARN]  TestAgentConnectAuthorize_noTarget.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.758Z [INFO]  TestAgentConnectAuthorize_noTarget.server.raft: entering candidate state: node="Node at 127.0.0.1:16456 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.761Z [DEBUG] TestAgentConnectAuthorize_noTarget.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.761Z [DEBUG] TestAgentConnectAuthorize_noTarget.server.raft: vote granted: from=ac853405-6ed0-be07-8a85-9dcc1a46f3e3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.761Z [INFO]  TestAgentConnectAuthorize_noTarget.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.761Z [INFO]  TestAgentConnectAuthorize_noTarget.server.raft: entering leader state: leader="Node at 127.0.0.1:16456 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.761Z [INFO]  TestAgentConnectAuthorize_noTarget.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.761Z [INFO]  TestAgentConnectAuthorize_noTarget.server: New leader elected: payload=Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3
>     writer.go:29: 2020-02-23T02:46:55.769Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.784Z [INFO]  TestAgentConnectAuthorize_noTarget.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.784Z [INFO]  TestAgentConnectAuthorize_noTarget.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.784Z [DEBUG] TestAgentConnectAuthorize_noTarget.server: Skipping self join check for node since the cluster is too small: node=Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3
>     writer.go:29: 2020-02-23T02:46:55.784Z [INFO]  TestAgentConnectAuthorize_noTarget.server: member joined, marking health alive: member=Node-ac853405-6ed0-be07-8a85-9dcc1a46f3e3
>     writer.go:29: 2020-02-23T02:46:55.834Z [INFO]  TestAgentConnectAuthorize_noTarget: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.834Z [INFO]  TestAgentConnectAuthorize_noTarget.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.834Z [DEBUG] TestAgentConnectAuthorize_noTarget.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.834Z [WARN]  TestAgentConnectAuthorize_noTarget.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.834Z [ERROR] TestAgentConnectAuthorize_noTarget.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:55.834Z [DEBUG] TestAgentConnectAuthorize_noTarget.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.836Z [WARN]  TestAgentConnectAuthorize_noTarget.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: consul server down
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: Stopping server: protocol=DNS address=127.0.0.1:16451 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: Stopping server: protocol=DNS address=127.0.0.1:16451 network=udp
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: Stopping server: protocol=HTTP address=127.0.0.1:16452 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.838Z [INFO]  TestAgentConnectAuthorize_noTarget: Endpoints down
> === CONT  TestAgentConnectCALeafCert_secondaryDC_good
> --- PASS: TestAgentConnectAuthorize_badBody (0.15s)
>     writer.go:29: 2020-02-23T02:46:55.731Z [WARN]  TestAgentConnectAuthorize_badBody: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.731Z [DEBUG] TestAgentConnectAuthorize_badBody.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.732Z [DEBUG] TestAgentConnectAuthorize_badBody.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.741Z [INFO]  TestAgentConnectAuthorize_badBody.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9e225753-5271-be42-dc32-40100439adc0 Address:127.0.0.1:16450}]"
>     writer.go:29: 2020-02-23T02:46:55.741Z [INFO]  TestAgentConnectAuthorize_badBody.server.raft: entering follower state: follower="Node at 127.0.0.1:16450 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.741Z [INFO]  TestAgentConnectAuthorize_badBody.server.serf.wan: serf: EventMemberJoin: Node-9e225753-5271-be42-dc32-40100439adc0.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody.server.serf.lan: serf: EventMemberJoin: Node-9e225753-5271-be42-dc32-40100439adc0 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody.server: Adding LAN server: server="Node-9e225753-5271-be42-dc32-40100439adc0 (Addr: tcp/127.0.0.1:16450) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody.server: Handled event for server in area: event=member-join server=Node-9e225753-5271-be42-dc32-40100439adc0.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody: Started DNS server: address=127.0.0.1:16445 network=udp
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody: Started DNS server: address=127.0.0.1:16445 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody: Started HTTP server: address=127.0.0.1:16446 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.742Z [INFO]  TestAgentConnectAuthorize_badBody: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.785Z [WARN]  TestAgentConnectAuthorize_badBody.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.785Z [INFO]  TestAgentConnectAuthorize_badBody.server.raft: entering candidate state: node="Node at 127.0.0.1:16450 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.788Z [DEBUG] TestAgentConnectAuthorize_badBody.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.788Z [DEBUG] TestAgentConnectAuthorize_badBody.server.raft: vote granted: from=9e225753-5271-be42-dc32-40100439adc0 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.788Z [INFO]  TestAgentConnectAuthorize_badBody.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.788Z [INFO]  TestAgentConnectAuthorize_badBody.server.raft: entering leader state: leader="Node at 127.0.0.1:16450 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.789Z [INFO]  TestAgentConnectAuthorize_badBody.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.789Z [INFO]  TestAgentConnectAuthorize_badBody.server: New leader elected: payload=Node-9e225753-5271-be42-dc32-40100439adc0
>     writer.go:29: 2020-02-23T02:46:55.796Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.804Z [INFO]  TestAgentConnectAuthorize_badBody.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.804Z [INFO]  TestAgentConnectAuthorize_badBody.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.804Z [DEBUG] TestAgentConnectAuthorize_badBody.server: Skipping self join check for node since the cluster is too small: node=Node-9e225753-5271-be42-dc32-40100439adc0
>     writer.go:29: 2020-02-23T02:46:55.804Z [INFO]  TestAgentConnectAuthorize_badBody.server: member joined, marking health alive: member=Node-9e225753-5271-be42-dc32-40100439adc0
>     writer.go:29: 2020-02-23T02:46:55.808Z [DEBUG] TestAgentConnectAuthorize_badBody: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.810Z [INFO]  TestAgentConnectAuthorize_badBody: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.810Z [DEBUG] TestAgentConnectAuthorize_badBody: Node info in sync
>     writer.go:29: 2020-02-23T02:46:55.872Z [INFO]  TestAgentConnectAuthorize_badBody: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.872Z [INFO]  TestAgentConnectAuthorize_badBody.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.872Z [DEBUG] TestAgentConnectAuthorize_badBody.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.872Z [WARN]  TestAgentConnectAuthorize_badBody.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.872Z [DEBUG] TestAgentConnectAuthorize_badBody.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.874Z [WARN]  TestAgentConnectAuthorize_badBody.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: consul server down
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: Stopping server: protocol=DNS address=127.0.0.1:16445 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: Stopping server: protocol=DNS address=127.0.0.1:16445 network=udp
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: Stopping server: protocol=HTTP address=127.0.0.1:16446 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.876Z [INFO]  TestAgentConnectAuthorize_badBody: Endpoints down
> === CONT  TestAgentConnectCALeafCert_aclServiceReadDeny
> --- PASS: TestAgentConnectAuthorize_idInvalidFormat (0.42s)
>     writer.go:29: 2020-02-23T02:46:55.534Z [WARN]  TestAgentConnectAuthorize_idInvalidFormat: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.534Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.534Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.543Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:76fc7333-358a-ed4c-c46c-dd0dee9a5371 Address:127.0.0.1:16438}]"
>     writer.go:29: 2020-02-23T02:46:55.543Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.raft: entering follower state: follower="Node at 127.0.0.1:16438 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.544Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.serf.wan: serf: EventMemberJoin: Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.544Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.serf.lan: serf: EventMemberJoin: Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.544Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server: Handled event for server in area: event=member-join server=Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.544Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server: Adding LAN server: server="Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371 (Addr: tcp/127.0.0.1:16438) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.544Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Started DNS server: address=127.0.0.1:16433 network=udp
>     writer.go:29: 2020-02-23T02:46:55.545Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Started DNS server: address=127.0.0.1:16433 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.545Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Started HTTP server: address=127.0.0.1:16434 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.545Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.605Z [WARN]  TestAgentConnectAuthorize_idInvalidFormat.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.605Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.raft: entering candidate state: node="Node at 127.0.0.1:16438 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.609Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.609Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.server.raft: vote granted: from=76fc7333-358a-ed4c-c46c-dd0dee9a5371 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.609Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.609Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.raft: entering leader state: leader="Node at 127.0.0.1:16438 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.609Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.609Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server: New leader elected: payload=Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371
>     writer.go:29: 2020-02-23T02:46:55.617Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.624Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.624Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.624Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.server: Skipping self join check for node since the cluster is too small: node=Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371
>     writer.go:29: 2020-02-23T02:46:55.624Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server: member joined, marking health alive: member=Node-76fc7333-358a-ed4c-c46c-dd0dee9a5371
>     writer.go:29: 2020-02-23T02:46:55.901Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.905Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.905Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat: Node info in sync
>     writer.go:29: 2020-02-23T02:46:55.944Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:55.944Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:55.944Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.944Z [WARN]  TestAgentConnectAuthorize_idInvalidFormat.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.944Z [DEBUG] TestAgentConnectAuthorize_idInvalidFormat.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.946Z [WARN]  TestAgentConnectAuthorize_idInvalidFormat.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: consul server down
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: shutdown complete
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Stopping server: protocol=DNS address=127.0.0.1:16433 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Stopping server: protocol=DNS address=127.0.0.1:16433 network=udp
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Stopping server: protocol=HTTP address=127.0.0.1:16434 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:55.948Z [INFO]  TestAgentConnectAuthorize_idInvalidFormat: Endpoints down
> === CONT  TestAgentConnectCALeafCert_aclServiceWrite
> --- PASS: TestAgentConnectCALeafCert_aclServiceReadDeny (0.42s)
>     writer.go:29: 2020-02-23T02:46:55.886Z [WARN]  TestAgentConnectCALeafCert_aclServiceReadDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:55.886Z [WARN]  TestAgentConnectCALeafCert_aclServiceReadDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.886Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.886Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.903Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:db698caa-1124-bf76-9b2e-8f1e80d0064a Address:127.0.0.1:16444}]"
>     writer.go:29: 2020-02-23T02:46:55.903Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.serf.wan: serf: EventMemberJoin: Node-db698caa-1124-bf76-9b2e-8f1e80d0064a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.904Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16444 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.904Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.serf.lan: serf: EventMemberJoin: Node-db698caa-1124-bf76-9b2e-8f1e80d0064a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.904Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Handled event for server in area: event=member-join server=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.904Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Adding LAN server: server="Node-db698caa-1124-bf76-9b2e-8f1e80d0064a (Addr: tcp/127.0.0.1:16444) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.904Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Started DNS server: address=127.0.0.1:16439 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.904Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Started DNS server: address=127.0.0.1:16439 network=udp
>     writer.go:29: 2020-02-23T02:46:55.905Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Started HTTP server: address=127.0.0.1:16440 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.905Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.965Z [WARN]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.965Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16444 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.969Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.969Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: vote granted: from=db698caa-1124-bf76-9b2e-8f1e80d0064a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.969Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.969Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16444 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.969Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.969Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: New leader elected: payload=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:55.973Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:55.973Z [WARN]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:55.976Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.serf.lan: serf: EventMemberUpdate: Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.serf.wan: serf: EventMemberUpdate: Node-db698caa-1124-bf76-9b2e-8f1e80d0064a.dc1
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: Handled event for server in area: event=member-update server=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.983Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.989Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.989Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.989Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.server: Skipping self join check for node since the cluster is too small: node=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:55.989Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: member joined, marking health alive: member=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:55.992Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.server: Skipping self join check for node since the cluster is too small: node=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:56.206Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.251Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Synced node info
>     writer.go:29: 2020-02-23T02:46:56.252Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.277Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.acl: dropping node from result due to ACLs: node=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:56.277Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.acl: dropping node from result due to ACLs: node=Node-db698caa-1124-bf76-9b2e-8f1e80d0064a
>     writer.go:29: 2020-02-23T02:46:56.282Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.285Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Synced service: service=test-id
>     writer.go:29: 2020-02-23T02:46:56.285Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny: Check in sync: check=service:test-id
>     writer.go:29: 2020-02-23T02:46:56.288Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:56.288Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:56.288Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.288Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.288Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.288Z [WARN]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.288Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.288Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.288Z [DEBUG] TestAgentConnectCALeafCert_aclServiceReadDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.292Z [WARN]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Stopping server: protocol=DNS address=127.0.0.1:16439 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Stopping server: protocol=DNS address=127.0.0.1:16439 network=udp
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Stopping server: protocol=HTTP address=127.0.0.1:16440 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:56.294Z [INFO]  TestAgentConnectCALeafCert_aclServiceReadDeny: Endpoints down
> === CONT  TestAgentConnectCALeafCert_aclDefaultDeny
> 2020-02-23T02:46:56.427Z [ERROR] watch.watch: Watch errored: type=key error="Get https://127.0.0.1:17143/v1/kv/asdf: dial tcp 127.0.0.1:17143: connect: connection refused" retry=20s
> --- PASS: TestAgentConnectCALeafCert_aclServiceWrite (0.52s)
>     writer.go:29: 2020-02-23T02:46:55.956Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:55.956Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.957Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.957Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.970Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:45f0a58f-e866-bbf6-5782-ad541e281bcf Address:127.0.0.1:16468}]"
>     writer.go:29: 2020-02-23T02:46:55.970Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.wan: serf: EventMemberJoin: Node-45f0a58f-e866-bbf6-5782-ad541e281bcf.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.lan: serf: EventMemberJoin: Node-45f0a58f-e866-bbf6-5782-ad541e281bcf 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Started DNS server: address=127.0.0.1:16463 network=udp
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.raft: entering follower state: follower="Node at 127.0.0.1:16468 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Adding LAN server: server="Node-45f0a58f-e866-bbf6-5782-ad541e281bcf (Addr: tcp/127.0.0.1:16468) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Handled event for server in area: event=member-join server=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Started DNS server: address=127.0.0.1:16463 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.972Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Started HTTP server: address=127.0.0.1:16464 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.972Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: started state syncer
>     writer.go:29: 2020-02-23T02:46:56.013Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:56.013Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.raft: entering candidate state: node="Node at 127.0.0.1:16468 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:56.017Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:56.017Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.server.raft: vote granted: from=45f0a58f-e866-bbf6-5782-ad541e281bcf term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:56.017Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:56.017Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.raft: entering leader state: leader="Node at 127.0.0.1:16468 [Leader]"
>     writer.go:29: 2020-02-23T02:46:56.017Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:56.017Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: New leader elected: payload=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.019Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:56.020Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:56.020Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:56.021Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:56.021Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:56.023Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.lan: serf: EventMemberUpdate: Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.wan: serf: EventMemberUpdate: Node-45f0a58f-e866-bbf6-5782-ad541e281bcf.dc1
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:56.028Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Handled event for server in area: event=member-update server=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.lan: serf: EventMemberUpdate: Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.wan: serf: EventMemberUpdate: Node-45f0a58f-e866-bbf6-5782-ad541e281bcf.dc1
>     writer.go:29: 2020-02-23T02:46:56.028Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: Handled event for server in area: event=member-update server=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.032Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:56.039Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:56.039Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.039Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.server: Skipping self join check for node since the cluster is too small: node=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.039Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: member joined, marking health alive: member=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.041Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.server: Skipping self join check for node since the cluster is too small: node=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.041Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.server: Skipping self join check for node since the cluster is too small: node=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.141Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.143Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Synced node info
>     writer.go:29: 2020-02-23T02:46:56.371Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.acl: dropping node from result due to ACLs: node=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.371Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.acl: dropping node from result due to ACLs: node=Node-45f0a58f-e866-bbf6-5782-ad541e281bcf
>     writer.go:29: 2020-02-23T02:46:56.441Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.444Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Synced service: service=test-id
>     writer.go:29: 2020-02-23T02:46:56.444Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite: Check in sync: check=service:test-id
>     writer.go:29: 2020-02-23T02:46:56.460Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:56.460Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:56.460Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.460Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.460Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.460Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.460Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.460Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.460Z [DEBUG] TestAgentConnectCALeafCert_aclServiceWrite.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.461Z [WARN]  TestAgentConnectCALeafCert_aclServiceWrite.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.464Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: consul server down
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: shutdown complete
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Stopping server: protocol=DNS address=127.0.0.1:16463 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Stopping server: protocol=DNS address=127.0.0.1:16463 network=udp
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Stopping server: protocol=HTTP address=127.0.0.1:16464 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:56.465Z [INFO]  TestAgentConnectCALeafCert_aclServiceWrite: Endpoints down
> === CONT  TestAgentConnectCARoots_list
> --- PASS: TestAgentConnectCALeafCert_aclDefaultDeny (0.21s)
>     writer.go:29: 2020-02-23T02:46:56.303Z [WARN]  TestAgentConnectCALeafCert_aclDefaultDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:56.303Z [WARN]  TestAgentConnectCALeafCert_aclDefaultDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:56.303Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:56.303Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:56.312Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9355c83d-59cd-1663-a446-fe30f3e689f9 Address:127.0.0.1:16480}]"
>     writer.go:29: 2020-02-23T02:46:56.312Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.serf.wan: serf: EventMemberJoin: Node-9355c83d-59cd-1663-a446-fe30f3e689f9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.313Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.serf.lan: serf: EventMemberJoin: Node-9355c83d-59cd-1663-a446-fe30f3e689f9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.313Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Started DNS server: address=127.0.0.1:16475 network=udp
>     writer.go:29: 2020-02-23T02:46:56.313Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16480 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:56.313Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Adding LAN server: server="Node-9355c83d-59cd-1663-a446-fe30f3e689f9 (Addr: tcp/127.0.0.1:16480) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:56.313Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Handled event for server in area: event=member-join server=Node-9355c83d-59cd-1663-a446-fe30f3e689f9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.313Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Started DNS server: address=127.0.0.1:16475 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.314Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Started HTTP server: address=127.0.0.1:16476 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.314Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:56.381Z [WARN]  TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:56.382Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16480 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:56.441Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:56.441Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: vote granted: from=9355c83d-59cd-1663-a446-fe30f3e689f9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:56.441Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:56.441Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16480 [Leader]"
>     writer.go:29: 2020-02-23T02:46:56.441Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:56.441Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: New leader elected: payload=Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.444Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:56.446Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:56.446Z [WARN]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:56.459Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:56.462Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:56.462Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.462Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.462Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.serf.lan: serf: EventMemberUpdate: Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.462Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.serf.wan: serf: EventMemberUpdate: Node-9355c83d-59cd-1663-a446-fe30f3e689f9.dc1
>     writer.go:29: 2020-02-23T02:46:56.462Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: Handled event for server in area: event=member-update server=Node-9355c83d-59cd-1663-a446-fe30f3e689f9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.466Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:56.473Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:56.473Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.473Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.server: Skipping self join check for node since the cluster is too small: node=Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.473Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: member joined, marking health alive: member=Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.476Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.server: Skipping self join check for node since the cluster is too small: node=Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.488Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.acl: dropping node from result due to ACLs: node=Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.488Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.acl: dropping node from result due to ACLs: node=Node-9355c83d-59cd-1663-a446-fe30f3e689f9
>     writer.go:29: 2020-02-23T02:46:56.495Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Synced node info
>     writer.go:29: 2020-02-23T02:46:56.497Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Synced service: service=test-id
>     writer.go:29: 2020-02-23T02:46:56.497Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny: Check in sync: check=service:test-id
>     writer.go:29: 2020-02-23T02:46:56.499Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:56.499Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:56.499Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.499Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.499Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.499Z [WARN]  TestAgentConnectCALeafCert_aclDefaultDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.499Z [ERROR] TestAgentConnectCALeafCert_aclDefaultDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:56.499Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.499Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.499Z [DEBUG] TestAgentConnectCALeafCert_aclDefaultDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.501Z [WARN]  TestAgentConnectCALeafCert_aclDefaultDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Stopping server: protocol=DNS address=127.0.0.1:16475 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Stopping server: protocol=DNS address=127.0.0.1:16475 network=udp
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Stopping server: protocol=HTTP address=127.0.0.1:16476 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:56.502Z [INFO]  TestAgentConnectCALeafCert_aclDefaultDeny: Endpoints down
> === CONT  TestAgentConnectCARoots_empty
> --- PASS: TestAgentConnectCARoots_empty (0.38s)
>     writer.go:29: 2020-02-23T02:46:56.510Z [WARN]  TestAgentConnectCARoots_empty: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:56.510Z [DEBUG] TestAgentConnectCARoots_empty.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:56.510Z [DEBUG] TestAgentConnectCARoots_empty.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:56.519Z [INFO]  TestAgentConnectCARoots_empty.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:077b8af9-2264-1f4f-14a7-70ec277dea12 Address:127.0.0.1:16510}]"
>     writer.go:29: 2020-02-23T02:46:56.519Z [INFO]  TestAgentConnectCARoots_empty.server.raft: entering follower state: follower="Node at 127.0.0.1:16510 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:56.520Z [INFO]  TestAgentConnectCARoots_empty.server.serf.wan: serf: EventMemberJoin: Node-077b8af9-2264-1f4f-14a7-70ec277dea12.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.521Z [INFO]  TestAgentConnectCARoots_empty.server.serf.lan: serf: EventMemberJoin: Node-077b8af9-2264-1f4f-14a7-70ec277dea12 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.521Z [INFO]  TestAgentConnectCARoots_empty.server: Handled event for server in area: event=member-join server=Node-077b8af9-2264-1f4f-14a7-70ec277dea12.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.521Z [INFO]  TestAgentConnectCARoots_empty.server: Adding LAN server: server="Node-077b8af9-2264-1f4f-14a7-70ec277dea12 (Addr: tcp/127.0.0.1:16510) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:56.521Z [INFO]  TestAgentConnectCARoots_empty: Started DNS server: address=127.0.0.1:16505 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.521Z [INFO]  TestAgentConnectCARoots_empty: Started DNS server: address=127.0.0.1:16505 network=udp
>     writer.go:29: 2020-02-23T02:46:56.522Z [INFO]  TestAgentConnectCARoots_empty: Started HTTP server: address=127.0.0.1:16506 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.522Z [INFO]  TestAgentConnectCARoots_empty: started state syncer
>     writer.go:29: 2020-02-23T02:46:56.589Z [WARN]  TestAgentConnectCARoots_empty.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:56.589Z [INFO]  TestAgentConnectCARoots_empty.server.raft: entering candidate state: node="Node at 127.0.0.1:16510 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:56.592Z [DEBUG] TestAgentConnectCARoots_empty.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:56.592Z [DEBUG] TestAgentConnectCARoots_empty.server.raft: vote granted: from=077b8af9-2264-1f4f-14a7-70ec277dea12 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:56.592Z [INFO]  TestAgentConnectCARoots_empty.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:56.592Z [INFO]  TestAgentConnectCARoots_empty.server.raft: entering leader state: leader="Node at 127.0.0.1:16510 [Leader]"
>     writer.go:29: 2020-02-23T02:46:56.593Z [INFO]  TestAgentConnectCARoots_empty.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:56.593Z [INFO]  TestAgentConnectCARoots_empty.server: New leader elected: payload=Node-077b8af9-2264-1f4f-14a7-70ec277dea12
>     writer.go:29: 2020-02-23T02:46:56.596Z [INFO]  TestAgentConnectCARoots_empty.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.596Z [DEBUG] TestAgentConnectCARoots_empty.server: Skipping self join check for node since the cluster is too small: node=Node-077b8af9-2264-1f4f-14a7-70ec277dea12
>     writer.go:29: 2020-02-23T02:46:56.596Z [INFO]  TestAgentConnectCARoots_empty.server: member joined, marking health alive: member=Node-077b8af9-2264-1f4f-14a7-70ec277dea12
>     writer.go:29: 2020-02-23T02:46:56.882Z [INFO]  TestAgentConnectCARoots_empty: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:56.882Z [INFO]  TestAgentConnectCARoots_empty.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:56.882Z [DEBUG] TestAgentConnectCARoots_empty.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.882Z [WARN]  TestAgentConnectCARoots_empty.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.882Z [ERROR] TestAgentConnectCARoots_empty.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:56.882Z [DEBUG] TestAgentConnectCARoots_empty.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.884Z [WARN]  TestAgentConnectCARoots_empty.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: consul server down
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: shutdown complete
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: Stopping server: protocol=DNS address=127.0.0.1:16505 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: Stopping server: protocol=DNS address=127.0.0.1:16505 network=udp
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: Stopping server: protocol=HTTP address=127.0.0.1:16506 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:56.886Z [INFO]  TestAgentConnectCARoots_empty: Endpoints down
> === CONT  TestAgent_Token
> --- PASS: TestAgentConnectCARoots_list (0.47s)
>     writer.go:29: 2020-02-23T02:46:56.472Z [WARN]  TestAgentConnectCARoots_list: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:56.472Z [DEBUG] TestAgentConnectCARoots_list.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:56.472Z [DEBUG] TestAgentConnectCARoots_list.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:56.481Z [INFO]  TestAgentConnectCARoots_list.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d163d017-464f-834f-8c0b-17ceef2e171a Address:127.0.0.1:16486}]"
>     writer.go:29: 2020-02-23T02:46:56.481Z [INFO]  TestAgentConnectCARoots_list.server.raft: entering follower state: follower="Node at 127.0.0.1:16486 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:56.482Z [INFO]  TestAgentConnectCARoots_list.server.serf.wan: serf: EventMemberJoin: Node-d163d017-464f-834f-8c0b-17ceef2e171a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.483Z [INFO]  TestAgentConnectCARoots_list.server.serf.lan: serf: EventMemberJoin: Node-d163d017-464f-834f-8c0b-17ceef2e171a 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.483Z [INFO]  TestAgentConnectCARoots_list.server: Adding LAN server: server="Node-d163d017-464f-834f-8c0b-17ceef2e171a (Addr: tcp/127.0.0.1:16486) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:56.483Z [INFO]  TestAgentConnectCARoots_list.server: Handled event for server in area: event=member-join server=Node-d163d017-464f-834f-8c0b-17ceef2e171a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.483Z [INFO]  TestAgentConnectCARoots_list: Started DNS server: address=127.0.0.1:16481 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.483Z [INFO]  TestAgentConnectCARoots_list: Started DNS server: address=127.0.0.1:16481 network=udp
>     writer.go:29: 2020-02-23T02:46:56.484Z [INFO]  TestAgentConnectCARoots_list: Started HTTP server: address=127.0.0.1:16482 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.484Z [INFO]  TestAgentConnectCARoots_list: started state syncer
>     writer.go:29: 2020-02-23T02:46:56.535Z [WARN]  TestAgentConnectCARoots_list.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:56.535Z [INFO]  TestAgentConnectCARoots_list.server.raft: entering candidate state: node="Node at 127.0.0.1:16486 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:56.539Z [DEBUG] TestAgentConnectCARoots_list.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:56.539Z [DEBUG] TestAgentConnectCARoots_list.server.raft: vote granted: from=d163d017-464f-834f-8c0b-17ceef2e171a term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:56.539Z [INFO]  TestAgentConnectCARoots_list.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:56.539Z [INFO]  TestAgentConnectCARoots_list.server.raft: entering leader state: leader="Node at 127.0.0.1:16486 [Leader]"
>     writer.go:29: 2020-02-23T02:46:56.539Z [INFO]  TestAgentConnectCARoots_list.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:56.539Z [INFO]  TestAgentConnectCARoots_list.server: New leader elected: payload=Node-d163d017-464f-834f-8c0b-17ceef2e171a
>     writer.go:29: 2020-02-23T02:46:56.546Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:56.553Z [INFO]  TestAgentConnectCARoots_list.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:56.553Z [INFO]  TestAgentConnectCARoots_list.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.554Z [DEBUG] TestAgentConnectCARoots_list.server: Skipping self join check for node since the cluster is too small: node=Node-d163d017-464f-834f-8c0b-17ceef2e171a
>     writer.go:29: 2020-02-23T02:46:56.554Z [INFO]  TestAgentConnectCARoots_list.server: member joined, marking health alive: member=Node-d163d017-464f-834f-8c0b-17ceef2e171a
>     writer.go:29: 2020-02-23T02:46:56.636Z [DEBUG] TestAgentConnectCARoots_list: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.686Z [INFO]  TestAgentConnectCARoots_list: Synced node info
>     writer.go:29: 2020-02-23T02:46:56.686Z [DEBUG] TestAgentConnectCARoots_list: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.882Z [DEBUG] connect.ca.consul: consul CA provider configured: id=38:4f:9e:fa:1b:c0:70:0a:08:72:d3:8b:2a:b0:74:cd:7c:37:7e:63:b3:f2:92:28:2c:78:49:96:54:a8:e2:b8 is_primary=true
>     writer.go:29: 2020-02-23T02:46:56.897Z [INFO]  TestAgentConnectCARoots_list.server.connect: CA rotated to new root under provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:56.900Z [DEBUG] connect.ca.consul: consul CA provider configured: id=65:36:0b:ff:b4:f3:5c:ee:ea:8f:f9:a9:6f:6f:7a:6c:e1:4c:ae:81:e7:d1:86:7d:33:e3:85:c5:ae:9b:79:72 is_primary=true
>     writer.go:29: 2020-02-23T02:46:56.927Z [INFO]  TestAgentConnectCARoots_list.server.connect: CA rotated to new root under provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:56.927Z [INFO]  TestAgentConnectCARoots_list: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:56.927Z [INFO]  TestAgentConnectCARoots_list.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:56.927Z [DEBUG] TestAgentConnectCARoots_list.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.927Z [WARN]  TestAgentConnectCARoots_list.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.927Z [DEBUG] TestAgentConnectCARoots_list.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.932Z [WARN]  TestAgentConnectCARoots_list.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:56.938Z [INFO]  TestAgentConnectCARoots_list.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:56.938Z [INFO]  TestAgentConnectCARoots_list: consul server down
>     writer.go:29: 2020-02-23T02:46:56.938Z [INFO]  TestAgentConnectCARoots_list: shutdown complete
>     writer.go:29: 2020-02-23T02:46:56.938Z [INFO]  TestAgentConnectCARoots_list: Stopping server: protocol=DNS address=127.0.0.1:16481 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.938Z [INFO]  TestAgentConnectCARoots_list: Stopping server: protocol=DNS address=127.0.0.1:16481 network=udp
>     writer.go:29: 2020-02-23T02:46:56.939Z [INFO]  TestAgentConnectCARoots_list: Stopping server: protocol=HTTP address=127.0.0.1:16482 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.939Z [INFO]  TestAgentConnectCARoots_list: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:56.939Z [INFO]  TestAgentConnectCARoots_list: Endpoints down
> === CONT  TestAgent_TokenTriggersFullSync
> === RUN   TestAgent_TokenTriggersFullSync/acl_agent_token
> === RUN   TestAgent_TokenTriggersFullSync/agent
> === RUN   TestAgent_Token/bad_token_name
> === RUN   TestAgent_Token/bad_JSON
> === RUN   TestAgent_Token/set_user_legacy
> === RUN   TestAgent_Token/set_default
> === RUN   TestAgent_Token/set_agent_legacy
> === RUN   TestAgent_Token/set_agent
> === RUN   TestAgent_Token/set_master_legacy
> === RUN   TestAgent_Token/set_master_
> === RUN   TestAgent_Token/set_repl_legacy
> === RUN   TestAgent_Token/set_repl
> === RUN   TestAgent_Token/clear_user_legacy
> === RUN   TestAgent_Token/clear_default
> === RUN   TestAgent_Token/clear_agent_legacy
> === RUN   TestAgent_Token/clear_agent
> === RUN   TestAgent_Token/clear_master_legacy
> === RUN   TestAgent_Token/clear_master
> === RUN   TestAgent_Token/clear_repl_legacy
> === RUN   TestAgent_Token/clear_repl
> === RUN   TestAgent_Token/permission_denied
> --- PASS: TestAgent_Token (0.45s)
>     writer.go:29: 2020-02-23T02:46:56.895Z [WARN]  TestAgent_Token: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:56.895Z [WARN]  TestAgent_Token: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:56.895Z [DEBUG] TestAgent_Token.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:56.896Z [DEBUG] TestAgent_Token.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:56.933Z [INFO]  TestAgent_Token.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d2d5dfbc-ba40-8865-ffdd-ba952c641eb5 Address:127.0.0.1:16492}]"
>     writer.go:29: 2020-02-23T02:46:56.933Z [INFO]  TestAgent_Token.server.serf.wan: serf: EventMemberJoin: Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.934Z [INFO]  TestAgent_Token.server.serf.lan: serf: EventMemberJoin: Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.934Z [INFO]  TestAgent_Token: Started DNS server: address=127.0.0.1:16487 network=udp
>     writer.go:29: 2020-02-23T02:46:56.934Z [INFO]  TestAgent_Token.server.raft: entering follower state: follower="Node at 127.0.0.1:16492 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:56.934Z [INFO]  TestAgent_Token.server: Adding LAN server: server="Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5 (Addr: tcp/127.0.0.1:16492) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:56.934Z [INFO]  TestAgent_Token.server: Handled event for server in area: event=member-join server=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.934Z [INFO]  TestAgent_Token: Started DNS server: address=127.0.0.1:16487 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.935Z [INFO]  TestAgent_Token: Started HTTP server: address=127.0.0.1:16488 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.936Z [INFO]  TestAgent_Token: started state syncer
>     writer.go:29: 2020-02-23T02:46:56.977Z [WARN]  TestAgent_Token.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:56.977Z [INFO]  TestAgent_Token.server.raft: entering candidate state: node="Node at 127.0.0.1:16492 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:56.980Z [DEBUG] TestAgent_Token.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:56.980Z [DEBUG] TestAgent_Token.server.raft: vote granted: from=d2d5dfbc-ba40-8865-ffdd-ba952c641eb5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:56.980Z [INFO]  TestAgent_Token.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:56.980Z [INFO]  TestAgent_Token.server.raft: entering leader state: leader="Node at 127.0.0.1:16492 [Leader]"
>     writer.go:29: 2020-02-23T02:46:56.980Z [INFO]  TestAgent_Token.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:56.980Z [INFO]  TestAgent_Token.server: New leader elected: payload=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:56.983Z [INFO]  TestAgent_Token.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:56.984Z [INFO]  TestAgent_Token.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:56.984Z [WARN]  TestAgent_Token.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:56.984Z [INFO]  TestAgent_Token.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:56.984Z [WARN]  TestAgent_Token.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:56.990Z [INFO]  TestAgent_Token.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:56.990Z [INFO]  TestAgent_Token.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:56.992Z [INFO]  TestAgent_Token.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:56.992Z [INFO]  TestAgent_Token.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:56.992Z [INFO]  TestAgent_Token.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server.serf.lan: serf: EventMemberUpdate: Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server.serf.wan: serf: EventMemberUpdate: Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5.dc1
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:56.993Z [DEBUG] TestAgent_Token.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server.serf.lan: serf: EventMemberUpdate: Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server.serf.wan: serf: EventMemberUpdate: Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5.dc1
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server: Handled event for server in area: event=member-update server=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.993Z [INFO]  TestAgent_Token.server: Handled event for server in area: event=member-update server=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.996Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:57.003Z [INFO]  TestAgent_Token.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:57.003Z [INFO]  TestAgent_Token.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.003Z [DEBUG] TestAgent_Token.server: Skipping self join check for node since the cluster is too small: node=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:57.003Z [INFO]  TestAgent_Token.server: member joined, marking health alive: member=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:57.005Z [DEBUG] TestAgent_Token.server: Skipping self join check for node since the cluster is too small: node=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:57.005Z [DEBUG] TestAgent_Token.server: Skipping self join check for node since the cluster is too small: node=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:57.228Z [DEBUG] TestAgent_Token.acl: dropping check from result due to ACLs: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:57.228Z [WARN]  TestAgent_Token: Node info update blocked by ACLs: node=d2d5dfbc-ba40-8865-ffdd-ba952c641eb5 accessorID=
>     writer.go:29: 2020-02-23T02:46:57.228Z [DEBUG] TestAgent_Token: Node info in sync
>     writer.go:29: 2020-02-23T02:46:57.329Z [DEBUG] TestAgent_Token.acl: dropping node from result due to ACLs: node=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     writer.go:29: 2020-02-23T02:46:57.329Z [DEBUG] TestAgent_Token.acl: dropping node from result due to ACLs: node=Node-d2d5dfbc-ba40-8865-ffdd-ba952c641eb5
>     --- PASS: TestAgent_Token/bad_token_name (0.00s)
>     --- PASS: TestAgent_Token/bad_JSON (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_token
>     --- PASS: TestAgent_Token/set_user_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=default
>     --- PASS: TestAgent_Token/set_default (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_agent_token
>     --- PASS: TestAgent_Token/set_agent_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=agent
>     --- PASS: TestAgent_Token/set_agent (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_agent_master_token
>     --- PASS: TestAgent_Token/set_master_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=agent_master
>     --- PASS: TestAgent_Token/set_master_ (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_replication_token
>     --- PASS: TestAgent_Token/set_repl_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.330Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=replication
>     --- PASS: TestAgent_Token/set_repl (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_token
>     --- PASS: TestAgent_Token/clear_user_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=default
>     --- PASS: TestAgent_Token/clear_default (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_agent_token
>     --- PASS: TestAgent_Token/clear_agent_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=agent
>     --- PASS: TestAgent_Token/clear_agent (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_agent_master_token
>     --- PASS: TestAgent_Token/clear_master_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=agent_master
>     --- PASS: TestAgent_Token/clear_master (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=acl_replication_token
>     --- PASS: TestAgent_Token/clear_repl_legacy (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.331Z [INFO]  TestAgent_Token: Updated agent's ACL token: token=replication
>     --- PASS: TestAgent_Token/clear_repl (0.00s)
>     --- PASS: TestAgent_Token/permission_denied (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.332Z [INFO]  TestAgent_Token: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:57.332Z [INFO]  TestAgent_Token.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:57.332Z [DEBUG] TestAgent_Token.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:57.332Z [DEBUG] TestAgent_Token.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:57.332Z [DEBUG] TestAgent_Token.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.332Z [WARN]  TestAgent_Token.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:57.332Z [DEBUG] TestAgent_Token.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:57.332Z [DEBUG] TestAgent_Token.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:57.332Z [DEBUG] TestAgent_Token.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.333Z [WARN]  TestAgent_Token.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:57.335Z [INFO]  TestAgent_Token.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:57.335Z [INFO]  TestAgent_Token: consul server down
>     writer.go:29: 2020-02-23T02:46:57.335Z [INFO]  TestAgent_Token: shutdown complete
>     writer.go:29: 2020-02-23T02:46:57.335Z [INFO]  TestAgent_Token: Stopping server: protocol=DNS address=127.0.0.1:16487 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.336Z [INFO]  TestAgent_Token: Stopping server: protocol=DNS address=127.0.0.1:16487 network=udp
>     writer.go:29: 2020-02-23T02:46:57.336Z [INFO]  TestAgent_Token: Stopping server: protocol=HTTP address=127.0.0.1:16488 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.336Z [INFO]  TestAgent_Token: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:57.336Z [INFO]  TestAgent_Token: Endpoints down
> === CONT  TestAgent_Monitor_ACLDeny
> --- PASS: TestAgent_Monitor_ACLDeny (0.46s)
>     writer.go:29: 2020-02-23T02:46:57.345Z [WARN]  TestAgent_Monitor_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:57.345Z [WARN]  TestAgent_Monitor_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:57.347Z [DEBUG] TestAgent_Monitor_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:57.347Z [DEBUG] TestAgent_Monitor_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:57.362Z [INFO]  TestAgent_Monitor_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b1c71a2e-77e0-0890-2199-bf8faf14a819 Address:127.0.0.1:16516}]"
>     writer.go:29: 2020-02-23T02:46:57.362Z [INFO]  TestAgent_Monitor_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-b1c71a2e-77e0-0890-2199-bf8faf14a819.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:57.363Z [INFO]  TestAgent_Monitor_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-b1c71a2e-77e0-0890-2199-bf8faf14a819 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:57.363Z [INFO]  TestAgent_Monitor_ACLDeny: Started DNS server: address=127.0.0.1:16511 network=udp
>     writer.go:29: 2020-02-23T02:46:57.363Z [INFO]  TestAgent_Monitor_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16516 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:57.363Z [INFO]  TestAgent_Monitor_ACLDeny.server: Adding LAN server: server="Node-b1c71a2e-77e0-0890-2199-bf8faf14a819 (Addr: tcp/127.0.0.1:16516) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:57.363Z [INFO]  TestAgent_Monitor_ACLDeny.server: Handled event for server in area: event=member-join server=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:57.363Z [INFO]  TestAgent_Monitor_ACLDeny: Started DNS server: address=127.0.0.1:16511 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.364Z [INFO]  TestAgent_Monitor_ACLDeny: Started HTTP server: address=127.0.0.1:16512 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.364Z [INFO]  TestAgent_Monitor_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:57.429Z [WARN]  TestAgent_Monitor_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:57.429Z [INFO]  TestAgent_Monitor_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16516 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:57.433Z [DEBUG] TestAgent_Monitor_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:57.433Z [DEBUG] TestAgent_Monitor_ACLDeny.server.raft: vote granted: from=b1c71a2e-77e0-0890-2199-bf8faf14a819 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:57.433Z [INFO]  TestAgent_Monitor_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:57.433Z [INFO]  TestAgent_Monitor_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16516 [Leader]"
>     writer.go:29: 2020-02-23T02:46:57.433Z [INFO]  TestAgent_Monitor_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:57.433Z [INFO]  TestAgent_Monitor_ACLDeny.server: New leader elected: payload=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.435Z [INFO]  TestAgent_Monitor_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:57.436Z [INFO]  TestAgent_Monitor_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:57.437Z [WARN]  TestAgent_Monitor_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:57.439Z [INFO]  TestAgent_Monitor_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:57.443Z [INFO]  TestAgent_Monitor_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:57.443Z [INFO]  TestAgent_Monitor_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:57.443Z [INFO]  TestAgent_Monitor_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:57.443Z [INFO]  TestAgent_Monitor_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.443Z [INFO]  TestAgent_Monitor_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-b1c71a2e-77e0-0890-2199-bf8faf14a819.dc1
>     writer.go:29: 2020-02-23T02:46:57.443Z [INFO]  TestAgent_Monitor_ACLDeny.server: Handled event for server in area: event=member-update server=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:57.448Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:57.455Z [INFO]  TestAgent_Monitor_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:57.455Z [INFO]  TestAgent_Monitor_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.455Z [DEBUG] TestAgent_Monitor_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.455Z [INFO]  TestAgent_Monitor_ACLDeny.server: member joined, marking health alive: member=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.458Z [DEBUG] TestAgent_Monitor_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.743Z [DEBUG] TestAgent_Monitor_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:57.746Z [INFO]  TestAgent_Monitor_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:46:57.746Z [DEBUG] TestAgent_Monitor_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.acl: dropping node from result due to ACLs: node=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.acl: dropping node from result due to ACLs: node=Node-b1c71a2e-77e0-0890-2199-bf8faf14a819
>     writer.go:29: 2020-02-23T02:46:57.793Z [INFO]  TestAgent_Monitor_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:57.793Z [INFO]  TestAgent_Monitor_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.793Z [WARN]  TestAgent_Monitor_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:57.793Z [DEBUG] TestAgent_Monitor_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.795Z [WARN]  TestAgent_Monitor_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16511 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16511 network=udp
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16512 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:57.797Z [INFO]  TestAgent_Monitor_ACLDeny: Endpoints down
> === CONT  TestAgent_Monitor
> === RUN   TestAgent_Monitor/unknown_log_level
> === RUN   TestAgent_Monitor/stream_unstructured_logs
> === RUN   TestAgent_Monitor/stream_JSON_logs
> === RUN   TestAgent_Monitor/serf_shutdown_logging
> --- PASS: TestAgent_Monitor (0.47s)
>     writer.go:29: 2020-02-23T02:46:57.805Z [WARN]  TestAgent_Monitor: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:57.805Z [DEBUG] TestAgent_Monitor.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:57.806Z [DEBUG] TestAgent_Monitor.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:57.816Z [INFO]  TestAgent_Monitor.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:90cfbdef-eecc-2c5b-dd45-8845a66d1de1 Address:127.0.0.1:16528}]"
>     writer.go:29: 2020-02-23T02:46:57.816Z [INFO]  TestAgent_Monitor.server.raft: entering follower state: follower="Node at 127.0.0.1:16528 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:57.816Z [INFO]  TestAgent_Monitor.server.serf.wan: serf: EventMemberJoin: Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor.server.serf.lan: serf: EventMemberJoin: Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor.server: Adding LAN server: server="Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1 (Addr: tcp/127.0.0.1:16528) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor.server: Handled event for server in area: event=member-join server=Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor: Started DNS server: address=127.0.0.1:16523 network=udp
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor: Started DNS server: address=127.0.0.1:16523 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor: Started HTTP server: address=127.0.0.1:16524 network=tcp
>     writer.go:29: 2020-02-23T02:46:57.817Z [INFO]  TestAgent_Monitor: started state syncer
>     writer.go:29: 2020-02-23T02:46:57.880Z [WARN]  TestAgent_Monitor.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:57.880Z [INFO]  TestAgent_Monitor.server.raft: entering candidate state: node="Node at 127.0.0.1:16528 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:57.883Z [DEBUG] TestAgent_Monitor.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:57.883Z [DEBUG] TestAgent_Monitor.server.raft: vote granted: from=90cfbdef-eecc-2c5b-dd45-8845a66d1de1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:57.883Z [INFO]  TestAgent_Monitor.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:57.883Z [INFO]  TestAgent_Monitor.server.raft: entering leader state: leader="Node at 127.0.0.1:16528 [Leader]"
>     writer.go:29: 2020-02-23T02:46:57.883Z [INFO]  TestAgent_Monitor.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:57.884Z [INFO]  TestAgent_Monitor.server: New leader elected: payload=Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1
>     writer.go:29: 2020-02-23T02:46:57.891Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:57.899Z [INFO]  TestAgent_Monitor.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:57.899Z [INFO]  TestAgent_Monitor.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:57.899Z [DEBUG] TestAgent_Monitor.server: Skipping self join check for node since the cluster is too small: node=Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1
>     writer.go:29: 2020-02-23T02:46:57.899Z [INFO]  TestAgent_Monitor.server: member joined, marking health alive: member=Node-90cfbdef-eecc-2c5b-dd45-8845a66d1de1
>     --- PASS: TestAgent_Monitor/unknown_log_level (0.00s)
>     writer.go:29: 2020-02-23T02:46:57.954Z [INFO]  TestAgent_Monitor: Synced node info
>     writer.go:29: 2020-02-23T02:46:57.955Z [INFO]  TestAgent_Monitor: Synced service: service=monitor
>     writer.go:29: 2020-02-23T02:46:57.955Z [DEBUG] TestAgent_Monitor: Check in sync: check=service:monitor
>     --- PASS: TestAgent_Monitor/stream_unstructured_logs (0.11s)
>     writer.go:29: 2020-02-23T02:46:58.060Z [DEBUG] TestAgent_Monitor: Node info in sync
>     writer.go:29: 2020-02-23T02:46:58.063Z [INFO]  TestAgent_Monitor: Synced service: service=monitor
>     writer.go:29: 2020-02-23T02:46:58.063Z [DEBUG] TestAgent_Monitor: Check in sync: check=service:monitor
>     --- PASS: TestAgent_Monitor/stream_JSON_logs (0.11s)
>     writer.go:29: 2020-02-23T02:46:58.163Z [INFO]  TestAgent_Monitor: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:58.163Z [INFO]  TestAgent_Monitor.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:58.163Z [DEBUG] TestAgent_Monitor.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.163Z [WARN]  TestAgent_Monitor.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:58.164Z [ERROR] TestAgent_Monitor.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:58.164Z [DEBUG] TestAgent_Monitor.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.165Z [WARN]  TestAgent_Monitor.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: consul server down
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: shutdown complete
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: Stopping server: protocol=DNS address=127.0.0.1:16523 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: Stopping server: protocol=DNS address=127.0.0.1:16523 network=udp
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: Stopping server: protocol=HTTP address=127.0.0.1:16524 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:58.167Z [INFO]  TestAgent_Monitor: Endpoints down
>     --- PASS: TestAgent_Monitor/serf_shutdown_logging (0.10s)
> === CONT  TestAgent_RegisterCheck_Service
> --- PASS: TestAgent_RegisterCheck_Service (0.39s)
>     writer.go:29: 2020-02-23T02:46:58.273Z [WARN]  TestAgent_RegisterCheck_Service: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:58.273Z [DEBUG] TestAgent_RegisterCheck_Service.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:58.273Z [DEBUG] TestAgent_RegisterCheck_Service.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:58.282Z [INFO]  TestAgent_RegisterCheck_Service.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:56edea5e-cd50-64de-680a-c4131f09c526 Address:127.0.0.1:16534}]"
>     writer.go:29: 2020-02-23T02:46:58.282Z [INFO]  TestAgent_RegisterCheck_Service.server.raft: entering follower state: follower="Node at 127.0.0.1:16534 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:58.282Z [INFO]  TestAgent_RegisterCheck_Service.server.serf.wan: serf: EventMemberJoin: Node-56edea5e-cd50-64de-680a-c4131f09c526.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:58.282Z [INFO]  TestAgent_RegisterCheck_Service.server.serf.lan: serf: EventMemberJoin: Node-56edea5e-cd50-64de-680a-c4131f09c526 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:58.282Z [INFO]  TestAgent_RegisterCheck_Service.server: Handled event for server in area: event=member-join server=Node-56edea5e-cd50-64de-680a-c4131f09c526.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:58.283Z [INFO]  TestAgent_RegisterCheck_Service.server: Adding LAN server: server="Node-56edea5e-cd50-64de-680a-c4131f09c526 (Addr: tcp/127.0.0.1:16534) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:58.283Z [INFO]  TestAgent_RegisterCheck_Service: Started DNS server: address=127.0.0.1:16529 network=udp
>     writer.go:29: 2020-02-23T02:46:58.283Z [INFO]  TestAgent_RegisterCheck_Service: Started DNS server: address=127.0.0.1:16529 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.283Z [INFO]  TestAgent_RegisterCheck_Service: Started HTTP server: address=127.0.0.1:16530 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.283Z [INFO]  TestAgent_RegisterCheck_Service: started state syncer
>     writer.go:29: 2020-02-23T02:46:58.322Z [WARN]  TestAgent_RegisterCheck_Service.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:58.322Z [INFO]  TestAgent_RegisterCheck_Service.server.raft: entering candidate state: node="Node at 127.0.0.1:16534 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:58.326Z [DEBUG] TestAgent_RegisterCheck_Service.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:58.326Z [DEBUG] TestAgent_RegisterCheck_Service.server.raft: vote granted: from=56edea5e-cd50-64de-680a-c4131f09c526 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:58.326Z [INFO]  TestAgent_RegisterCheck_Service.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:58.326Z [INFO]  TestAgent_RegisterCheck_Service.server.raft: entering leader state: leader="Node at 127.0.0.1:16534 [Leader]"
>     writer.go:29: 2020-02-23T02:46:58.326Z [INFO]  TestAgent_RegisterCheck_Service.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:58.326Z [INFO]  TestAgent_RegisterCheck_Service.server: New leader elected: payload=Node-56edea5e-cd50-64de-680a-c4131f09c526
>     writer.go:29: 2020-02-23T02:46:58.333Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:58.342Z [INFO]  TestAgent_RegisterCheck_Service.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:58.342Z [INFO]  TestAgent_RegisterCheck_Service.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.342Z [DEBUG] TestAgent_RegisterCheck_Service.server: Skipping self join check for node since the cluster is too small: node=Node-56edea5e-cd50-64de-680a-c4131f09c526
>     writer.go:29: 2020-02-23T02:46:58.342Z [INFO]  TestAgent_RegisterCheck_Service.server: member joined, marking health alive: member=Node-56edea5e-cd50-64de-680a-c4131f09c526
>     writer.go:29: 2020-02-23T02:46:58.495Z [DEBUG] TestAgent_RegisterCheck_Service: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:58.498Z [INFO]  TestAgent_RegisterCheck_Service: Synced node info
>     writer.go:29: 2020-02-23T02:46:58.648Z [DEBUG] TestAgent_RegisterCheck_Service: Node info in sync
>     writer.go:29: 2020-02-23T02:46:58.649Z [INFO]  TestAgent_RegisterCheck_Service: Synced service: service=memcache
>     writer.go:29: 2020-02-23T02:46:58.649Z [DEBUG] TestAgent_RegisterCheck_Service: Check in sync: check=service:memcache
>     writer.go:29: 2020-02-23T02:46:58.651Z [DEBUG] TestAgent_RegisterCheck_Service: Node info in sync
>     writer.go:29: 2020-02-23T02:46:58.651Z [DEBUG] TestAgent_RegisterCheck_Service: Service in sync: service=memcache
>     writer.go:29: 2020-02-23T02:46:58.651Z [DEBUG] TestAgent_RegisterCheck_Service: Check in sync: check=service:memcache
>     writer.go:29: 2020-02-23T02:46:58.654Z [INFO]  TestAgent_RegisterCheck_Service: Synced check: check=memcache_check2
>     writer.go:29: 2020-02-23T02:46:58.654Z [INFO]  TestAgent_RegisterCheck_Service: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:58.654Z [INFO]  TestAgent_RegisterCheck_Service.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:58.654Z [DEBUG] TestAgent_RegisterCheck_Service.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.654Z [WARN]  TestAgent_RegisterCheck_Service.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:58.654Z [DEBUG] TestAgent_RegisterCheck_Service.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.656Z [WARN]  TestAgent_RegisterCheck_Service.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: consul server down
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: shutdown complete
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: Stopping server: protocol=DNS address=127.0.0.1:16529 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: Stopping server: protocol=DNS address=127.0.0.1:16529 network=udp
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: Stopping server: protocol=HTTP address=127.0.0.1:16530 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:58.658Z [INFO]  TestAgent_RegisterCheck_Service: Endpoints down
> === CONT  TestAgent_NodeMaintenance_ACLDeny
> === RUN   TestAgent_TokenTriggersFullSync/acl_token
> === RUN   TestAgent_NodeMaintenance_ACLDeny/no_token
> === RUN   TestAgent_NodeMaintenance_ACLDeny/root_token
> --- PASS: TestAgent_NodeMaintenance_ACLDeny (0.25s)
>     writer.go:29: 2020-02-23T02:46:58.666Z [WARN]  TestAgent_NodeMaintenance_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:46:58.666Z [WARN]  TestAgent_NodeMaintenance_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:58.666Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:58.666Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:58.675Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0a70a4c1-e756-4c71-2a27-b0b0917ca3f6 Address:127.0.0.1:16540}]"
>     writer.go:29: 2020-02-23T02:46:58.675Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16540 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:58.676Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:58.676Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:58.676Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: Adding LAN server: server="Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6 (Addr: tcp/127.0.0.1:16540) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:58.677Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Started DNS server: address=127.0.0.1:16535 network=udp
>     writer.go:29: 2020-02-23T02:46:58.677Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: Handled event for server in area: event=member-join server=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:58.677Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Started DNS server: address=127.0.0.1:16535 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.677Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Started HTTP server: address=127.0.0.1:16536 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.677Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:46:58.738Z [WARN]  TestAgent_NodeMaintenance_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:58.738Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16540 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:58.742Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:58.742Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.server.raft: vote granted: from=0a70a4c1-e756-4c71-2a27-b0b0917ca3f6 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:58.742Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:58.742Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16540 [Leader]"
>     writer.go:29: 2020-02-23T02:46:58.745Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:58.745Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: New leader elected: payload=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     writer.go:29: 2020-02-23T02:46:58.746Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:46:58.748Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:46:58.748Z [WARN]  TestAgent_NodeMaintenance_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:46:58.755Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:46:58.755Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:58.755Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:58.755Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     writer.go:29: 2020-02-23T02:46:58.756Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6.dc1
>     writer.go:29: 2020-02-23T02:46:58.760Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: Handled event for server in area: event=member-update server=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:58.761Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:58.773Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:58.773Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.773Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     writer.go:29: 2020-02-23T02:46:58.773Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: member joined, marking health alive: member=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     writer.go:29: 2020-02-23T02:46:58.777Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     writer.go:29: 2020-02-23T02:46:58.796Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:58.800Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:46:58.902Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.acl: dropping node from result due to ACLs: node=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     writer.go:29: 2020-02-23T02:46:58.902Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.acl: dropping node from result due to ACLs: node=Node-0a70a4c1-e756-4c71-2a27-b0b0917ca3f6
>     --- PASS: TestAgent_NodeMaintenance_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:46:58.905Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Node entered maintenance mode
>     writer.go:29: 2020-02-23T02:46:58.905Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:46:58.908Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Synced check: check=_node_maintenance
>     --- PASS: TestAgent_NodeMaintenance_ACLDeny/root_token (0.01s)
>     writer.go:29: 2020-02-23T02:46:58.908Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:58.908Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:58.908Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:58.908Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:58.908Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.908Z [WARN]  TestAgent_NodeMaintenance_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:58.908Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:46:58.908Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:46:58.908Z [DEBUG] TestAgent_NodeMaintenance_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:58.910Z [WARN]  TestAgent_NodeMaintenance_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:58.911Z [INFO]  TestAgent_NodeMaintenance_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:58.911Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:46:58.911Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:46:58.911Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16535 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.911Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16535 network=udp
>     writer.go:29: 2020-02-23T02:46:58.912Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16536 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.912Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:58.912Z [INFO]  TestAgent_NodeMaintenance_ACLDeny: Endpoints down
> === CONT  TestAgent_NodeMaintenance_Disable
> --- PASS: TestAgent_NodeMaintenance_Disable (0.36s)
>     writer.go:29: 2020-02-23T02:46:58.919Z [WARN]  TestAgent_NodeMaintenance_Disable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:58.919Z [DEBUG] TestAgent_NodeMaintenance_Disable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:58.920Z [DEBUG] TestAgent_NodeMaintenance_Disable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:58.934Z [INFO]  TestAgent_NodeMaintenance_Disable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3ec7648c-1ad3-ef86-b380-89a8fd8be481 Address:127.0.0.1:16552}]"
>     writer.go:29: 2020-02-23T02:46:58.935Z [INFO]  TestAgent_NodeMaintenance_Disable.server.serf.wan: serf: EventMemberJoin: Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:58.935Z [INFO]  TestAgent_NodeMaintenance_Disable.server.raft: entering follower state: follower="Node at 127.0.0.1:16552 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:58.935Z [INFO]  TestAgent_NodeMaintenance_Disable.server.serf.lan: serf: EventMemberJoin: Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:58.936Z [INFO]  TestAgent_NodeMaintenance_Disable: Started DNS server: address=127.0.0.1:16547 network=udp
>     writer.go:29: 2020-02-23T02:46:58.936Z [INFO]  TestAgent_NodeMaintenance_Disable.server: Adding LAN server: server="Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481 (Addr: tcp/127.0.0.1:16552) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:58.936Z [INFO]  TestAgent_NodeMaintenance_Disable.server: Handled event for server in area: event=member-join server=Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:58.936Z [INFO]  TestAgent_NodeMaintenance_Disable: Started DNS server: address=127.0.0.1:16547 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.936Z [INFO]  TestAgent_NodeMaintenance_Disable: Started HTTP server: address=127.0.0.1:16548 network=tcp
>     writer.go:29: 2020-02-23T02:46:58.936Z [INFO]  TestAgent_NodeMaintenance_Disable: started state syncer
>     writer.go:29: 2020-02-23T02:46:58.986Z [WARN]  TestAgent_NodeMaintenance_Disable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:58.986Z [INFO]  TestAgent_NodeMaintenance_Disable.server.raft: entering candidate state: node="Node at 127.0.0.1:16552 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:59.011Z [DEBUG] TestAgent_NodeMaintenance_Disable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:59.011Z [DEBUG] TestAgent_NodeMaintenance_Disable.server.raft: vote granted: from=3ec7648c-1ad3-ef86-b380-89a8fd8be481 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:59.011Z [INFO]  TestAgent_NodeMaintenance_Disable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:59.011Z [INFO]  TestAgent_NodeMaintenance_Disable.server.raft: entering leader state: leader="Node at 127.0.0.1:16552 [Leader]"
>     writer.go:29: 2020-02-23T02:46:59.011Z [INFO]  TestAgent_NodeMaintenance_Disable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:59.011Z [INFO]  TestAgent_NodeMaintenance_Disable.server: New leader elected: payload=Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481
>     writer.go:29: 2020-02-23T02:46:59.023Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:59.032Z [INFO]  TestAgent_NodeMaintenance_Disable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:59.032Z [INFO]  TestAgent_NodeMaintenance_Disable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.032Z [DEBUG] TestAgent_NodeMaintenance_Disable.server: Skipping self join check for node since the cluster is too small: node=Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481
>     writer.go:29: 2020-02-23T02:46:59.032Z [INFO]  TestAgent_NodeMaintenance_Disable.server: member joined, marking health alive: member=Node-3ec7648c-1ad3-ef86-b380-89a8fd8be481
>     writer.go:29: 2020-02-23T02:46:59.170Z [DEBUG] TestAgent_NodeMaintenance_Disable: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:59.173Z [INFO]  TestAgent_NodeMaintenance_Disable: Synced node info
>     writer.go:29: 2020-02-23T02:46:59.268Z [INFO]  TestAgent_NodeMaintenance_Disable: Node entered maintenance mode
>     writer.go:29: 2020-02-23T02:46:59.268Z [DEBUG] TestAgent_NodeMaintenance_Disable: removed check: check=_node_maintenance
>     writer.go:29: 2020-02-23T02:46:59.268Z [INFO]  TestAgent_NodeMaintenance_Disable: Node left maintenance mode
>     writer.go:29: 2020-02-23T02:46:59.268Z [DEBUG] TestAgent_NodeMaintenance_Disable: Node info in sync
>     writer.go:29: 2020-02-23T02:46:59.270Z [INFO]  TestAgent_NodeMaintenance_Disable: Deregistered check: check=_node_maintenance
>     writer.go:29: 2020-02-23T02:46:59.270Z [INFO]  TestAgent_NodeMaintenance_Disable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:59.270Z [INFO]  TestAgent_NodeMaintenance_Disable.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:59.270Z [DEBUG] TestAgent_NodeMaintenance_Disable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.270Z [WARN]  TestAgent_NodeMaintenance_Disable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:59.270Z [DEBUG] TestAgent_NodeMaintenance_Disable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.272Z [WARN]  TestAgent_NodeMaintenance_Disable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:59.273Z [INFO]  TestAgent_NodeMaintenance_Disable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:59.273Z [INFO]  TestAgent_NodeMaintenance_Disable: consul server down
>     writer.go:29: 2020-02-23T02:46:59.273Z [INFO]  TestAgent_NodeMaintenance_Disable: shutdown complete
>     writer.go:29: 2020-02-23T02:46:59.273Z [INFO]  TestAgent_NodeMaintenance_Disable: Stopping server: protocol=DNS address=127.0.0.1:16547 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.273Z [INFO]  TestAgent_NodeMaintenance_Disable: Stopping server: protocol=DNS address=127.0.0.1:16547 network=udp
>     writer.go:29: 2020-02-23T02:46:59.274Z [INFO]  TestAgent_NodeMaintenance_Disable: Stopping server: protocol=HTTP address=127.0.0.1:16548 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.274Z [INFO]  TestAgent_NodeMaintenance_Disable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:59.274Z [INFO]  TestAgent_NodeMaintenance_Disable: Endpoints down
> === CONT  TestAgent_NodeMaintenance_Enable
> === RUN   TestAgent_TokenTriggersFullSync/default
> --- PASS: TestAgent_NodeMaintenance_Enable (0.38s)
>     writer.go:29: 2020-02-23T02:46:59.281Z [WARN]  TestAgent_NodeMaintenance_Enable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:59.281Z [DEBUG] TestAgent_NodeMaintenance_Enable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:59.282Z [DEBUG] TestAgent_NodeMaintenance_Enable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:59.291Z [INFO]  TestAgent_NodeMaintenance_Enable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a92375c2-87b6-f65a-a398-2c3f08ac61a1 Address:127.0.0.1:16546}]"
>     writer.go:29: 2020-02-23T02:46:59.292Z [INFO]  TestAgent_NodeMaintenance_Enable.server.serf.wan: serf: EventMemberJoin: Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:59.292Z [INFO]  TestAgent_NodeMaintenance_Enable.server.serf.lan: serf: EventMemberJoin: Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:59.292Z [INFO]  TestAgent_NodeMaintenance_Enable: Started DNS server: address=127.0.0.1:16541 network=udp
>     writer.go:29: 2020-02-23T02:46:59.292Z [INFO]  TestAgent_NodeMaintenance_Enable.server.raft: entering follower state: follower="Node at 127.0.0.1:16546 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:59.292Z [INFO]  TestAgent_NodeMaintenance_Enable.server: Adding LAN server: server="Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1 (Addr: tcp/127.0.0.1:16546) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:59.293Z [INFO]  TestAgent_NodeMaintenance_Enable.server: Handled event for server in area: event=member-join server=Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:59.293Z [INFO]  TestAgent_NodeMaintenance_Enable: Started DNS server: address=127.0.0.1:16541 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.293Z [INFO]  TestAgent_NodeMaintenance_Enable: Started HTTP server: address=127.0.0.1:16542 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.293Z [INFO]  TestAgent_NodeMaintenance_Enable: started state syncer
>     writer.go:29: 2020-02-23T02:46:59.328Z [WARN]  TestAgent_NodeMaintenance_Enable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:59.328Z [INFO]  TestAgent_NodeMaintenance_Enable.server.raft: entering candidate state: node="Node at 127.0.0.1:16546 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:59.331Z [DEBUG] TestAgent_NodeMaintenance_Enable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:59.331Z [DEBUG] TestAgent_NodeMaintenance_Enable.server.raft: vote granted: from=a92375c2-87b6-f65a-a398-2c3f08ac61a1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:59.331Z [INFO]  TestAgent_NodeMaintenance_Enable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:59.331Z [INFO]  TestAgent_NodeMaintenance_Enable.server.raft: entering leader state: leader="Node at 127.0.0.1:16546 [Leader]"
>     writer.go:29: 2020-02-23T02:46:59.331Z [INFO]  TestAgent_NodeMaintenance_Enable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:59.331Z [INFO]  TestAgent_NodeMaintenance_Enable.server: New leader elected: payload=Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1
>     writer.go:29: 2020-02-23T02:46:59.343Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:59.351Z [INFO]  TestAgent_NodeMaintenance_Enable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:59.351Z [INFO]  TestAgent_NodeMaintenance_Enable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.351Z [DEBUG] TestAgent_NodeMaintenance_Enable.server: Skipping self join check for node since the cluster is too small: node=Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1
>     writer.go:29: 2020-02-23T02:46:59.351Z [INFO]  TestAgent_NodeMaintenance_Enable.server: member joined, marking health alive: member=Node-a92375c2-87b6-f65a-a398-2c3f08ac61a1
>     writer.go:29: 2020-02-23T02:46:59.552Z [INFO]  TestAgent_NodeMaintenance_Enable: Node entered maintenance mode
>     writer.go:29: 2020-02-23T02:46:59.598Z [INFO]  TestAgent_NodeMaintenance_Enable: Synced node info
>     writer.go:29: 2020-02-23T02:46:59.622Z [INFO]  TestAgent_NodeMaintenance_Enable: Synced check: check=_node_maintenance
>     writer.go:29: 2020-02-23T02:46:59.622Z [INFO]  TestAgent_NodeMaintenance_Enable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:59.622Z [INFO]  TestAgent_NodeMaintenance_Enable.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:59.622Z [DEBUG] TestAgent_NodeMaintenance_Enable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.622Z [WARN]  TestAgent_NodeMaintenance_Enable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:59.623Z [ERROR] TestAgent_NodeMaintenance_Enable.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:46:59.623Z [DEBUG] TestAgent_NodeMaintenance_Enable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.654Z [WARN]  TestAgent_NodeMaintenance_Enable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: consul server down
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: shutdown complete
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: Stopping server: protocol=DNS address=127.0.0.1:16541 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: Stopping server: protocol=DNS address=127.0.0.1:16541 network=udp
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: Stopping server: protocol=HTTP address=127.0.0.1:16542 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:59.657Z [INFO]  TestAgent_NodeMaintenance_Enable: Endpoints down
> === CONT  TestAgent_NodeMaintenance_BadRequest
> --- PASS: TestAgent_NodeMaintenance_BadRequest (0.33s)
>     writer.go:29: 2020-02-23T02:46:59.662Z [WARN]  TestAgent_NodeMaintenance_BadRequest: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:59.663Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:59.663Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:59.677Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7981c252-0620-0533-2944-c4351bb75717 Address:127.0.0.1:16576}]"
>     writer.go:29: 2020-02-23T02:46:59.677Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.serf.wan: serf: EventMemberJoin: Node-7981c252-0620-0533-2944-c4351bb75717.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:59.678Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.serf.lan: serf: EventMemberJoin: Node-7981c252-0620-0533-2944-c4351bb75717 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:59.678Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Started DNS server: address=127.0.0.1:16571 network=udp
>     writer.go:29: 2020-02-23T02:46:59.678Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.raft: entering follower state: follower="Node at 127.0.0.1:16576 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:59.681Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server: Handled event for server in area: event=member-join server=Node-7981c252-0620-0533-2944-c4351bb75717.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:59.682Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server: Adding LAN server: server="Node-7981c252-0620-0533-2944-c4351bb75717 (Addr: tcp/127.0.0.1:16576) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:59.683Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Started DNS server: address=127.0.0.1:16571 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.685Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Started HTTP server: address=127.0.0.1:16572 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.685Z [INFO]  TestAgent_NodeMaintenance_BadRequest: started state syncer
>     writer.go:29: 2020-02-23T02:46:59.722Z [WARN]  TestAgent_NodeMaintenance_BadRequest.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:59.722Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.raft: entering candidate state: node="Node at 127.0.0.1:16576 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:59.725Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:59.725Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.server.raft: vote granted: from=7981c252-0620-0533-2944-c4351bb75717 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:59.725Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:59.725Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.raft: entering leader state: leader="Node at 127.0.0.1:16576 [Leader]"
>     writer.go:29: 2020-02-23T02:46:59.725Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:59.725Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server: New leader elected: payload=Node-7981c252-0620-0533-2944-c4351bb75717
>     writer.go:29: 2020-02-23T02:46:59.732Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:59.737Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Synced node info
>     writer.go:29: 2020-02-23T02:46:59.741Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:59.741Z [INFO]  TestAgent_NodeMaintenance_BadRequest.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.741Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.server: Skipping self join check for node since the cluster is too small: node=Node-7981c252-0620-0533-2944-c4351bb75717
>     writer.go:29: 2020-02-23T02:46:59.741Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server: member joined, marking health alive: member=Node-7981c252-0620-0533-2944-c4351bb75717
>     writer.go:29: 2020-02-23T02:46:59.980Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Requesting shutdown
>     writer.go:29: 2020-02-23T02:46:59.980Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server: shutting down server
>     writer.go:29: 2020-02-23T02:46:59.980Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.980Z [WARN]  TestAgent_NodeMaintenance_BadRequest.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:59.980Z [DEBUG] TestAgent_NodeMaintenance_BadRequest.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:59.982Z [WARN]  TestAgent_NodeMaintenance_BadRequest.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: consul server down
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: shutdown complete
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Stopping server: protocol=DNS address=127.0.0.1:16571 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Stopping server: protocol=DNS address=127.0.0.1:16571 network=udp
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Stopping server: protocol=HTTP address=127.0.0.1:16572 network=tcp
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:46:59.984Z [INFO]  TestAgent_NodeMaintenance_BadRequest: Endpoints down
> === CONT  TestAgent_ServiceMaintenance_Disable
> --- PASS: TestAgent_ServiceMaintenance_Disable (0.31s)
>     writer.go:29: 2020-02-23T02:46:59.992Z [WARN]  TestAgent_ServiceMaintenance_Disable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:59.992Z [DEBUG] TestAgent_ServiceMaintenance_Disable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:59.993Z [DEBUG] TestAgent_ServiceMaintenance_Disable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:00.005Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:74a98288-6d2c-f3fa-d006-4ad171752c44 Address:127.0.0.1:16564}]"
>     writer.go:29: 2020-02-23T02:47:00.005Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.raft: entering follower state: follower="Node at 127.0.0.1:16564 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:00.005Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.serf.wan: serf: EventMemberJoin: Node-74a98288-6d2c-f3fa-d006-4ad171752c44.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.serf.lan: serf: EventMemberJoin: Node-74a98288-6d2c-f3fa-d006-4ad171752c44 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable.server: Adding LAN server: server="Node-74a98288-6d2c-f3fa-d006-4ad171752c44 (Addr: tcp/127.0.0.1:16564) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable: Started DNS server: address=127.0.0.1:16559 network=udp
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable.server: Handled event for server in area: event=member-join server=Node-74a98288-6d2c-f3fa-d006-4ad171752c44.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable: Started DNS server: address=127.0.0.1:16559 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable: Started HTTP server: address=127.0.0.1:16560 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.006Z [INFO]  TestAgent_ServiceMaintenance_Disable: started state syncer
>     writer.go:29: 2020-02-23T02:47:00.045Z [WARN]  TestAgent_ServiceMaintenance_Disable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:00.045Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.raft: entering candidate state: node="Node at 127.0.0.1:16564 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:00.099Z [DEBUG] TestAgent_ServiceMaintenance_Disable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:00.099Z [DEBUG] TestAgent_ServiceMaintenance_Disable.server.raft: vote granted: from=74a98288-6d2c-f3fa-d006-4ad171752c44 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:00.099Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:00.099Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.raft: entering leader state: leader="Node at 127.0.0.1:16564 [Leader]"
>     writer.go:29: 2020-02-23T02:47:00.099Z [INFO]  TestAgent_ServiceMaintenance_Disable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:00.099Z [INFO]  TestAgent_ServiceMaintenance_Disable.server: New leader elected: payload=Node-74a98288-6d2c-f3fa-d006-4ad171752c44
>     writer.go:29: 2020-02-23T02:47:00.142Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:00.172Z [INFO]  TestAgent_ServiceMaintenance_Disable: Synced node info
>     writer.go:29: 2020-02-23T02:47:00.175Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:00.175Z [INFO]  TestAgent_ServiceMaintenance_Disable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.175Z [DEBUG] TestAgent_ServiceMaintenance_Disable.server: Skipping self join check for node since the cluster is too small: node=Node-74a98288-6d2c-f3fa-d006-4ad171752c44
>     writer.go:29: 2020-02-23T02:47:00.175Z [INFO]  TestAgent_ServiceMaintenance_Disable.server: member joined, marking health alive: member=Node-74a98288-6d2c-f3fa-d006-4ad171752c44
>     writer.go:29: 2020-02-23T02:47:00.285Z [INFO]  TestAgent_ServiceMaintenance_Disable: Service entered maintenance mode: service=test
>     writer.go:29: 2020-02-23T02:47:00.285Z [DEBUG] TestAgent_ServiceMaintenance_Disable: removed check: check=_service_maintenance:test
>     writer.go:29: 2020-02-23T02:47:00.285Z [INFO]  TestAgent_ServiceMaintenance_Disable: Service left maintenance mode: service=test
>     writer.go:29: 2020-02-23T02:47:00.285Z [DEBUG] TestAgent_ServiceMaintenance_Disable: Node info in sync
>     writer.go:29: 2020-02-23T02:47:00.289Z [INFO]  TestAgent_ServiceMaintenance_Disable: Synced service: service=test
>     writer.go:29: 2020-02-23T02:47:00.290Z [INFO]  TestAgent_ServiceMaintenance_Disable: Deregistered check: check=_service_maintenance:test
>     writer.go:29: 2020-02-23T02:47:00.290Z [INFO]  TestAgent_ServiceMaintenance_Disable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:00.290Z [INFO]  TestAgent_ServiceMaintenance_Disable.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:00.290Z [DEBUG] TestAgent_ServiceMaintenance_Disable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.290Z [WARN]  TestAgent_ServiceMaintenance_Disable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.290Z [DEBUG] TestAgent_ServiceMaintenance_Disable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.292Z [WARN]  TestAgent_ServiceMaintenance_Disable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: consul server down
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: shutdown complete
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: Stopping server: protocol=DNS address=127.0.0.1:16559 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: Stopping server: protocol=DNS address=127.0.0.1:16559 network=udp
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: Stopping server: protocol=HTTP address=127.0.0.1:16560 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:00.294Z [INFO]  TestAgent_ServiceMaintenance_Disable: Endpoints down
> === CONT  TestAgent_ServiceMaintenance_BadRequest
> === RUN   TestAgent_ServiceMaintenance_BadRequest/not_enabled
> === RUN   TestAgent_ServiceMaintenance_BadRequest/no_service_id
> === RUN   TestAgent_ServiceMaintenance_BadRequest/bad_service_id
> --- PASS: TestAgent_ServiceMaintenance_BadRequest (0.17s)
>     writer.go:29: 2020-02-23T02:47:00.301Z [WARN]  TestAgent_ServiceMaintenance_BadRequest: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:00.301Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:00.302Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:00.311Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0c54255a-11b3-2797-1992-1841088e7efe Address:127.0.0.1:16558}]"
>     writer.go:29: 2020-02-23T02:47:00.311Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.raft: entering follower state: follower="Node at 127.0.0.1:16558 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:00.311Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.serf.wan: serf: EventMemberJoin: Node-0c54255a-11b3-2797-1992-1841088e7efe.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.serf.lan: serf: EventMemberJoin: Node-0c54255a-11b3-2797-1992-1841088e7efe 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server: Handled event for server in area: event=member-join server=Node-0c54255a-11b3-2797-1992-1841088e7efe.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server: Adding LAN server: server="Node-0c54255a-11b3-2797-1992-1841088e7efe (Addr: tcp/127.0.0.1:16558) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Started DNS server: address=127.0.0.1:16553 network=udp
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Started DNS server: address=127.0.0.1:16553 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Started HTTP server: address=127.0.0.1:16554 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.312Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: started state syncer
>     writer.go:29: 2020-02-23T02:47:00.366Z [WARN]  TestAgent_ServiceMaintenance_BadRequest.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:00.366Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.raft: entering candidate state: node="Node at 127.0.0.1:16558 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:00.369Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:00.369Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.server.raft: vote granted: from=0c54255a-11b3-2797-1992-1841088e7efe term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:00.369Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:00.369Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.raft: entering leader state: leader="Node at 127.0.0.1:16558 [Leader]"
>     writer.go:29: 2020-02-23T02:47:00.369Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:00.369Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server: New leader elected: payload=Node-0c54255a-11b3-2797-1992-1841088e7efe
>     writer.go:29: 2020-02-23T02:47:00.377Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:00.385Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:00.385Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.385Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.server: Skipping self join check for node since the cluster is too small: node=Node-0c54255a-11b3-2797-1992-1841088e7efe
>     writer.go:29: 2020-02-23T02:47:00.385Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server: member joined, marking health alive: member=Node-0c54255a-11b3-2797-1992-1841088e7efe
>     writer.go:29: 2020-02-23T02:47:00.424Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:00.427Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Synced node info
>     --- PASS: TestAgent_ServiceMaintenance_BadRequest/not_enabled (0.00s)
>     --- PASS: TestAgent_ServiceMaintenance_BadRequest/no_service_id (0.00s)
>     --- PASS: TestAgent_ServiceMaintenance_BadRequest/bad_service_id (0.00s)
>     writer.go:29: 2020-02-23T02:47:00.464Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:00.464Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:00.464Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.464Z [WARN]  TestAgent_ServiceMaintenance_BadRequest.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.464Z [DEBUG] TestAgent_ServiceMaintenance_BadRequest.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.467Z [WARN]  TestAgent_ServiceMaintenance_BadRequest.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.468Z [INFO]  TestAgent_ServiceMaintenance_BadRequest.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:00.468Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: consul server down
>     writer.go:29: 2020-02-23T02:47:00.468Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: shutdown complete
>     writer.go:29: 2020-02-23T02:47:00.468Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Stopping server: protocol=DNS address=127.0.0.1:16553 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.468Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Stopping server: protocol=DNS address=127.0.0.1:16553 network=udp
>     writer.go:29: 2020-02-23T02:47:00.468Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Stopping server: protocol=HTTP address=127.0.0.1:16554 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.469Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:00.469Z [INFO]  TestAgent_ServiceMaintenance_BadRequest: Endpoints down
> === CONT  TestAgent_DeregisterService_ACLDeny
> === RUN   TestAgent_DeregisterService_ACLDeny/no_token
> === RUN   TestAgent_DeregisterService_ACLDeny/root_token
> --- PASS: TestAgent_DeregisterService_ACLDeny (0.17s)
>     writer.go:29: 2020-02-23T02:47:00.476Z [WARN]  TestAgent_DeregisterService_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:00.476Z [WARN]  TestAgent_DeregisterService_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:00.476Z [DEBUG] TestAgent_DeregisterService_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:00.477Z [DEBUG] TestAgent_DeregisterService_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:00.486Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9 Address:127.0.0.1:16594}]"
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16594 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny: Started DNS server: address=127.0.0.1:16589 network=udp
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Adding LAN server: server="Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9 (Addr: tcp/127.0.0.1:16594) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Handled event for server in area: event=member-join server=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.487Z [INFO]  TestAgent_DeregisterService_ACLDeny: Started DNS server: address=127.0.0.1:16589 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.488Z [INFO]  TestAgent_DeregisterService_ACLDeny: Started HTTP server: address=127.0.0.1:16590 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.488Z [INFO]  TestAgent_DeregisterService_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:00.524Z [WARN]  TestAgent_DeregisterService_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:00.524Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16594 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:00.527Z [DEBUG] TestAgent_DeregisterService_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:00.527Z [DEBUG] TestAgent_DeregisterService_ACLDeny.server.raft: vote granted: from=8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:00.527Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:00.528Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16594 [Leader]"
>     writer.go:29: 2020-02-23T02:47:00.528Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:00.528Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: New leader elected: payload=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.530Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:00.531Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:00.531Z [WARN]  TestAgent_DeregisterService_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:00.534Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:00.539Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:00.539Z [WARN]  TestAgent_DeregisterService_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:00.551Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:00.551Z [INFO]  TestAgent_DeregisterService_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:00.551Z [INFO]  TestAgent_DeregisterService_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:00.551Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.551Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9.dc1
>     writer.go:29: 2020-02-23T02:47:00.551Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Handled event for server in area: event=member-update server=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.553Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:00.553Z [DEBUG] TestAgent_DeregisterService_ACLDeny.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:00.553Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.553Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9.dc1
>     writer.go:29: 2020-02-23T02:47:00.553Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: Handled event for server in area: event=member-update server=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.564Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:00.577Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:00.577Z [INFO]  TestAgent_DeregisterService_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.577Z [DEBUG] TestAgent_DeregisterService_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.577Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: member joined, marking health alive: member=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.578Z [DEBUG] TestAgent_DeregisterService_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.578Z [DEBUG] TestAgent_DeregisterService_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.628Z [DEBUG] TestAgent_DeregisterService_ACLDeny.acl: dropping node from result due to ACLs: node=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     writer.go:29: 2020-02-23T02:47:00.628Z [DEBUG] TestAgent_DeregisterService_ACLDeny.acl: dropping node from result due to ACLs: node=Node-8ff5915b-7ce0-5861-e095-1cc2fbdeb2a9
>     --- PASS: TestAgent_DeregisterService_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:00.629Z [DEBUG] TestAgent_DeregisterService_ACLDeny: removed service: service=test
>     writer.go:29: 2020-02-23T02:47:00.632Z [INFO]  TestAgent_DeregisterService_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:00.632Z [INFO]  TestAgent_DeregisterService_ACLDeny: Deregistered service: service=test
>     --- PASS: TestAgent_DeregisterService_ACLDeny/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:00.632Z [INFO]  TestAgent_DeregisterService_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:00.632Z [INFO]  TestAgent_DeregisterService_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:00.632Z [DEBUG] TestAgent_DeregisterService_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:00.632Z [DEBUG] TestAgent_DeregisterService_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:00.632Z [DEBUG] TestAgent_DeregisterService_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.632Z [WARN]  TestAgent_DeregisterService_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.632Z [ERROR] TestAgent_DeregisterService_ACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:00.632Z [DEBUG] TestAgent_DeregisterService_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:00.632Z [DEBUG] TestAgent_DeregisterService_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:00.632Z [DEBUG] TestAgent_DeregisterService_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.634Z [WARN]  TestAgent_DeregisterService_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16589 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16589 network=udp
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16590 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:00.636Z [INFO]  TestAgent_DeregisterService_ACLDeny: Endpoints down
> === CONT  TestAgent_DeregisterService
> --- PASS: TestAgent_DeregisterService (0.16s)
>     writer.go:29: 2020-02-23T02:47:00.643Z [WARN]  TestAgent_DeregisterService: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:00.643Z [DEBUG] TestAgent_DeregisterService.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:00.643Z [DEBUG] TestAgent_DeregisterService.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:00.652Z [INFO]  TestAgent_DeregisterService.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:028ffd03-c9a5-3d26-65c6-c43617956f38 Address:127.0.0.1:16588}]"
>     writer.go:29: 2020-02-23T02:47:00.652Z [INFO]  TestAgent_DeregisterService.server.raft: entering follower state: follower="Node at 127.0.0.1:16588 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:00.652Z [INFO]  TestAgent_DeregisterService.server.serf.wan: serf: EventMemberJoin: Node-028ffd03-c9a5-3d26-65c6-c43617956f38.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.653Z [INFO]  TestAgent_DeregisterService.server.serf.lan: serf: EventMemberJoin: Node-028ffd03-c9a5-3d26-65c6-c43617956f38 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.653Z [INFO]  TestAgent_DeregisterService.server: Adding LAN server: server="Node-028ffd03-c9a5-3d26-65c6-c43617956f38 (Addr: tcp/127.0.0.1:16588) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:00.653Z [INFO]  TestAgent_DeregisterService: Started DNS server: address=127.0.0.1:16583 network=udp
>     writer.go:29: 2020-02-23T02:47:00.653Z [INFO]  TestAgent_DeregisterService.server: Handled event for server in area: event=member-join server=Node-028ffd03-c9a5-3d26-65c6-c43617956f38.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.653Z [INFO]  TestAgent_DeregisterService: Started DNS server: address=127.0.0.1:16583 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.654Z [INFO]  TestAgent_DeregisterService: Started HTTP server: address=127.0.0.1:16584 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.654Z [INFO]  TestAgent_DeregisterService: started state syncer
>     writer.go:29: 2020-02-23T02:47:00.693Z [WARN]  TestAgent_DeregisterService.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:00.693Z [INFO]  TestAgent_DeregisterService.server.raft: entering candidate state: node="Node at 127.0.0.1:16588 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:00.696Z [DEBUG] TestAgent_DeregisterService.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:00.696Z [DEBUG] TestAgent_DeregisterService.server.raft: vote granted: from=028ffd03-c9a5-3d26-65c6-c43617956f38 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:00.696Z [INFO]  TestAgent_DeregisterService.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:00.697Z [INFO]  TestAgent_DeregisterService.server.raft: entering leader state: leader="Node at 127.0.0.1:16588 [Leader]"
>     writer.go:29: 2020-02-23T02:47:00.697Z [INFO]  TestAgent_DeregisterService.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:00.697Z [INFO]  TestAgent_DeregisterService.server: New leader elected: payload=Node-028ffd03-c9a5-3d26-65c6-c43617956f38
>     writer.go:29: 2020-02-23T02:47:00.703Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:00.711Z [INFO]  TestAgent_DeregisterService.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:00.711Z [INFO]  TestAgent_DeregisterService.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.711Z [DEBUG] TestAgent_DeregisterService.server: Skipping self join check for node since the cluster is too small: node=Node-028ffd03-c9a5-3d26-65c6-c43617956f38
>     writer.go:29: 2020-02-23T02:47:00.711Z [INFO]  TestAgent_DeregisterService.server: member joined, marking health alive: member=Node-028ffd03-c9a5-3d26-65c6-c43617956f38
>     writer.go:29: 2020-02-23T02:47:00.749Z [DEBUG] TestAgent_DeregisterService: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:00.752Z [INFO]  TestAgent_DeregisterService: Synced node info
>     writer.go:29: 2020-02-23T02:47:00.794Z [DEBUG] TestAgent_DeregisterService: removed service: service=test
>     writer.go:29: 2020-02-23T02:47:00.794Z [DEBUG] TestAgent_DeregisterService: Node info in sync
>     writer.go:29: 2020-02-23T02:47:00.796Z [INFO]  TestAgent_DeregisterService: Deregistered service: service=test
>     writer.go:29: 2020-02-23T02:47:00.796Z [INFO]  TestAgent_DeregisterService: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:00.796Z [INFO]  TestAgent_DeregisterService.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:00.796Z [DEBUG] TestAgent_DeregisterService.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.796Z [WARN]  TestAgent_DeregisterService.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.796Z [DEBUG] TestAgent_DeregisterService.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.798Z [WARN]  TestAgent_DeregisterService.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: consul server down
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: shutdown complete
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: Stopping server: protocol=DNS address=127.0.0.1:16583 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: Stopping server: protocol=DNS address=127.0.0.1:16583 network=udp
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: Stopping server: protocol=HTTP address=127.0.0.1:16584 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:00.800Z [INFO]  TestAgent_DeregisterService: Endpoints down
> === CONT  TestAgent_UpdateCheck_ACLDeny
> === RUN   TestAgent_UpdateCheck_ACLDeny/no_token
> === RUN   TestAgent_UpdateCheck_ACLDeny/root_token
> --- PASS: TestAgent_UpdateCheck_ACLDeny (0.36s)
>     writer.go:29: 2020-02-23T02:47:00.808Z [WARN]  TestAgent_UpdateCheck_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:00.808Z [WARN]  TestAgent_UpdateCheck_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:00.808Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:00.809Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:00.818Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:673b18d1-4621-900e-496a-9163d356ee11 Address:127.0.0.1:16600}]"
>     writer.go:29: 2020-02-23T02:47:00.818Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16600 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:00.818Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-673b18d1-4621-900e-496a-9163d356ee11.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.818Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-673b18d1-4621-900e-496a-9163d356ee11 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:00.819Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Adding LAN server: server="Node-673b18d1-4621-900e-496a-9163d356ee11 (Addr: tcp/127.0.0.1:16600) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:00.819Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Handled event for server in area: event=member-join server=Node-673b18d1-4621-900e-496a-9163d356ee11.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.819Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Started DNS server: address=127.0.0.1:16595 network=udp
>     writer.go:29: 2020-02-23T02:47:00.819Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Started DNS server: address=127.0.0.1:16595 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.819Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Started HTTP server: address=127.0.0.1:16596 network=tcp
>     writer.go:29: 2020-02-23T02:47:00.819Z [INFO]  TestAgent_UpdateCheck_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:00.864Z [WARN]  TestAgent_UpdateCheck_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:00.864Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16600 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:00.868Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:00.868Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.server.raft: vote granted: from=673b18d1-4621-900e-496a-9163d356ee11 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:00.868Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:00.868Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16600 [Leader]"
>     writer.go:29: 2020-02-23T02:47:00.868Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:00.868Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: New leader elected: payload=Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:00.869Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:00.870Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:00.870Z [WARN]  TestAgent_UpdateCheck_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:00.870Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:00.871Z [WARN]  TestAgent_UpdateCheck_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:00.877Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:00.877Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:00.880Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-673b18d1-4621-900e-496a-9163d356ee11.dc1
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Handled event for server in area: event=member-update server=Node-673b18d1-4621-900e-496a-9163d356ee11.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.880Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-673b18d1-4621-900e-496a-9163d356ee11.dc1
>     writer.go:29: 2020-02-23T02:47:00.881Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: Handled event for server in area: event=member-update server=Node-673b18d1-4621-900e-496a-9163d356ee11.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:00.884Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:00.891Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:00.891Z [INFO]  TestAgent_UpdateCheck_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:00.892Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:00.892Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: member joined, marking health alive: member=Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:00.893Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:00.893Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:01.146Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-673b18d1-4621-900e-496a-9163d356ee11
>     writer.go:29: 2020-02-23T02:47:01.146Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-673b18d1-4621-900e-496a-9163d356ee11
>     --- PASS: TestAgent_UpdateCheck_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:01.147Z [DEBUG] TestAgent_UpdateCheck_ACLDeny: Check status updated: check=test status=passing
>     writer.go:29: 2020-02-23T02:47:01.152Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:01.152Z [WARN]  TestAgent_UpdateCheck_ACLDeny: Check registration blocked by ACLs: check=test accessorID=
>     --- PASS: TestAgent_UpdateCheck_ACLDeny/root_token (0.01s)
>     writer.go:29: 2020-02-23T02:47:01.152Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:01.152Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:01.152Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.152Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.152Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.152Z [WARN]  TestAgent_UpdateCheck_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.152Z [ERROR] TestAgent_UpdateCheck_ACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:01.152Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.153Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.153Z [DEBUG] TestAgent_UpdateCheck_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.154Z [WARN]  TestAgent_UpdateCheck_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16595 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16595 network=udp
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16596 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:01.156Z [INFO]  TestAgent_UpdateCheck_ACLDeny: Endpoints down
> === CONT  TestAgent_FailCheck_ACLDeny
> --- PASS: TestAgent_TokenTriggersFullSync (4.48s)
>     --- PASS: TestAgent_TokenTriggersFullSync/acl_agent_token (0.17s)
>         writer.go:29: 2020-02-23T02:46:56.955Z [WARN]  TestAgent_TokenTriggersFullSync/acl_agent_token: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:56.955Z [WARN]  TestAgent_TokenTriggersFullSync/acl_agent_token: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:56.955Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:56.956Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:56.964Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e5fc83f9-414c-e3f1-182d-71a9f97580e2 Address:127.0.0.1:16498}]"
>         writer.go:29: 2020-02-23T02:46:56.964Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: entering follower state: follower="Node at 127.0.0.1:16498 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:56.965Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.serf.wan: serf: EventMemberJoin: Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:56.965Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.serf.lan: serf: EventMemberJoin: Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:56.965Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Adding LAN server: server="Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2 (Addr: tcp/127.0.0.1:16498) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:56.965Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Started DNS server: address=127.0.0.1:16493 network=udp
>         writer.go:29: 2020-02-23T02:46:56.965Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Handled event for server in area: event=member-join server=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:56.965Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Started DNS server: address=127.0.0.1:16493 network=tcp
>         writer.go:29: 2020-02-23T02:46:56.966Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Started HTTP server: address=127.0.0.1:16494 network=tcp
>         writer.go:29: 2020-02-23T02:46:56.966Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: started state syncer
>         writer.go:29: 2020-02-23T02:46:57.001Z [WARN]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:57.001Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: entering candidate state: node="Node at 127.0.0.1:16498 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:57.006Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:57.006Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: vote granted: from=e5fc83f9-414c-e3f1-182d-71a9f97580e2 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:57.006Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:57.006Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.raft: entering leader state: leader="Node at 127.0.0.1:16498 [Leader]"
>         writer.go:29: 2020-02-23T02:46:57.006Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:57.006Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: New leader elected: payload=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.008Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:57.009Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:57.009Z [WARN]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:57.012Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:57.015Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:57.015Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:57.015Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:57.015Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.serf.lan: serf: EventMemberUpdate: Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.015Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.serf.wan: serf: EventMemberUpdate: Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2.dc1
>         writer.go:29: 2020-02-23T02:46:57.015Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: Handled event for server in area: event=member-update server=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:57.019Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:57.026Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:57.026Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:57.026Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.server: Skipping self join check for node since the cluster is too small: node=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.026Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: member joined, marking health alive: member=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.029Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.server: Skipping self join check for node since the cluster is too small: node=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.072Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.acl: dropping node from result due to ACLs: node=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.072Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.acl: dropping node from result due to ACLs: node=Node-e5fc83f9-414c-e3f1-182d-71a9f97580e2
>         writer.go:29: 2020-02-23T02:46:57.078Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Updated agent's ACL token: token=acl_agent_token
>         writer.go:29: 2020-02-23T02:46:57.084Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.acl: dropping check from result due to ACLs: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:57.085Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Synced node info
>         writer.go:29: 2020-02-23T02:46:57.103Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:57.103Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:57.103Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:57.103Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:57.103Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:57.103Z [WARN]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:57.104Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:57.104Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:57.104Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_agent_token.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:57.105Z [WARN]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: consul server down
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: shutdown complete
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Stopping server: protocol=DNS address=127.0.0.1:16493 network=tcp
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Stopping server: protocol=DNS address=127.0.0.1:16493 network=udp
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Stopping server: protocol=HTTP address=127.0.0.1:16494 network=tcp
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:57.107Z [INFO]  TestAgent_TokenTriggersFullSync/acl_agent_token: Endpoints down
>     --- PASS: TestAgent_TokenTriggersFullSync/agent (1.56s)
>         writer.go:29: 2020-02-23T02:46:57.115Z [WARN]  TestAgent_TokenTriggersFullSync/agent: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:57.115Z [WARN]  TestAgent_TokenTriggersFullSync/agent: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:57.115Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:57.116Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:57.125Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:75e791dc-bd77-b708-2457-5c81327a02aa Address:127.0.0.1:16504}]"
>         writer.go:29: 2020-02-23T02:46:57.125Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.raft: entering follower state: follower="Node at 127.0.0.1:16504 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:57.126Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.serf.wan: serf: EventMemberJoin: Node-75e791dc-bd77-b708-2457-5c81327a02aa.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:57.126Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.serf.lan: serf: EventMemberJoin: Node-75e791dc-bd77-b708-2457-5c81327a02aa 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:57.127Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: Handled event for server in area: event=member-join server=Node-75e791dc-bd77-b708-2457-5c81327a02aa.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:57.127Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: Adding LAN server: server="Node-75e791dc-bd77-b708-2457-5c81327a02aa (Addr: tcp/127.0.0.1:16504) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:57.131Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Started DNS server: address=127.0.0.1:16499 network=tcp
>         writer.go:29: 2020-02-23T02:46:57.131Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Started DNS server: address=127.0.0.1:16499 network=udp
>         writer.go:29: 2020-02-23T02:46:57.131Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Started HTTP server: address=127.0.0.1:16500 network=tcp
>         writer.go:29: 2020-02-23T02:46:57.132Z [INFO]  TestAgent_TokenTriggersFullSync/agent: started state syncer
>         writer.go:29: 2020-02-23T02:46:57.189Z [WARN]  TestAgent_TokenTriggersFullSync/agent.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:57.189Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.raft: entering candidate state: node="Node at 127.0.0.1:16504 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:57.193Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:57.193Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.server.raft: vote granted: from=75e791dc-bd77-b708-2457-5c81327a02aa term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:57.193Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:57.193Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.raft: entering leader state: leader="Node at 127.0.0.1:16504 [Leader]"
>         writer.go:29: 2020-02-23T02:46:57.193Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:57.193Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: New leader elected: payload=Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.195Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:57.196Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:57.196Z [WARN]  TestAgent_TokenTriggersFullSync/agent.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:57.199Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:57.202Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:57.202Z [INFO]  TestAgent_TokenTriggersFullSync/agent.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:57.202Z [INFO]  TestAgent_TokenTriggersFullSync/agent.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:57.202Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.serf.lan: serf: EventMemberUpdate: Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.203Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.serf.wan: serf: EventMemberUpdate: Node-75e791dc-bd77-b708-2457-5c81327a02aa.dc1
>         writer.go:29: 2020-02-23T02:46:57.203Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: Handled event for server in area: event=member-update server=Node-75e791dc-bd77-b708-2457-5c81327a02aa.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:57.206Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:57.213Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:57.213Z [INFO]  TestAgent_TokenTriggersFullSync/agent.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:57.213Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.server: Skipping self join check for node since the cluster is too small: node=Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.213Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: member joined, marking health alive: member=Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.216Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.server: Skipping self join check for node since the cluster is too small: node=Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.253Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.acl: dropping check from result due to ACLs: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:57.253Z [WARN]  TestAgent_TokenTriggersFullSync/agent: Node info update blocked by ACLs: node=75e791dc-bd77-b708-2457-5c81327a02aa accessorID=
>         writer.go:29: 2020-02-23T02:46:57.253Z [DEBUG] TestAgent_TokenTriggersFullSync/agent: Node info in sync
>         writer.go:29: 2020-02-23T02:46:57.442Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.acl: dropping node from result due to ACLs: node=Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.442Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.acl: dropping node from result due to ACLs: node=Node-75e791dc-bd77-b708-2457-5c81327a02aa
>         writer.go:29: 2020-02-23T02:46:57.449Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Updated agent's ACL token: token=agent
>         writer.go:29: 2020-02-23T02:46:58.655Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.acl: dropping service from result due to ACLs: service="{consul {}}"
>         writer.go:29: 2020-02-23T02:46:58.655Z [DEBUG] TestAgent_TokenTriggersFullSync/agent: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:58.657Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Synced node info
>         writer.go:29: 2020-02-23T02:46:58.659Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:58.659Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:58.659Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:58.659Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:58.659Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:58.659Z [WARN]  TestAgent_TokenTriggersFullSync/agent.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:58.660Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:58.660Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:58.660Z [DEBUG] TestAgent_TokenTriggersFullSync/agent.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:58.661Z [WARN]  TestAgent_TokenTriggersFullSync/agent.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: consul server down
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: shutdown complete
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Stopping server: protocol=DNS address=127.0.0.1:16499 network=tcp
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Stopping server: protocol=DNS address=127.0.0.1:16499 network=udp
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Stopping server: protocol=HTTP address=127.0.0.1:16500 network=tcp
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:58.663Z [INFO]  TestAgent_TokenTriggersFullSync/agent: Endpoints down
>     --- PASS: TestAgent_TokenTriggersFullSync/acl_token (0.76s)
>         writer.go:29: 2020-02-23T02:46:58.671Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:58.671Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:58.671Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:58.672Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:58.681Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb Address:127.0.0.1:16522}]"
>         writer.go:29: 2020-02-23T02:46:58.681Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.raft: entering follower state: follower="Node at 127.0.0.1:16522 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:58.681Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.serf.wan: serf: EventMemberJoin: Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.serf.lan: serf: EventMemberJoin: Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: Adding LAN server: server="Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb (Addr: tcp/127.0.0.1:16522) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: Handled event for server in area: event=member-join server=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Started DNS server: address=127.0.0.1:16517 network=udp
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Started DNS server: address=127.0.0.1:16517 network=tcp
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Started HTTP server: address=127.0.0.1:16518 network=tcp
>         writer.go:29: 2020-02-23T02:46:58.682Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: started state syncer
>         writer.go:29: 2020-02-23T02:46:58.736Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:58.736Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.raft: entering candidate state: node="Node at 127.0.0.1:16522 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:58.739Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:58.739Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.server.raft: vote granted: from=cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:58.739Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:58.739Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.raft: entering leader state: leader="Node at 127.0.0.1:16522 [Leader]"
>         writer.go:29: 2020-02-23T02:46:58.739Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:58.739Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: New leader elected: payload=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:58.742Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:58.745Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:58.745Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:58.748Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.serf.lan: serf: EventMemberUpdate: Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.serf.wan: serf: EventMemberUpdate: Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb.dc1
>         writer.go:29: 2020-02-23T02:46:58.752Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: Handled event for server in area: event=member-update server=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:58.757Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:58.766Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:58.766Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:58.766Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.server: Skipping self join check for node since the cluster is too small: node=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:58.766Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: member joined, marking health alive: member=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:58.772Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.server: Skipping self join check for node since the cluster is too small: node=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:58.966Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.acl: dropping node from result due to ACLs: node=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:58.966Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.acl: dropping node from result due to ACLs: node=Node-cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb
>         writer.go:29: 2020-02-23T02:46:59.000Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.acl: dropping check from result due to ACLs: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:59.001Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token: Node info update blocked by ACLs: node=cc0d4a0f-3e42-4632-2d8c-a8b2d91761eb accessorID=
>         writer.go:29: 2020-02-23T02:46:59.001Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token: Node info in sync
>         writer.go:29: 2020-02-23T02:46:59.011Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Updated agent's ACL token: token=acl_token
>         writer.go:29: 2020-02-23T02:46:59.414Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.acl: dropping service from result due to ACLs: service="{consul {}}"
>         writer.go:29: 2020-02-23T02:46:59.414Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:46:59.416Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Synced node info
>         writer.go:29: 2020-02-23T02:46:59.416Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Requesting shutdown
>         writer.go:29: 2020-02-23T02:46:59.416Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server: shutting down server
>         writer.go:29: 2020-02-23T02:46:59.416Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:59.416Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:59.416Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:59.416Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:59.416Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:59.416Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:59.416Z [DEBUG] TestAgent_TokenTriggersFullSync/acl_token.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:59.418Z [WARN]  TestAgent_TokenTriggersFullSync/acl_token.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: consul server down
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: shutdown complete
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Stopping server: protocol=DNS address=127.0.0.1:16517 network=tcp
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Stopping server: protocol=DNS address=127.0.0.1:16517 network=udp
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Stopping server: protocol=HTTP address=127.0.0.1:16518 network=tcp
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:46:59.420Z [INFO]  TestAgent_TokenTriggersFullSync/acl_token: Endpoints down
>     --- PASS: TestAgent_TokenTriggersFullSync/default (2.00s)
>         writer.go:29: 2020-02-23T02:46:59.431Z [WARN]  TestAgent_TokenTriggersFullSync/default: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:46:59.431Z [WARN]  TestAgent_TokenTriggersFullSync/default: bootstrap = true: do not enable unless necessary
>         writer.go:29: 2020-02-23T02:46:59.431Z [DEBUG] TestAgent_TokenTriggersFullSync/default.tlsutil: Update: version=1
>         writer.go:29: 2020-02-23T02:46:59.431Z [DEBUG] TestAgent_TokenTriggersFullSync/default.tlsutil: OutgoingRPCWrapper: version=1
>         writer.go:29: 2020-02-23T02:46:59.477Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8fe0ed6c-31f6-29ef-8f10-1002d5158359 Address:127.0.0.1:16570}]"
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.raft: entering follower state: follower="Node at 127.0.0.1:16570 [Follower]" leader=
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.serf.wan: serf: EventMemberJoin: Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359.dc1 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.serf.lan: serf: EventMemberJoin: Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359 127.0.0.1
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Adding LAN server: server="Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359 (Addr: tcp/127.0.0.1:16570) (DC: dc1)"
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Handled event for server in area: event=member-join server=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default: Started DNS server: address=127.0.0.1:16565 network=udp
>         writer.go:29: 2020-02-23T02:46:59.478Z [INFO]  TestAgent_TokenTriggersFullSync/default: Started DNS server: address=127.0.0.1:16565 network=tcp
>         writer.go:29: 2020-02-23T02:46:59.479Z [INFO]  TestAgent_TokenTriggersFullSync/default: Started HTTP server: address=127.0.0.1:16566 network=tcp
>         writer.go:29: 2020-02-23T02:46:59.479Z [INFO]  TestAgent_TokenTriggersFullSync/default: started state syncer
>         writer.go:29: 2020-02-23T02:46:59.537Z [WARN]  TestAgent_TokenTriggersFullSync/default.server.raft: heartbeat timeout reached, starting election: last-leader=
>         writer.go:29: 2020-02-23T02:46:59.537Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.raft: entering candidate state: node="Node at 127.0.0.1:16570 [Candidate]" term=2
>         writer.go:29: 2020-02-23T02:46:59.596Z [DEBUG] TestAgent_TokenTriggersFullSync/default.server.raft: votes: needed=1
>         writer.go:29: 2020-02-23T02:46:59.596Z [DEBUG] TestAgent_TokenTriggersFullSync/default.server.raft: vote granted: from=8fe0ed6c-31f6-29ef-8f10-1002d5158359 term=2 tally=1
>         writer.go:29: 2020-02-23T02:46:59.596Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.raft: election won: tally=1
>         writer.go:29: 2020-02-23T02:46:59.596Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.raft: entering leader state: leader="Node at 127.0.0.1:16570 [Leader]"
>         writer.go:29: 2020-02-23T02:46:59.596Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: cluster leadership acquired
>         writer.go:29: 2020-02-23T02:46:59.596Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: New leader elected: payload=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.629Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:59.642Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: initializing acls
>         writer.go:29: 2020-02-23T02:46:59.654Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:59.654Z [WARN]  TestAgent_TokenTriggersFullSync/default.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:59.663Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Created ACL 'global-management' policy
>         writer.go:29: 2020-02-23T02:46:59.663Z [WARN]  TestAgent_TokenTriggersFullSync/default.server: Configuring a non-UUID master token is deprecated
>         writer.go:29: 2020-02-23T02:46:59.671Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:59.673Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Bootstrapped ACL master token from configuration
>         writer.go:29: 2020-02-23T02:46:59.673Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Created ACL anonymous token from configuration
>         writer.go:29: 2020-02-23T02:46:59.673Z [INFO]  TestAgent_TokenTriggersFullSync/default.leader: started routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.leader: started routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:46:59.674Z [DEBUG] TestAgent_TokenTriggersFullSync/default.server: transitioning out of legacy ACL mode
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.serf.lan: serf: EventMemberUpdate: Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.serf.wan: serf: EventMemberUpdate: Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359.dc1
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.serf.lan: serf: EventMemberUpdate: Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.serf.wan: serf: EventMemberUpdate: Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359.dc1
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Handled event for server in area: event=member-update server=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:59.674Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: Handled event for server in area: event=member-update server=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359.dc1 area=wan
>         writer.go:29: 2020-02-23T02:46:59.678Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>         writer.go:29: 2020-02-23T02:46:59.688Z [WARN]  TestAgent_TokenTriggersFullSync/default: Node info update blocked by ACLs: node=8fe0ed6c-31f6-29ef-8f10-1002d5158359 accessorID=
>         writer.go:29: 2020-02-23T02:46:59.688Z [DEBUG] TestAgent_TokenTriggersFullSync/default: Node info in sync
>         writer.go:29: 2020-02-23T02:46:59.702Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.connect: initialized primary datacenter CA with provider: provider=consul
>         writer.go:29: 2020-02-23T02:46:59.702Z [INFO]  TestAgent_TokenTriggersFullSync/default.leader: started routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:46:59.703Z [DEBUG] TestAgent_TokenTriggersFullSync/default.server: Skipping self join check for node since the cluster is too small: node=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.703Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: member joined, marking health alive: member=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.710Z [DEBUG] TestAgent_TokenTriggersFullSync/default.server: Skipping self join check for node since the cluster is too small: node=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.710Z [DEBUG] TestAgent_TokenTriggersFullSync/default.server: Skipping self join check for node since the cluster is too small: node=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.885Z [DEBUG] TestAgent_TokenTriggersFullSync/default.acl: dropping node from result due to ACLs: node=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.885Z [DEBUG] TestAgent_TokenTriggersFullSync/default.acl: dropping node from result due to ACLs: node=Node-8fe0ed6c-31f6-29ef-8f10-1002d5158359
>         writer.go:29: 2020-02-23T02:46:59.890Z [INFO]  TestAgent_TokenTriggersFullSync/default: Updated agent's ACL token: token=default
>         writer.go:29: 2020-02-23T02:47:01.404Z [DEBUG] TestAgent_TokenTriggersFullSync/default.acl: dropping service from result due to ACLs: service="{consul {}}"
>         writer.go:29: 2020-02-23T02:47:01.404Z [DEBUG] TestAgent_TokenTriggersFullSync/default: Skipping remote check since it is managed automatically: check=serfHealth
>         writer.go:29: 2020-02-23T02:47:01.406Z [INFO]  TestAgent_TokenTriggersFullSync/default: Synced node info
>         writer.go:29: 2020-02-23T02:47:01.416Z [INFO]  TestAgent_TokenTriggersFullSync/default: Requesting shutdown
>         writer.go:29: 2020-02-23T02:47:01.416Z [INFO]  TestAgent_TokenTriggersFullSync/default.server: shutting down server
>         writer.go:29: 2020-02-23T02:47:01.416Z [DEBUG] TestAgent_TokenTriggersFullSync/default.leader: stopping routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:47:01.416Z [DEBUG] TestAgent_TokenTriggersFullSync/default.leader: stopping routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:47:01.416Z [DEBUG] TestAgent_TokenTriggersFullSync/default.leader: stopping routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:47:01.416Z [WARN]  TestAgent_TokenTriggersFullSync/default.server.serf.lan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:47:01.416Z [DEBUG] TestAgent_TokenTriggersFullSync/default.leader: stopped routine: routine="legacy ACL token upgrade"
>         writer.go:29: 2020-02-23T02:47:01.416Z [DEBUG] TestAgent_TokenTriggersFullSync/default.leader: stopped routine: routine="acl token reaping"
>         writer.go:29: 2020-02-23T02:47:01.416Z [DEBUG] TestAgent_TokenTriggersFullSync/default.leader: stopped routine: routine="CA root pruning"
>         writer.go:29: 2020-02-23T02:47:01.418Z [WARN]  TestAgent_TokenTriggersFullSync/default.server.serf.wan: serf: Shutdown without a Leave
>         writer.go:29: 2020-02-23T02:47:01.419Z [INFO]  TestAgent_TokenTriggersFullSync/default.server.router.manager: shutting down
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: consul server down
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: shutdown complete
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: Stopping server: protocol=DNS address=127.0.0.1:16565 network=tcp
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: Stopping server: protocol=DNS address=127.0.0.1:16565 network=udp
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: Stopping server: protocol=HTTP address=127.0.0.1:16566 network=tcp
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: Waiting for endpoints to shut down
>         writer.go:29: 2020-02-23T02:47:01.420Z [INFO]  TestAgent_TokenTriggersFullSync/default: Endpoints down
> === CONT  TestAgent_FailCheck
> --- PASS: TestAgent_FailCheck (0.11s)
>     writer.go:29: 2020-02-23T02:47:01.428Z [WARN]  TestAgent_FailCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:01.428Z [DEBUG] TestAgent_FailCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:01.428Z [DEBUG] TestAgent_FailCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:01.437Z [INFO]  TestAgent_FailCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fca36387-af32-5fc5-0d08-9587ed2bdd66 Address:127.0.0.1:16612}]"
>     writer.go:29: 2020-02-23T02:47:01.437Z [INFO]  TestAgent_FailCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16612 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:01.438Z [INFO]  TestAgent_FailCheck.server.serf.wan: serf: EventMemberJoin: Node-fca36387-af32-5fc5-0d08-9587ed2bdd66.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.439Z [INFO]  TestAgent_FailCheck.server.serf.lan: serf: EventMemberJoin: Node-fca36387-af32-5fc5-0d08-9587ed2bdd66 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.439Z [INFO]  TestAgent_FailCheck.server: Handled event for server in area: event=member-join server=Node-fca36387-af32-5fc5-0d08-9587ed2bdd66.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.439Z [INFO]  TestAgent_FailCheck.server: Adding LAN server: server="Node-fca36387-af32-5fc5-0d08-9587ed2bdd66 (Addr: tcp/127.0.0.1:16612) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:01.439Z [INFO]  TestAgent_FailCheck: Started DNS server: address=127.0.0.1:16607 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.439Z [INFO]  TestAgent_FailCheck: Started DNS server: address=127.0.0.1:16607 network=udp
>     writer.go:29: 2020-02-23T02:47:01.440Z [INFO]  TestAgent_FailCheck: Started HTTP server: address=127.0.0.1:16608 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.440Z [INFO]  TestAgent_FailCheck: started state syncer
>     writer.go:29: 2020-02-23T02:47:01.479Z [WARN]  TestAgent_FailCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:01.479Z [INFO]  TestAgent_FailCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16612 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:01.482Z [DEBUG] TestAgent_FailCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:01.482Z [DEBUG] TestAgent_FailCheck.server.raft: vote granted: from=fca36387-af32-5fc5-0d08-9587ed2bdd66 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:01.482Z [INFO]  TestAgent_FailCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:01.482Z [INFO]  TestAgent_FailCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16612 [Leader]"
>     writer.go:29: 2020-02-23T02:47:01.482Z [INFO]  TestAgent_FailCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:01.482Z [INFO]  TestAgent_FailCheck.server: New leader elected: payload=Node-fca36387-af32-5fc5-0d08-9587ed2bdd66
>     writer.go:29: 2020-02-23T02:47:01.490Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:01.497Z [INFO]  TestAgent_FailCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:01.498Z [INFO]  TestAgent_FailCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.498Z [DEBUG] TestAgent_FailCheck.server: Skipping self join check for node since the cluster is too small: node=Node-fca36387-af32-5fc5-0d08-9587ed2bdd66
>     writer.go:29: 2020-02-23T02:47:01.498Z [INFO]  TestAgent_FailCheck.server: member joined, marking health alive: member=Node-fca36387-af32-5fc5-0d08-9587ed2bdd66
>     writer.go:29: 2020-02-23T02:47:01.524Z [DEBUG] TestAgent_FailCheck: Check status updated: check=test status=critical
>     writer.go:29: 2020-02-23T02:47:01.527Z [INFO]  TestAgent_FailCheck: Synced node info
>     writer.go:29: 2020-02-23T02:47:01.529Z [INFO]  TestAgent_FailCheck: Synced check: check=test
>     writer.go:29: 2020-02-23T02:47:01.529Z [INFO]  TestAgent_FailCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:01.529Z [INFO]  TestAgent_FailCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:01.529Z [DEBUG] TestAgent_FailCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.529Z [WARN]  TestAgent_FailCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.529Z [ERROR] TestAgent_FailCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:01.529Z [DEBUG] TestAgent_FailCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.530Z [WARN]  TestAgent_FailCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: consul server down
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: Stopping server: protocol=DNS address=127.0.0.1:16607 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: Stopping server: protocol=DNS address=127.0.0.1:16607 network=udp
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: Stopping server: protocol=HTTP address=127.0.0.1:16608 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:01.533Z [INFO]  TestAgent_FailCheck: Endpoints down
> === CONT  TestAgent_WarnCheck_ACLDeny
> === RUN   TestAgent_FailCheck_ACLDeny/no_token
> === RUN   TestAgent_FailCheck_ACLDeny/root_token
> --- PASS: TestAgent_FailCheck_ACLDeny (0.39s)
>     writer.go:29: 2020-02-23T02:47:01.169Z [WARN]  TestAgent_FailCheck_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:01.169Z [WARN]  TestAgent_FailCheck_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:01.169Z [DEBUG] TestAgent_FailCheck_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:01.170Z [DEBUG] TestAgent_FailCheck_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:01.179Z [INFO]  TestAgent_FailCheck_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a9c17b53-6825-6a94-c6ec-d4cbe652d998 Address:127.0.0.1:16624}]"
>     writer.go:29: 2020-02-23T02:47:01.179Z [INFO]  TestAgent_FailCheck_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.179Z [INFO]  TestAgent_FailCheck_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny: Started DNS server: address=127.0.0.1:16619 network=udp
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16624 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny.server: Adding LAN server: server="Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998 (Addr: tcp/127.0.0.1:16624) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny.server: Handled event for server in area: event=member-join server=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny: Started DNS server: address=127.0.0.1:16619 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny: Started HTTP server: address=127.0.0.1:16620 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.180Z [INFO]  TestAgent_FailCheck_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:01.241Z [WARN]  TestAgent_FailCheck_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:01.241Z [INFO]  TestAgent_FailCheck_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16624 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:01.244Z [DEBUG] TestAgent_FailCheck_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:01.244Z [DEBUG] TestAgent_FailCheck_ACLDeny.server.raft: vote granted: from=a9c17b53-6825-6a94-c6ec-d4cbe652d998 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:01.244Z [INFO]  TestAgent_FailCheck_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:01.244Z [INFO]  TestAgent_FailCheck_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16624 [Leader]"
>     writer.go:29: 2020-02-23T02:47:01.244Z [INFO]  TestAgent_FailCheck_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:01.244Z [INFO]  TestAgent_FailCheck_ACLDeny.server: New leader elected: payload=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     writer.go:29: 2020-02-23T02:47:01.246Z [INFO]  TestAgent_FailCheck_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:01.248Z [INFO]  TestAgent_FailCheck_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:01.248Z [WARN]  TestAgent_FailCheck_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:01.251Z [INFO]  TestAgent_FailCheck_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:01.254Z [INFO]  TestAgent_FailCheck_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:01.254Z [INFO]  TestAgent_FailCheck_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.254Z [INFO]  TestAgent_FailCheck_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.254Z [INFO]  TestAgent_FailCheck_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     writer.go:29: 2020-02-23T02:47:01.254Z [INFO]  TestAgent_FailCheck_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998.dc1
>     writer.go:29: 2020-02-23T02:47:01.254Z [INFO]  TestAgent_FailCheck_ACLDeny.server: Handled event for server in area: event=member-update server=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.258Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:01.265Z [INFO]  TestAgent_FailCheck_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:01.265Z [INFO]  TestAgent_FailCheck_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.265Z [DEBUG] TestAgent_FailCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     writer.go:29: 2020-02-23T02:47:01.265Z [INFO]  TestAgent_FailCheck_ACLDeny.server: member joined, marking health alive: member=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     writer.go:29: 2020-02-23T02:47:01.268Z [DEBUG] TestAgent_FailCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     writer.go:29: 2020-02-23T02:47:01.532Z [DEBUG] TestAgent_FailCheck_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:01.535Z [INFO]  TestAgent_FailCheck_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:01.540Z [DEBUG] TestAgent_FailCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     writer.go:29: 2020-02-23T02:47:01.540Z [DEBUG] TestAgent_FailCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-a9c17b53-6825-6a94-c6ec-d4cbe652d998
>     --- PASS: TestAgent_FailCheck_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny: Check status updated: check=test status=critical
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:01.541Z [WARN]  TestAgent_FailCheck_ACLDeny: Check registration blocked by ACLs: check=test accessorID=
>     --- PASS: TestAgent_FailCheck_ACLDeny/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:01.541Z [INFO]  TestAgent_FailCheck_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:01.541Z [INFO]  TestAgent_FailCheck_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.541Z [WARN]  TestAgent_FailCheck_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.541Z [DEBUG] TestAgent_FailCheck_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.543Z [WARN]  TestAgent_FailCheck_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16619 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16619 network=udp
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16620 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:01.545Z [INFO]  TestAgent_FailCheck_ACLDeny: Endpoints down
> === CONT  TestAgent_WarnCheck
> === RUN   TestAgent_WarnCheck_ACLDeny/no_token
> === RUN   TestAgent_WarnCheck_ACLDeny/root_token
> --- PASS: TestAgent_WarnCheck_ACLDeny (0.23s)
>     writer.go:29: 2020-02-23T02:47:01.542Z [WARN]  TestAgent_WarnCheck_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:01.542Z [WARN]  TestAgent_WarnCheck_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:01.542Z [DEBUG] TestAgent_WarnCheck_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:01.543Z [DEBUG] TestAgent_WarnCheck_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:01.563Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3 Address:127.0.0.1:16618}]"
>     writer.go:29: 2020-02-23T02:47:01.563Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16618 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:01.564Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.565Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.565Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: Handled event for server in area: event=member-join server=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.565Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: Adding LAN server: server="Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3 (Addr: tcp/127.0.0.1:16618) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:01.565Z [INFO]  TestAgent_WarnCheck_ACLDeny: Started DNS server: address=127.0.0.1:16613 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.566Z [INFO]  TestAgent_WarnCheck_ACLDeny: Started DNS server: address=127.0.0.1:16613 network=udp
>     writer.go:29: 2020-02-23T02:47:01.566Z [INFO]  TestAgent_WarnCheck_ACLDeny: Started HTTP server: address=127.0.0.1:16614 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.566Z [INFO]  TestAgent_WarnCheck_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:01.627Z [WARN]  TestAgent_WarnCheck_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:01.628Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16618 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:01.631Z [DEBUG] TestAgent_WarnCheck_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:01.631Z [DEBUG] TestAgent_WarnCheck_ACLDeny.server.raft: vote granted: from=c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:01.631Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:01.631Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16618 [Leader]"
>     writer.go:29: 2020-02-23T02:47:01.631Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:01.631Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: New leader elected: payload=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     writer.go:29: 2020-02-23T02:47:01.633Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:01.635Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:01.635Z [WARN]  TestAgent_WarnCheck_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:01.637Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:01.647Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:01.647Z [INFO]  TestAgent_WarnCheck_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.647Z [INFO]  TestAgent_WarnCheck_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.647Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     writer.go:29: 2020-02-23T02:47:01.647Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3.dc1
>     writer.go:29: 2020-02-23T02:47:01.648Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: Handled event for server in area: event=member-update server=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.651Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:01.659Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:01.659Z [INFO]  TestAgent_WarnCheck_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.659Z [DEBUG] TestAgent_WarnCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     writer.go:29: 2020-02-23T02:47:01.659Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: member joined, marking health alive: member=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     writer.go:29: 2020-02-23T02:47:01.662Z [DEBUG] TestAgent_WarnCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     writer.go:29: 2020-02-23T02:47:01.755Z [DEBUG] TestAgent_WarnCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     writer.go:29: 2020-02-23T02:47:01.755Z [DEBUG] TestAgent_WarnCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-c8b97a0f-ff81-1ca3-c94f-a3fcb4f17fb3
>     --- PASS: TestAgent_WarnCheck_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:01.756Z [DEBUG] TestAgent_WarnCheck_ACLDeny: Check status updated: check=test status=warning
>     writer.go:29: 2020-02-23T02:47:01.759Z [INFO]  TestAgent_WarnCheck_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:01.759Z [WARN]  TestAgent_WarnCheck_ACLDeny: Check registration blocked by ACLs: check=test accessorID=
>     --- PASS: TestAgent_WarnCheck_ACLDeny/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:01.759Z [INFO]  TestAgent_WarnCheck_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:01.759Z [INFO]  TestAgent_WarnCheck_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:01.759Z [DEBUG] TestAgent_WarnCheck_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.759Z [DEBUG] TestAgent_WarnCheck_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.759Z [DEBUG] TestAgent_WarnCheck_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.759Z [WARN]  TestAgent_WarnCheck_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.759Z [ERROR] TestAgent_WarnCheck_ACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:01.759Z [DEBUG] TestAgent_WarnCheck_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.759Z [DEBUG] TestAgent_WarnCheck_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.759Z [DEBUG] TestAgent_WarnCheck_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.761Z [WARN]  TestAgent_WarnCheck_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16613 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16613 network=udp
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16614 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:01.763Z [INFO]  TestAgent_WarnCheck_ACLDeny: Endpoints down
> === CONT  TestAgent_PassCheck_ACLDeny
> --- PASS: TestAgent_WarnCheck (0.41s)
>     writer.go:29: 2020-02-23T02:47:01.552Z [WARN]  TestAgent_WarnCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:01.552Z [DEBUG] TestAgent_WarnCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:01.552Z [DEBUG] TestAgent_WarnCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:01.562Z [INFO]  TestAgent_WarnCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:06e08088-bf41-69ed-e122-363b4c79d994 Address:127.0.0.1:16630}]"
>     writer.go:29: 2020-02-23T02:47:01.562Z [INFO]  TestAgent_WarnCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16630 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:01.562Z [INFO]  TestAgent_WarnCheck.server.serf.wan: serf: EventMemberJoin: Node-06e08088-bf41-69ed-e122-363b4c79d994.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.563Z [INFO]  TestAgent_WarnCheck.server.serf.lan: serf: EventMemberJoin: Node-06e08088-bf41-69ed-e122-363b4c79d994 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.564Z [INFO]  TestAgent_WarnCheck.server: Handled event for server in area: event=member-join server=Node-06e08088-bf41-69ed-e122-363b4c79d994.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.564Z [INFO]  TestAgent_WarnCheck.server: Adding LAN server: server="Node-06e08088-bf41-69ed-e122-363b4c79d994 (Addr: tcp/127.0.0.1:16630) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:01.564Z [INFO]  TestAgent_WarnCheck: Started DNS server: address=127.0.0.1:16625 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.564Z [INFO]  TestAgent_WarnCheck: Started DNS server: address=127.0.0.1:16625 network=udp
>     writer.go:29: 2020-02-23T02:47:01.565Z [INFO]  TestAgent_WarnCheck: Started HTTP server: address=127.0.0.1:16626 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.565Z [INFO]  TestAgent_WarnCheck: started state syncer
>     writer.go:29: 2020-02-23T02:47:01.603Z [WARN]  TestAgent_WarnCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:01.603Z [INFO]  TestAgent_WarnCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16630 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:01.606Z [DEBUG] TestAgent_WarnCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:01.606Z [DEBUG] TestAgent_WarnCheck.server.raft: vote granted: from=06e08088-bf41-69ed-e122-363b4c79d994 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:01.606Z [INFO]  TestAgent_WarnCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:01.606Z [INFO]  TestAgent_WarnCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16630 [Leader]"
>     writer.go:29: 2020-02-23T02:47:01.606Z [INFO]  TestAgent_WarnCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:01.606Z [INFO]  TestAgent_WarnCheck.server: New leader elected: payload=Node-06e08088-bf41-69ed-e122-363b4c79d994
>     writer.go:29: 2020-02-23T02:47:01.613Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:01.621Z [INFO]  TestAgent_WarnCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:01.621Z [INFO]  TestAgent_WarnCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.622Z [DEBUG] TestAgent_WarnCheck.server: Skipping self join check for node since the cluster is too small: node=Node-06e08088-bf41-69ed-e122-363b4c79d994
>     writer.go:29: 2020-02-23T02:47:01.622Z [INFO]  TestAgent_WarnCheck.server: member joined, marking health alive: member=Node-06e08088-bf41-69ed-e122-363b4c79d994
>     writer.go:29: 2020-02-23T02:47:01.729Z [DEBUG] TestAgent_WarnCheck: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:01.732Z [INFO]  TestAgent_WarnCheck: Synced node info
>     writer.go:29: 2020-02-23T02:47:01.946Z [DEBUG] TestAgent_WarnCheck: Check status updated: check=test status=warning
>     writer.go:29: 2020-02-23T02:47:01.946Z [DEBUG] TestAgent_WarnCheck: Node info in sync
>     writer.go:29: 2020-02-23T02:47:01.948Z [INFO]  TestAgent_WarnCheck: Synced check: check=test
>     writer.go:29: 2020-02-23T02:47:01.948Z [INFO]  TestAgent_WarnCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:01.948Z [INFO]  TestAgent_WarnCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:01.948Z [DEBUG] TestAgent_WarnCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.948Z [WARN]  TestAgent_WarnCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.948Z [DEBUG] TestAgent_WarnCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.950Z [WARN]  TestAgent_WarnCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:01.951Z [INFO]  TestAgent_WarnCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:01.951Z [INFO]  TestAgent_WarnCheck: consul server down
>     writer.go:29: 2020-02-23T02:47:01.951Z [INFO]  TestAgent_WarnCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:47:01.951Z [INFO]  TestAgent_WarnCheck: Stopping server: protocol=DNS address=127.0.0.1:16625 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.952Z [INFO]  TestAgent_WarnCheck: Stopping server: protocol=DNS address=127.0.0.1:16625 network=udp
>     writer.go:29: 2020-02-23T02:47:01.952Z [INFO]  TestAgent_WarnCheck: Stopping server: protocol=HTTP address=127.0.0.1:16626 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.952Z [INFO]  TestAgent_WarnCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:01.952Z [INFO]  TestAgent_WarnCheck: Endpoints down
> === CONT  TestAgent_PassCheck
> === RUN   TestAgent_PassCheck_ACLDeny/no_token
> === RUN   TestAgent_PassCheck_ACLDeny/root_token
> --- PASS: TestAgent_PassCheck_ACLDeny (0.29s)
>     writer.go:29: 2020-02-23T02:47:01.771Z [WARN]  TestAgent_PassCheck_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:01.771Z [WARN]  TestAgent_PassCheck_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:01.771Z [DEBUG] TestAgent_PassCheck_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:01.772Z [DEBUG] TestAgent_PassCheck_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:01.783Z [INFO]  TestAgent_PassCheck_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fc421f57-4022-582d-a684-0da3a6c9e9c1 Address:127.0.0.1:16606}]"
>     writer.go:29: 2020-02-23T02:47:01.783Z [INFO]  TestAgent_PassCheck_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16606 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:01.783Z [INFO]  TestAgent_PassCheck_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-fc421f57-4022-582d-a684-0da3a6c9e9c1.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-fc421f57-4022-582d-a684-0da3a6c9e9c1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny.server: Adding LAN server: server="Node-fc421f57-4022-582d-a684-0da3a6c9e9c1 (Addr: tcp/127.0.0.1:16606) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny: Started DNS server: address=127.0.0.1:16601 network=udp
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny.server: Handled event for server in area: event=member-join server=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny: Started DNS server: address=127.0.0.1:16601 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny: Started HTTP server: address=127.0.0.1:16602 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.784Z [INFO]  TestAgent_PassCheck_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:01.820Z [WARN]  TestAgent_PassCheck_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:01.820Z [INFO]  TestAgent_PassCheck_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16606 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:01.823Z [DEBUG] TestAgent_PassCheck_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:01.823Z [DEBUG] TestAgent_PassCheck_ACLDeny.server.raft: vote granted: from=fc421f57-4022-582d-a684-0da3a6c9e9c1 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:01.823Z [INFO]  TestAgent_PassCheck_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:01.823Z [INFO]  TestAgent_PassCheck_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16606 [Leader]"
>     writer.go:29: 2020-02-23T02:47:01.823Z [INFO]  TestAgent_PassCheck_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:01.823Z [INFO]  TestAgent_PassCheck_ACLDeny.server: New leader elected: payload=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     writer.go:29: 2020-02-23T02:47:01.826Z [INFO]  TestAgent_PassCheck_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:01.827Z [INFO]  TestAgent_PassCheck_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:01.827Z [WARN]  TestAgent_PassCheck_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:01.830Z [INFO]  TestAgent_PassCheck_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:01.833Z [INFO]  TestAgent_PassCheck_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:01.833Z [INFO]  TestAgent_PassCheck_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:01.833Z [INFO]  TestAgent_PassCheck_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:01.833Z [INFO]  TestAgent_PassCheck_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     writer.go:29: 2020-02-23T02:47:01.834Z [INFO]  TestAgent_PassCheck_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-fc421f57-4022-582d-a684-0da3a6c9e9c1.dc1
>     writer.go:29: 2020-02-23T02:47:01.834Z [INFO]  TestAgent_PassCheck_ACLDeny.server: Handled event for server in area: event=member-update server=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.838Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:01.845Z [INFO]  TestAgent_PassCheck_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:01.845Z [INFO]  TestAgent_PassCheck_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:01.845Z [DEBUG] TestAgent_PassCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     writer.go:29: 2020-02-23T02:47:01.845Z [INFO]  TestAgent_PassCheck_ACLDeny.server: member joined, marking health alive: member=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     writer.go:29: 2020-02-23T02:47:01.848Z [DEBUG] TestAgent_PassCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     writer.go:29: 2020-02-23T02:47:01.927Z [DEBUG] TestAgent_PassCheck_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:01.930Z [INFO]  TestAgent_PassCheck_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:01.930Z [DEBUG] TestAgent_PassCheck_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:02.052Z [DEBUG] TestAgent_PassCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     writer.go:29: 2020-02-23T02:47:02.052Z [DEBUG] TestAgent_PassCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-fc421f57-4022-582d-a684-0da3a6c9e9c1
>     --- PASS: TestAgent_PassCheck_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:02.052Z [DEBUG] TestAgent_PassCheck_ACLDeny: Check status updated: check=test status=passing
>     writer.go:29: 2020-02-23T02:47:02.052Z [DEBUG] TestAgent_PassCheck_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:02.052Z [WARN]  TestAgent_PassCheck_ACLDeny: Check registration blocked by ACLs: check=test accessorID=
>     --- PASS: TestAgent_PassCheck_ACLDeny/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:02.052Z [INFO]  TestAgent_PassCheck_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.052Z [INFO]  TestAgent_PassCheck_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.052Z [DEBUG] TestAgent_PassCheck_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.053Z [DEBUG] TestAgent_PassCheck_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.053Z [DEBUG] TestAgent_PassCheck_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.053Z [WARN]  TestAgent_PassCheck_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.053Z [DEBUG] TestAgent_PassCheck_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.053Z [DEBUG] TestAgent_PassCheck_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.053Z [DEBUG] TestAgent_PassCheck_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.055Z [WARN]  TestAgent_PassCheck_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16601 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16601 network=udp
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16602 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.057Z [INFO]  TestAgent_PassCheck_ACLDeny: Endpoints down
> === CONT  TestAgent_DeregisterCheckACLDeny
> === RUN   TestAgent_DeregisterCheckACLDeny/no_token
> === RUN   TestAgent_DeregisterCheckACLDeny/root_token
> --- PASS: TestAgent_DeregisterCheckACLDeny (0.18s)
>     writer.go:29: 2020-02-23T02:47:02.064Z [WARN]  TestAgent_DeregisterCheckACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:02.064Z [WARN]  TestAgent_DeregisterCheckACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.065Z [DEBUG] TestAgent_DeregisterCheckACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.065Z [DEBUG] TestAgent_DeregisterCheckACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.074Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:b6d7bcb2-be06-c901-1fdf-1b6eff531c33 Address:127.0.0.1:16660}]"
>     writer.go:29: 2020-02-23T02:47:02.074Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16660 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.075Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.serf.wan: serf: EventMemberJoin: Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.075Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.serf.lan: serf: EventMemberJoin: Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.075Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Handled event for server in area: event=member-join server=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.075Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Adding LAN server: server="Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33 (Addr: tcp/127.0.0.1:16660) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.076Z [INFO]  TestAgent_DeregisterCheckACLDeny: Started DNS server: address=127.0.0.1:16655 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.076Z [INFO]  TestAgent_DeregisterCheckACLDeny: Started DNS server: address=127.0.0.1:16655 network=udp
>     writer.go:29: 2020-02-23T02:47:02.076Z [INFO]  TestAgent_DeregisterCheckACLDeny: Started HTTP server: address=127.0.0.1:16656 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.076Z [INFO]  TestAgent_DeregisterCheckACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.117Z [WARN]  TestAgent_DeregisterCheckACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.117Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16660 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.120Z [DEBUG] TestAgent_DeregisterCheckACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.120Z [DEBUG] TestAgent_DeregisterCheckACLDeny.server.raft: vote granted: from=b6d7bcb2-be06-c901-1fdf-1b6eff531c33 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.120Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.120Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16660 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.120Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.120Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: New leader elected: payload=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.122Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:02.124Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:02.124Z [WARN]  TestAgent_DeregisterCheckACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:02.125Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:02.126Z [WARN]  TestAgent_DeregisterCheckACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:02.130Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:02.130Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:02.131Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:02.131Z [INFO]  TestAgent_DeregisterCheckACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.131Z [INFO]  TestAgent_DeregisterCheckACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.131Z [DEBUG] TestAgent_DeregisterCheckACLDeny.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:02.131Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.132Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33.dc1
>     writer.go:29: 2020-02-23T02:47:02.132Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Handled event for server in area: event=member-update server=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.132Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:02.132Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.133Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33.dc1
>     writer.go:29: 2020-02-23T02:47:02.133Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: Handled event for server in area: event=member-update server=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.136Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.143Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.143Z [INFO]  TestAgent_DeregisterCheckACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.143Z [DEBUG] TestAgent_DeregisterCheckACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.143Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: member joined, marking health alive: member=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.145Z [DEBUG] TestAgent_DeregisterCheckACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.145Z [DEBUG] TestAgent_DeregisterCheckACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     writer.go:29: 2020-02-23T02:47:02.234Z [DEBUG] TestAgent_DeregisterCheckACLDeny.acl: dropping node from result due to ACLs: node=Node-b6d7bcb2-be06-c901-1fdf-1b6eff531c33
>     --- PASS: TestAgent_DeregisterCheckACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:02.234Z [DEBUG] TestAgent_DeregisterCheckACLDeny: removed check: check=test
>     writer.go:29: 2020-02-23T02:47:02.237Z [INFO]  TestAgent_DeregisterCheckACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:02.237Z [INFO]  TestAgent_DeregisterCheckACLDeny: Deregistered check: check=test
>     --- PASS: TestAgent_DeregisterCheckACLDeny/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:02.237Z [INFO]  TestAgent_DeregisterCheckACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.237Z [INFO]  TestAgent_DeregisterCheckACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.237Z [DEBUG] TestAgent_DeregisterCheckACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.237Z [DEBUG] TestAgent_DeregisterCheckACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.237Z [DEBUG] TestAgent_DeregisterCheckACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.237Z [WARN]  TestAgent_DeregisterCheckACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.238Z [ERROR] TestAgent_DeregisterCheckACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:02.238Z [DEBUG] TestAgent_DeregisterCheckACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.238Z [DEBUG] TestAgent_DeregisterCheckACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.238Z [DEBUG] TestAgent_DeregisterCheckACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.239Z [WARN]  TestAgent_DeregisterCheckACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16655 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16655 network=udp
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16656 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.241Z [INFO]  TestAgent_DeregisterCheckACLDeny: Endpoints down
> === CONT  TestAgent_DeregisterCheck
> --- PASS: TestAgent_PassCheck (0.34s)
>     writer.go:29: 2020-02-23T02:47:01.959Z [WARN]  TestAgent_PassCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:01.959Z [DEBUG] TestAgent_PassCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:01.960Z [DEBUG] TestAgent_PassCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:01.970Z [INFO]  TestAgent_PassCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:8ef34eaf-3156-644a-b5f4-baeef86639a5 Address:127.0.0.1:16636}]"
>     writer.go:29: 2020-02-23T02:47:01.970Z [INFO]  TestAgent_PassCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16636 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:01.970Z [INFO]  TestAgent_PassCheck.server.serf.wan: serf: EventMemberJoin: Node-8ef34eaf-3156-644a-b5f4-baeef86639a5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.971Z [INFO]  TestAgent_PassCheck.server.serf.lan: serf: EventMemberJoin: Node-8ef34eaf-3156-644a-b5f4-baeef86639a5 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:01.971Z [INFO]  TestAgent_PassCheck.server: Adding LAN server: server="Node-8ef34eaf-3156-644a-b5f4-baeef86639a5 (Addr: tcp/127.0.0.1:16636) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:01.971Z [INFO]  TestAgent_PassCheck.server: Handled event for server in area: event=member-join server=Node-8ef34eaf-3156-644a-b5f4-baeef86639a5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:01.971Z [INFO]  TestAgent_PassCheck: Started DNS server: address=127.0.0.1:16631 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.972Z [INFO]  TestAgent_PassCheck: Started DNS server: address=127.0.0.1:16631 network=udp
>     writer.go:29: 2020-02-23T02:47:01.972Z [INFO]  TestAgent_PassCheck: Started HTTP server: address=127.0.0.1:16632 network=tcp
>     writer.go:29: 2020-02-23T02:47:01.972Z [INFO]  TestAgent_PassCheck: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.011Z [WARN]  TestAgent_PassCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.011Z [INFO]  TestAgent_PassCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16636 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.015Z [DEBUG] TestAgent_PassCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.015Z [DEBUG] TestAgent_PassCheck.server.raft: vote granted: from=8ef34eaf-3156-644a-b5f4-baeef86639a5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.015Z [INFO]  TestAgent_PassCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.015Z [INFO]  TestAgent_PassCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16636 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.015Z [INFO]  TestAgent_PassCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.015Z [INFO]  TestAgent_PassCheck.server: New leader elected: payload=Node-8ef34eaf-3156-644a-b5f4-baeef86639a5
>     writer.go:29: 2020-02-23T02:47:02.022Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.031Z [INFO]  TestAgent_PassCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.031Z [INFO]  TestAgent_PassCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.031Z [DEBUG] TestAgent_PassCheck.server: Skipping self join check for node since the cluster is too small: node=Node-8ef34eaf-3156-644a-b5f4-baeef86639a5
>     writer.go:29: 2020-02-23T02:47:02.031Z [INFO]  TestAgent_PassCheck.server: member joined, marking health alive: member=Node-8ef34eaf-3156-644a-b5f4-baeef86639a5
>     writer.go:29: 2020-02-23T02:47:02.279Z [DEBUG] TestAgent_PassCheck: Check status updated: check=test status=passing
>     writer.go:29: 2020-02-23T02:47:02.282Z [INFO]  TestAgent_PassCheck: Synced node info
>     writer.go:29: 2020-02-23T02:47:02.284Z [INFO]  TestAgent_PassCheck: Synced check: check=test
>     writer.go:29: 2020-02-23T02:47:02.284Z [INFO]  TestAgent_PassCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.284Z [INFO]  TestAgent_PassCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.284Z [DEBUG] TestAgent_PassCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.284Z [WARN]  TestAgent_PassCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.284Z [ERROR] TestAgent_PassCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:02.284Z [DEBUG] TestAgent_PassCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.286Z [WARN]  TestAgent_PassCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: consul server down
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: Stopping server: protocol=DNS address=127.0.0.1:16631 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: Stopping server: protocol=DNS address=127.0.0.1:16631 network=udp
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: Stopping server: protocol=HTTP address=127.0.0.1:16632 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.290Z [INFO]  TestAgent_PassCheck: Endpoints down
> === CONT  TestAgent_RegisterCheck_ACLDeny
> --- PASS: TestAgent_DeregisterCheck (0.19s)
>     writer.go:29: 2020-02-23T02:47:02.287Z [WARN]  TestAgent_DeregisterCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.287Z [DEBUG] TestAgent_DeregisterCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.288Z [DEBUG] TestAgent_DeregisterCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.303Z [INFO]  TestAgent_DeregisterCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f62b83f1-bfb7-0a9e-d439-ccbb1277284b Address:127.0.0.1:16648}]"
>     writer.go:29: 2020-02-23T02:47:02.303Z [INFO]  TestAgent_DeregisterCheck.server.serf.wan: serf: EventMemberJoin: Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.306Z [INFO]  TestAgent_DeregisterCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16648 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.309Z [INFO]  TestAgent_DeregisterCheck.server.serf.lan: serf: EventMemberJoin: Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.310Z [INFO]  TestAgent_DeregisterCheck: Started DNS server: address=127.0.0.1:16643 network=udp
>     writer.go:29: 2020-02-23T02:47:02.310Z [INFO]  TestAgent_DeregisterCheck.server: Adding LAN server: server="Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b (Addr: tcp/127.0.0.1:16648) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.311Z [INFO]  TestAgent_DeregisterCheck.server: Handled event for server in area: event=member-join server=Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.319Z [INFO]  TestAgent_DeregisterCheck: Started DNS server: address=127.0.0.1:16643 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.320Z [INFO]  TestAgent_DeregisterCheck: Started HTTP server: address=127.0.0.1:16644 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.320Z [INFO]  TestAgent_DeregisterCheck: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.342Z [WARN]  TestAgent_DeregisterCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.342Z [INFO]  TestAgent_DeregisterCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16648 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.346Z [DEBUG] TestAgent_DeregisterCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.346Z [DEBUG] TestAgent_DeregisterCheck.server.raft: vote granted: from=f62b83f1-bfb7-0a9e-d439-ccbb1277284b term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.346Z [INFO]  TestAgent_DeregisterCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.346Z [INFO]  TestAgent_DeregisterCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16648 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.346Z [INFO]  TestAgent_DeregisterCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.346Z [INFO]  TestAgent_DeregisterCheck.server: New leader elected: payload=Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b
>     writer.go:29: 2020-02-23T02:47:02.361Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.369Z [INFO]  TestAgent_DeregisterCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.369Z [INFO]  TestAgent_DeregisterCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.369Z [DEBUG] TestAgent_DeregisterCheck.server: Skipping self join check for node since the cluster is too small: node=Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b
>     writer.go:29: 2020-02-23T02:47:02.369Z [INFO]  TestAgent_DeregisterCheck.server: member joined, marking health alive: member=Node-f62b83f1-bfb7-0a9e-d439-ccbb1277284b
>     writer.go:29: 2020-02-23T02:47:02.420Z [DEBUG] TestAgent_DeregisterCheck: removed check: check=test
>     writer.go:29: 2020-02-23T02:47:02.422Z [INFO]  TestAgent_DeregisterCheck: Synced node info
>     writer.go:29: 2020-02-23T02:47:02.424Z [INFO]  TestAgent_DeregisterCheck: Deregistered check: check=test
>     writer.go:29: 2020-02-23T02:47:02.424Z [INFO]  TestAgent_DeregisterCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.424Z [INFO]  TestAgent_DeregisterCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.424Z [DEBUG] TestAgent_DeregisterCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.424Z [WARN]  TestAgent_DeregisterCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.424Z [ERROR] TestAgent_DeregisterCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:02.424Z [DEBUG] TestAgent_DeregisterCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.426Z [WARN]  TestAgent_DeregisterCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: consul server down
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: Stopping server: protocol=DNS address=127.0.0.1:16643 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: Stopping server: protocol=DNS address=127.0.0.1:16643 network=udp
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: Stopping server: protocol=HTTP address=127.0.0.1:16644 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.428Z [INFO]  TestAgent_DeregisterCheck: Endpoints down
> === CONT  TestAgent_RegisterCheck_BadStatus
> === RUN   TestAgent_RegisterCheck_ACLDeny/no_token_-_node_check
> === RUN   TestAgent_RegisterCheck_ACLDeny/svc_token_-_node_check
> === RUN   TestAgent_RegisterCheck_ACLDeny/node_token_-_node_check
> === RUN   TestAgent_RegisterCheck_ACLDeny/no_token_-_svc_check
> === RUN   TestAgent_RegisterCheck_ACLDeny/node_token_-_svc_check
> === RUN   TestAgent_RegisterCheck_ACLDeny/svc_token_-_svc_check
> --- PASS: TestAgent_RegisterCheck_ACLDeny (0.27s)
>     writer.go:29: 2020-02-23T02:47:02.318Z [WARN]  TestAgent_RegisterCheck_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.318Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.318Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.328Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:bc70d9dd-d2e3-2d80-904f-718043e794f3 Address:127.0.0.1:16642}]"
>     writer.go:29: 2020-02-23T02:47:02.329Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16642 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.329Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-bc70d9dd-d2e3-2d80-904f-718043e794f3.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.329Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-bc70d9dd-d2e3-2d80-904f-718043e794f3 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.330Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: Adding LAN server: server="Node-bc70d9dd-d2e3-2d80-904f-718043e794f3 (Addr: tcp/127.0.0.1:16642) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.330Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: Handled event for server in area: event=member-join server=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.330Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Started DNS server: address=127.0.0.1:16637 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.330Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Started DNS server: address=127.0.0.1:16637 network=udp
>     writer.go:29: 2020-02-23T02:47:02.330Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Started HTTP server: address=127.0.0.1:16638 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.330Z [INFO]  TestAgent_RegisterCheck_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.384Z [WARN]  TestAgent_RegisterCheck_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.384Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16642 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.387Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.387Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.server.raft: vote granted: from=bc70d9dd-d2e3-2d80-904f-718043e794f3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.387Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.388Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16642 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.389Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.390Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:02.391Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: New leader elected: payload=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.391Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:02.391Z [WARN]  TestAgent_RegisterCheck_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:02.394Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:02.397Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:02.397Z [INFO]  TestAgent_RegisterCheck_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.397Z [INFO]  TestAgent_RegisterCheck_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.397Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.397Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-bc70d9dd-d2e3-2d80-904f-718043e794f3.dc1
>     writer.go:29: 2020-02-23T02:47:02.398Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: Handled event for server in area: event=member-update server=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.401Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.408Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.408Z [INFO]  TestAgent_RegisterCheck_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.408Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.408Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: member joined, marking health alive: member=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.411Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.530Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.530Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.acl: dropping node from result due to ACLs: node=Node-bc70d9dd-d2e3-2d80-904f-718043e794f3
>     writer.go:29: 2020-02-23T02:47:02.536Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:02.539Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Synced service: service=foo:1234
>     --- PASS: TestAgent_RegisterCheck_ACLDeny/no_token_-_node_check (0.00s)
>     --- PASS: TestAgent_RegisterCheck_ACLDeny/svc_token_-_node_check (0.00s)
>     writer.go:29: 2020-02-23T02:47:02.548Z [DEBUG] TestAgent_RegisterCheck_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:02.548Z [DEBUG] TestAgent_RegisterCheck_ACLDeny: Service in sync: service=foo:1234
>     writer.go:29: 2020-02-23T02:47:02.550Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Synced check: check=test
>     --- PASS: TestAgent_RegisterCheck_ACLDeny/node_token_-_node_check (0.00s)
>     --- PASS: TestAgent_RegisterCheck_ACLDeny/no_token_-_svc_check (0.00s)
>     --- PASS: TestAgent_RegisterCheck_ACLDeny/node_token_-_svc_check (0.00s)
>     writer.go:29: 2020-02-23T02:47:02.552Z [DEBUG] TestAgent_RegisterCheck_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:02.552Z [DEBUG] TestAgent_RegisterCheck_ACLDeny: Service in sync: service=foo:1234
>     writer.go:29: 2020-02-23T02:47:02.552Z [DEBUG] TestAgent_RegisterCheck_ACLDeny: Check in sync: check=test
>     writer.go:29: 2020-02-23T02:47:02.555Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Synced check: check=test2
>     --- PASS: TestAgent_RegisterCheck_ACLDeny/svc_token_-_svc_check (0.01s)
>     writer.go:29: 2020-02-23T02:47:02.555Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.555Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.555Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.555Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.555Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.555Z [WARN]  TestAgent_RegisterCheck_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.555Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:02.555Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:02.555Z [DEBUG] TestAgent_RegisterCheck_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.555Z [ERROR] TestAgent_RegisterCheck_ACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:02.557Z [WARN]  TestAgent_RegisterCheck_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16637 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16637 network=udp
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16638 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.559Z [INFO]  TestAgent_RegisterCheck_ACLDeny: Endpoints down
> === CONT  TestAgent_RegisterCheck_Passing
> --- PASS: TestAgent_RegisterCheck_Passing (0.25s)
>     writer.go:29: 2020-02-23T02:47:02.567Z [WARN]  TestAgent_RegisterCheck_Passing: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.567Z [DEBUG] TestAgent_RegisterCheck_Passing.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.567Z [DEBUG] TestAgent_RegisterCheck_Passing.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.581Z [INFO]  TestAgent_RegisterCheck_Passing.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:124b2a98-c8e5-5ded-86c0-157ab074f46e Address:127.0.0.1:16672}]"
>     writer.go:29: 2020-02-23T02:47:02.581Z [INFO]  TestAgent_RegisterCheck_Passing.server.raft: entering follower state: follower="Node at 127.0.0.1:16672 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.582Z [INFO]  TestAgent_RegisterCheck_Passing.server.serf.wan: serf: EventMemberJoin: Node-124b2a98-c8e5-5ded-86c0-157ab074f46e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.582Z [INFO]  TestAgent_RegisterCheck_Passing.server.serf.lan: serf: EventMemberJoin: Node-124b2a98-c8e5-5ded-86c0-157ab074f46e 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.582Z [INFO]  TestAgent_RegisterCheck_Passing.server: Adding LAN server: server="Node-124b2a98-c8e5-5ded-86c0-157ab074f46e (Addr: tcp/127.0.0.1:16672) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.582Z [INFO]  TestAgent_RegisterCheck_Passing: Started DNS server: address=127.0.0.1:16667 network=udp
>     writer.go:29: 2020-02-23T02:47:02.582Z [INFO]  TestAgent_RegisterCheck_Passing.server: Handled event for server in area: event=member-join server=Node-124b2a98-c8e5-5ded-86c0-157ab074f46e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.583Z [INFO]  TestAgent_RegisterCheck_Passing: Started DNS server: address=127.0.0.1:16667 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.583Z [INFO]  TestAgent_RegisterCheck_Passing: Started HTTP server: address=127.0.0.1:16668 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.583Z [INFO]  TestAgent_RegisterCheck_Passing: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.641Z [WARN]  TestAgent_RegisterCheck_Passing.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.641Z [INFO]  TestAgent_RegisterCheck_Passing.server.raft: entering candidate state: node="Node at 127.0.0.1:16672 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.644Z [DEBUG] TestAgent_RegisterCheck_Passing.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.644Z [DEBUG] TestAgent_RegisterCheck_Passing.server.raft: vote granted: from=124b2a98-c8e5-5ded-86c0-157ab074f46e term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.644Z [INFO]  TestAgent_RegisterCheck_Passing.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.644Z [INFO]  TestAgent_RegisterCheck_Passing.server.raft: entering leader state: leader="Node at 127.0.0.1:16672 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.644Z [INFO]  TestAgent_RegisterCheck_Passing.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.645Z [INFO]  TestAgent_RegisterCheck_Passing.server: New leader elected: payload=Node-124b2a98-c8e5-5ded-86c0-157ab074f46e
>     writer.go:29: 2020-02-23T02:47:02.652Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.663Z [INFO]  TestAgent_RegisterCheck_Passing.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.663Z [INFO]  TestAgent_RegisterCheck_Passing.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.663Z [DEBUG] TestAgent_RegisterCheck_Passing.server: Skipping self join check for node since the cluster is too small: node=Node-124b2a98-c8e5-5ded-86c0-157ab074f46e
>     writer.go:29: 2020-02-23T02:47:02.663Z [INFO]  TestAgent_RegisterCheck_Passing.server: member joined, marking health alive: member=Node-124b2a98-c8e5-5ded-86c0-157ab074f46e
>     writer.go:29: 2020-02-23T02:47:02.804Z [INFO]  TestAgent_RegisterCheck_Passing: Synced node info
>     writer.go:29: 2020-02-23T02:47:02.805Z [INFO]  TestAgent_RegisterCheck_Passing: Synced check: check=test
>     writer.go:29: 2020-02-23T02:47:02.805Z [INFO]  TestAgent_RegisterCheck_Passing: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.805Z [INFO]  TestAgent_RegisterCheck_Passing.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.805Z [DEBUG] TestAgent_RegisterCheck_Passing.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.805Z [WARN]  TestAgent_RegisterCheck_Passing.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.805Z [ERROR] TestAgent_RegisterCheck_Passing.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:02.805Z [DEBUG] TestAgent_RegisterCheck_Passing.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.807Z [WARN]  TestAgent_RegisterCheck_Passing.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: consul server down
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: Stopping server: protocol=DNS address=127.0.0.1:16667 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: Stopping server: protocol=DNS address=127.0.0.1:16667 network=udp
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: Stopping server: protocol=HTTP address=127.0.0.1:16668 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.809Z [INFO]  TestAgent_RegisterCheck_Passing: Endpoints down
> === CONT  TestAgent_RegisterCheckScriptsExecRemoteDisable
> --- PASS: TestAgent_RegisterCheck_BadStatus (0.46s)
>     writer.go:29: 2020-02-23T02:47:02.434Z [WARN]  TestAgent_RegisterCheck_BadStatus: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.435Z [DEBUG] TestAgent_RegisterCheck_BadStatus.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.435Z [DEBUG] TestAgent_RegisterCheck_BadStatus.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.450Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:20de6a72-9035-fbb5-5925-7f43c9c6b8c7 Address:127.0.0.1:16654}]"
>     writer.go:29: 2020-02-23T02:47:02.450Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.raft: entering follower state: follower="Node at 127.0.0.1:16654 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.451Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.serf.wan: serf: EventMemberJoin: Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.451Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.serf.lan: serf: EventMemberJoin: Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.451Z [INFO]  TestAgent_RegisterCheck_BadStatus.server: Handled event for server in area: event=member-join server=Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.451Z [INFO]  TestAgent_RegisterCheck_BadStatus.server: Adding LAN server: server="Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7 (Addr: tcp/127.0.0.1:16654) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.451Z [INFO]  TestAgent_RegisterCheck_BadStatus: Started DNS server: address=127.0.0.1:16649 network=udp
>     writer.go:29: 2020-02-23T02:47:02.451Z [INFO]  TestAgent_RegisterCheck_BadStatus: Started DNS server: address=127.0.0.1:16649 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.452Z [INFO]  TestAgent_RegisterCheck_BadStatus: Started HTTP server: address=127.0.0.1:16650 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.452Z [INFO]  TestAgent_RegisterCheck_BadStatus: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.514Z [WARN]  TestAgent_RegisterCheck_BadStatus.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.514Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.raft: entering candidate state: node="Node at 127.0.0.1:16654 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.518Z [DEBUG] TestAgent_RegisterCheck_BadStatus.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.518Z [DEBUG] TestAgent_RegisterCheck_BadStatus.server.raft: vote granted: from=20de6a72-9035-fbb5-5925-7f43c9c6b8c7 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.518Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.518Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.raft: entering leader state: leader="Node at 127.0.0.1:16654 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.518Z [INFO]  TestAgent_RegisterCheck_BadStatus.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.518Z [INFO]  TestAgent_RegisterCheck_BadStatus.server: New leader elected: payload=Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7
>     writer.go:29: 2020-02-23T02:47:02.526Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.535Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.535Z [INFO]  TestAgent_RegisterCheck_BadStatus.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.535Z [DEBUG] TestAgent_RegisterCheck_BadStatus.server: Skipping self join check for node since the cluster is too small: node=Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7
>     writer.go:29: 2020-02-23T02:47:02.535Z [INFO]  TestAgent_RegisterCheck_BadStatus.server: member joined, marking health alive: member=Node-20de6a72-9035-fbb5-5925-7f43c9c6b8c7
>     writer.go:29: 2020-02-23T02:47:02.671Z [DEBUG] TestAgent_RegisterCheck_BadStatus: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:02.674Z [INFO]  TestAgent_RegisterCheck_BadStatus: Synced node info
>     writer.go:29: 2020-02-23T02:47:02.674Z [DEBUG] TestAgent_RegisterCheck_BadStatus: Node info in sync
>     writer.go:29: 2020-02-23T02:47:02.882Z [INFO]  TestAgent_RegisterCheck_BadStatus: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:02.882Z [INFO]  TestAgent_RegisterCheck_BadStatus.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:02.882Z [DEBUG] TestAgent_RegisterCheck_BadStatus.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.882Z [WARN]  TestAgent_RegisterCheck_BadStatus.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.882Z [DEBUG] TestAgent_RegisterCheck_BadStatus.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.884Z [WARN]  TestAgent_RegisterCheck_BadStatus.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: consul server down
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: shutdown complete
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: Stopping server: protocol=DNS address=127.0.0.1:16649 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: Stopping server: protocol=DNS address=127.0.0.1:16649 network=udp
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: Stopping server: protocol=HTTP address=127.0.0.1:16650 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:02.886Z [INFO]  TestAgent_RegisterCheck_BadStatus: Endpoints down
> === CONT  TestAgent_RegisterCheckScriptsExecDisable
> --- PASS: TestAgent_RegisterCheckScriptsExecDisable (0.16s)
>     writer.go:29: 2020-02-23T02:47:02.894Z [WARN]  TestAgent_RegisterCheckScriptsExecDisable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.895Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.895Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.909Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e7101d0a-da20-328c-d42f-cb0174db2260 Address:127.0.0.1:16684}]"
>     writer.go:29: 2020-02-23T02:47:02.910Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.serf.wan: serf: EventMemberJoin: Node-e7101d0a-da20-328c-d42f-cb0174db2260.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.910Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.serf.lan: serf: EventMemberJoin: Node-e7101d0a-da20-328c-d42f-cb0174db2260 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.910Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Started DNS server: address=127.0.0.1:16679 network=udp
>     writer.go:29: 2020-02-23T02:47:02.910Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.raft: entering follower state: follower="Node at 127.0.0.1:16684 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.911Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server: Adding LAN server: server="Node-e7101d0a-da20-328c-d42f-cb0174db2260 (Addr: tcp/127.0.0.1:16684) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.911Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server: Handled event for server in area: event=member-join server=Node-e7101d0a-da20-328c-d42f-cb0174db2260.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.911Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Started DNS server: address=127.0.0.1:16679 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.911Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Started HTTP server: address=127.0.0.1:16680 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.911Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.956Z [WARN]  TestAgent_RegisterCheckScriptsExecDisable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.956Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.raft: entering candidate state: node="Node at 127.0.0.1:16684 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.960Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.960Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.server.raft: vote granted: from=e7101d0a-da20-328c-d42f-cb0174db2260 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.960Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.960Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.raft: entering leader state: leader="Node at 127.0.0.1:16684 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.960Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.960Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server: New leader elected: payload=Node-e7101d0a-da20-328c-d42f-cb0174db2260
>     writer.go:29: 2020-02-23T02:47:02.968Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.976Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.977Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.977Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.server: Skipping self join check for node since the cluster is too small: node=Node-e7101d0a-da20-328c-d42f-cb0174db2260
>     writer.go:29: 2020-02-23T02:47:02.977Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server: member joined, marking health alive: member=Node-e7101d0a-da20-328c-d42f-cb0174db2260
>     writer.go:29: 2020-02-23T02:47:03.036Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.036Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.036Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.036Z [WARN]  TestAgent_RegisterCheckScriptsExecDisable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.037Z [ERROR] TestAgent_RegisterCheckScriptsExecDisable.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:03.037Z [DEBUG] TestAgent_RegisterCheckScriptsExecDisable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.042Z [WARN]  TestAgent_RegisterCheckScriptsExecDisable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: consul server down
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Stopping server: protocol=DNS address=127.0.0.1:16679 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Stopping server: protocol=DNS address=127.0.0.1:16679 network=udp
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Stopping server: protocol=HTTP address=127.0.0.1:16680 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.044Z [INFO]  TestAgent_RegisterCheckScriptsExecDisable: Endpoints down
> === CONT  TestAgent_RegisterCheck
> --- PASS: TestAgent_RegisterCheckScriptsExecRemoteDisable (0.33s)
>     writer.go:29: 2020-02-23T02:47:02.817Z [WARN]  TestAgent_RegisterCheckScriptsExecRemoteDisable: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:02.817Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:02.818Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:02.830Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0be03745-73bd-28e0-72ba-bea719840883 Address:127.0.0.1:16666}]"
>     writer.go:29: 2020-02-23T02:47:02.830Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: entering follower state: follower="Node at 127.0.0.1:16666 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:02.831Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.serf.wan: serf: EventMemberJoin: Node-0be03745-73bd-28e0-72ba-bea719840883.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.831Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.serf.lan: serf: EventMemberJoin: Node-0be03745-73bd-28e0-72ba-bea719840883 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:02.832Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server: Handled event for server in area: event=member-join server=Node-0be03745-73bd-28e0-72ba-bea719840883.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:02.832Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server: Adding LAN server: server="Node-0be03745-73bd-28e0-72ba-bea719840883 (Addr: tcp/127.0.0.1:16666) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:02.832Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Started DNS server: address=127.0.0.1:16661 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.832Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Started DNS server: address=127.0.0.1:16661 network=udp
>     writer.go:29: 2020-02-23T02:47:02.832Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Started HTTP server: address=127.0.0.1:16662 network=tcp
>     writer.go:29: 2020-02-23T02:47:02.832Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: started state syncer
>     writer.go:29: 2020-02-23T02:47:02.884Z [WARN]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:02.884Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: entering candidate state: node="Node at 127.0.0.1:16666 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:02.892Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:02.892Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: vote granted: from=0be03745-73bd-28e0-72ba-bea719840883 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:02.892Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:02.892Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.raft: entering leader state: leader="Node at 127.0.0.1:16666 [Leader]"
>     writer.go:29: 2020-02-23T02:47:02.892Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:02.893Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server: New leader elected: payload=Node-0be03745-73bd-28e0-72ba-bea719840883
>     writer.go:29: 2020-02-23T02:47:02.900Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:02.909Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:02.909Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:02.909Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.server: Skipping self join check for node since the cluster is too small: node=Node-0be03745-73bd-28e0-72ba-bea719840883
>     writer.go:29: 2020-02-23T02:47:02.909Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server: member joined, marking health alive: member=Node-0be03745-73bd-28e0-72ba-bea719840883
>     writer.go:29: 2020-02-23T02:47:03.098Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:03.101Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Synced node info
>     writer.go:29: 2020-02-23T02:47:03.136Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.136Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.136Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.136Z [WARN]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.137Z [DEBUG] TestAgent_RegisterCheckScriptsExecRemoteDisable.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.138Z [WARN]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: consul server down
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Stopping server: protocol=DNS address=127.0.0.1:16661 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Stopping server: protocol=DNS address=127.0.0.1:16661 network=udp
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Stopping server: protocol=HTTP address=127.0.0.1:16662 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.140Z [INFO]  TestAgent_RegisterCheckScriptsExecRemoteDisable: Endpoints down
> === CONT  TestAgent_ForceLeavePrune
> --- PASS: TestAgent_RegisterCheck (0.43s)
>     writer.go:29: 2020-02-23T02:47:03.052Z [WARN]  TestAgent_RegisterCheck: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:03.052Z [DEBUG] TestAgent_RegisterCheck.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:03.052Z [DEBUG] TestAgent_RegisterCheck.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:03.064Z [INFO]  TestAgent_RegisterCheck.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:55db591f-7196-f844-9bf2-26010aad1311 Address:127.0.0.1:16678}]"
>     writer.go:29: 2020-02-23T02:47:03.064Z [INFO]  TestAgent_RegisterCheck.server.serf.wan: serf: EventMemberJoin: Node-55db591f-7196-f844-9bf2-26010aad1311.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.065Z [INFO]  TestAgent_RegisterCheck.server.serf.lan: serf: EventMemberJoin: Node-55db591f-7196-f844-9bf2-26010aad1311 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.065Z [INFO]  TestAgent_RegisterCheck: Started DNS server: address=127.0.0.1:16673 network=udp
>     writer.go:29: 2020-02-23T02:47:03.065Z [INFO]  TestAgent_RegisterCheck.server.raft: entering follower state: follower="Node at 127.0.0.1:16678 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:03.065Z [INFO]  TestAgent_RegisterCheck.server: Adding LAN server: server="Node-55db591f-7196-f844-9bf2-26010aad1311 (Addr: tcp/127.0.0.1:16678) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.065Z [INFO]  TestAgent_RegisterCheck.server: Handled event for server in area: event=member-join server=Node-55db591f-7196-f844-9bf2-26010aad1311.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.065Z [INFO]  TestAgent_RegisterCheck: Started DNS server: address=127.0.0.1:16673 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.066Z [INFO]  TestAgent_RegisterCheck: Started HTTP server: address=127.0.0.1:16674 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.066Z [INFO]  TestAgent_RegisterCheck: started state syncer
>     writer.go:29: 2020-02-23T02:47:03.121Z [WARN]  TestAgent_RegisterCheck.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:03.121Z [INFO]  TestAgent_RegisterCheck.server.raft: entering candidate state: node="Node at 127.0.0.1:16678 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:03.125Z [DEBUG] TestAgent_RegisterCheck.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:03.125Z [DEBUG] TestAgent_RegisterCheck.server.raft: vote granted: from=55db591f-7196-f844-9bf2-26010aad1311 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:03.125Z [INFO]  TestAgent_RegisterCheck.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:03.125Z [INFO]  TestAgent_RegisterCheck.server.raft: entering leader state: leader="Node at 127.0.0.1:16678 [Leader]"
>     writer.go:29: 2020-02-23T02:47:03.125Z [INFO]  TestAgent_RegisterCheck.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:03.125Z [INFO]  TestAgent_RegisterCheck.server: New leader elected: payload=Node-55db591f-7196-f844-9bf2-26010aad1311
>     writer.go:29: 2020-02-23T02:47:03.133Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.144Z [INFO]  TestAgent_RegisterCheck.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.144Z [INFO]  TestAgent_RegisterCheck.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.144Z [DEBUG] TestAgent_RegisterCheck.server: Skipping self join check for node since the cluster is too small: node=Node-55db591f-7196-f844-9bf2-26010aad1311
>     writer.go:29: 2020-02-23T02:47:03.144Z [INFO]  TestAgent_RegisterCheck.server: member joined, marking health alive: member=Node-55db591f-7196-f844-9bf2-26010aad1311
>     writer.go:29: 2020-02-23T02:47:03.462Z [INFO]  TestAgent_RegisterCheck: Synced node info
>     writer.go:29: 2020-02-23T02:47:03.464Z [INFO]  TestAgent_RegisterCheck: Synced check: check=test
>     writer.go:29: 2020-02-23T02:47:03.464Z [INFO]  TestAgent_RegisterCheck: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.464Z [INFO]  TestAgent_RegisterCheck.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.464Z [DEBUG] TestAgent_RegisterCheck.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.464Z [WARN]  TestAgent_RegisterCheck.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.464Z [ERROR] TestAgent_RegisterCheck.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:03.464Z [DEBUG] TestAgent_RegisterCheck.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.472Z [WARN]  TestAgent_RegisterCheck.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: consul server down
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: Stopping server: protocol=DNS address=127.0.0.1:16673 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: Stopping server: protocol=DNS address=127.0.0.1:16673 network=udp
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: Stopping server: protocol=HTTP address=127.0.0.1:16674 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.474Z [INFO]  TestAgent_RegisterCheck: Endpoints down
> === CONT  TestAgent_ForceLeave_ACLDeny
> --- PASS: TestAgentConnectCALeafCert_secondaryDC_good (7.67s)
>     writer.go:29: 2020-02-23T02:46:55.855Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:55.856Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:55.856Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:55.882Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:f888ea4f-c003-53fd-5ff5-2fe9d10794c9 Address:127.0.0.1:16462}]"
>     writer.go:29: 2020-02-23T02:46:55.882Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: EventMemberJoin: Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.883Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.lan: serf: EventMemberJoin: Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:55.883Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Started DNS server: address=127.0.0.1:16457 network=udp
>     writer.go:29: 2020-02-23T02:46:55.883Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: entering follower state: follower="Node at 127.0.0.1:16462 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:55.883Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: Adding LAN server: server="Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9 (Addr: tcp/127.0.0.1:16462) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:55.883Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: Handled event for server in area: event=member-join server=Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:55.883Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Started DNS server: address=127.0.0.1:16457 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.884Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Started HTTP server: address=127.0.0.1:16458 network=tcp
>     writer.go:29: 2020-02-23T02:46:55.884Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: started state syncer
>     writer.go:29: 2020-02-23T02:46:55.952Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:55.952Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: entering candidate state: node="Node at 127.0.0.1:16462 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:55.955Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:55.955Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: vote granted: from=f888ea4f-c003-53fd-5ff5-2fe9d10794c9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:55.955Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:55.955Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.raft: entering leader state: leader="Node at 127.0.0.1:16462 [Leader]"
>     writer.go:29: 2020-02-23T02:46:55.955Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:55.956Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: New leader elected: payload=Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9
>     writer.go:29: 2020-02-23T02:46:55.963Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:55.971Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: Skipping self join check for node since the cluster is too small: node=Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9
>     writer.go:29: 2020-02-23T02:46:55.971Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: member joined, marking health alive: member=Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9
>     writer.go:29: 2020-02-23T02:46:55.976Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:55.979Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Synced node info
>     writer.go:29: 2020-02-23T02:46:55.979Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.181Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:56.181Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:56.181Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:56.260Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:34fb8370-bb31-aaf9-2afe-db2d5b965d27 Address:127.0.0.1:16474}]"
>     writer.go:29: 2020-02-23T02:46:56.260Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: entering follower state: follower="Node at 127.0.0.1:16474 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:56.261Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: EventMemberJoin: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.262Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.lan: serf: EventMemberJoin: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.262Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: Handled event for server in area: event=member-join server=Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2 area=wan
>     writer.go:29: 2020-02-23T02:46:56.262Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: Adding LAN server: server="Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27 (Addr: tcp/127.0.0.1:16474) (DC: dc2)"
>     writer.go:29: 2020-02-23T02:46:56.262Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Started DNS server: address=127.0.0.1:16469 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.262Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Started DNS server: address=127.0.0.1:16469 network=udp
>     writer.go:29: 2020-02-23T02:46:56.263Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Started HTTP server: address=127.0.0.1:16470 network=tcp
>     writer.go:29: 2020-02-23T02:46:56.263Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: started state syncer
>     writer.go:29: 2020-02-23T02:46:56.325Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:56.325Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: entering candidate state: node="Node at 127.0.0.1:16474 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:56.328Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:56.328Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: vote granted: from=34fb8370-bb31-aaf9-2afe-db2d5b965d27 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:56.328Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:56.328Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.raft: entering leader state: leader="Node at 127.0.0.1:16474 [Leader]"
>     writer.go:29: 2020-02-23T02:46:56.328Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:56.328Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: New leader elected: payload=Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27
>     writer.go:29: 2020-02-23T02:46:56.333Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: primary datacenter is configured but unreachable - deferring initialization of the secondary datacenter CA
>     writer.go:29: 2020-02-23T02:46:56.333Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: started routine: routine="config entry replication"
>     writer.go:29: 2020-02-23T02:46:56.333Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: started routine: routine="secondary CA roots watch"
>     writer.go:29: 2020-02-23T02:46:56.333Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: started routine: routine="intention replication"
>     writer.go:29: 2020-02-23T02:46:56.333Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: started routine: routine="secondary cert renew watch"
>     writer.go:29: 2020-02-23T02:46:56.333Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:56.333Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: Skipping self join check for node since the cluster is too small: node=Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27
>     writer.go:29: 2020-02-23T02:46:56.333Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: member joined, marking health alive: member=Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27
>     writer.go:29: 2020-02-23T02:46:56.333Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: starting Connect intention replication from primary datacenter: primary=dc1
>     writer.go:29: 2020-02-23T02:46:56.333Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: starting Connect CA root replication from primary datacenter: primary=dc1
>     writer.go:29: 2020-02-23T02:46:56.333Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc1 method=Intention.List
>     writer.go:29: 2020-02-23T02:46:56.333Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: error replicating intentions: routine="intention replication" error="No path to datacenter"
>     writer.go:29: 2020-02-23T02:46:56.333Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc1 method=ConfigEntry.ListAll
>     writer.go:29: 2020-02-23T02:46:56.333Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc1 method=ConnectCA.Roots
>     writer.go:29: 2020-02-23T02:46:56.333Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: CA root replication failed, will retry: routine="secondary CA roots watch" error="Error retrieving the primary datacenter's roots: No path to datacenter"
>     writer.go:29: 2020-02-23T02:46:56.374Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.442Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Synced node info
>     writer.go:29: 2020-02-23T02:46:56.442Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.491Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.491Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.675Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: (WAN) joining: wan_addresses=[127.0.0.1:16461]
>     writer.go:29: 2020-02-23T02:46:56.675Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16461
>     writer.go:29: 2020-02-23T02:46:56.675Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:56908
>     writer.go:29: 2020-02-23T02:46:56.675Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: EventMemberJoin: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.676Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: Handled event for server in area: event=member-join server=Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2 area=wan
>     writer.go:29: 2020-02-23T02:46:56.676Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: EventMemberJoin: Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:56.676Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: (WAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:46:56.677Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: Handled event for server in area: event=member-join server=Node-f888ea4f-c003-53fd-5ff5-2fe9d10794c9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:56.699Z [DEBUG] connect.ca.consul: consul CA provider configured: id=70:d3:66:b7:22:7f:25:ca:4b:46:89:de:21:07:d2:9c:12:7a:68:f6:72:c4:42:e7:61:81:25:42:83:54:eb:d0 is_primary=true
>     writer.go:29: 2020-02-23T02:46:56.740Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.740Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.762Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:56.779Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.connect: CA rotated to new root under provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:56.883Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:57.262Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:57.349Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:57.349Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.replication.config_entry: finished fetching config entries: amount=0
>     writer.go:29: 2020-02-23T02:46:57.349Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.replication.config_entry: Config Entry replication: local=0 remote=0
>     writer.go:29: 2020-02-23T02:46:57.349Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.replication.config_entry: Config Entry replication: deletions=0 updates=0
>     writer.go:29: 2020-02-23T02:46:57.350Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.replication.config_entry: replication completed through remote index: index=1
>     writer.go:29: 2020-02-23T02:46:57.383Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:57.762Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:57.883Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:57.959Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:58.262Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:46:58.332Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:58.335Z [DEBUG] connect.ca.consul: consul CA provider configured: id=ad:4a:c6:ab:ef:63:c9:60:1a:51:7f:19:62:e3:e9:d9:0e:76:55:10:6e:74:24:69:28:a1:6c:b8:b9:8f:fd:89 is_primary=false
>     writer.go:29: 2020-02-23T02:46:58.344Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: received new intermediate certificate from primary datacenter
>     writer.go:29: 2020-02-23T02:46:58.347Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: updated root certificates from primary datacenter
>     writer.go:29: 2020-02-23T02:46:58.351Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Node info in sync
>     writer.go:29: 2020-02-23T02:46:58.354Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Synced service: service=foo
>     writer.go:29: 2020-02-23T02:46:58.354Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Check in sync: check=service:foo
>     writer.go:29: 2020-02-23T02:46:58.354Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Node info in sync
>     writer.go:29: 2020-02-23T02:46:58.354Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Service in sync: service=foo
>     writer.go:29: 2020-02-23T02:46:58.354Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2: Check in sync: check=service:foo
>     writer.go:29: 2020-02-23T02:46:58.383Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: messageJoinType: Node-34fb8370-bb31-aaf9-2afe-db2d5b965d27.dc2
>     writer.go:29: 2020-02-23T02:47:03.363Z [DEBUG] connect.ca.consul: consul CA provider configured: id=52:2c:c6:98:c5:c1:4a:8d:48:8c:91:be:14:b2:51:6d:80:26:f2:20:4a:c0:70:de:8a:fb:ce:91:ec:ea:dc:db is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.396Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.connect: CA rotated to new root under provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.462Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: received new intermediate certificate from primary datacenter
>     writer.go:29: 2020-02-23T02:47:03.466Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: updated root certificates from primary datacenter
>     writer.go:29: 2020-02-23T02:47:03.498Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.498Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopping routine: routine="secondary cert renew watch"
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopping routine: routine="config entry replication"
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopping routine: routine="secondary CA roots watch"
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopping routine: routine="intention replication"
>     writer.go:29: 2020-02-23T02:47:03.498Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopped routine: routine="secondary cert renew watch"
>     writer.go:29: 2020-02-23T02:47:03.498Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.500Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: consul server down
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Stopping server: protocol=DNS address=127.0.0.1:16469 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Stopping server: protocol=DNS address=127.0.0.1:16469 network=udp
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Stopping server: protocol=HTTP address=127.0.0.1:16470 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2: Endpoints down
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.502Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.502Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.502Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.rpc: RPC failed to server in DC: server=127.0.0.1:16462 datacenter=dc1 method=ConfigEntry.ListAll error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:03.502Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.replication.config_entry: stopped replication
>     writer.go:29: 2020-02-23T02:47:03.502Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc2.leader: stopped routine: routine="config entry replication"
>     writer.go:29: 2020-02-23T02:47:03.502Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.rpc: RPC failed to server in DC: server=127.0.0.1:16462 datacenter=dc1 method=ConnectCA.Roots error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:03.502Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: CA root replication failed, will retry: routine="secondary CA roots watch" error="Error retrieving the primary datacenter's roots: rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:03.502Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.rpc: RPC failed to server in DC: server=127.0.0.1:16462 datacenter=dc1 method=Intention.List error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:03.502Z [ERROR] TestAgentConnectCALeafCert_secondaryDC_good-dc2.server.connect: error replicating intentions: routine="intention replication" error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:03.502Z [DEBUG] TestAgentConnectCALeafCert_secondaryDC_good-dc1.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.504Z [WARN]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: consul server down
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Stopping server: protocol=DNS address=127.0.0.1:16457 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Stopping server: protocol=DNS address=127.0.0.1:16457 network=udp
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Stopping server: protocol=HTTP address=127.0.0.1:16458 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.506Z [INFO]  TestAgentConnectCALeafCert_secondaryDC_good-dc1: Endpoints down
> === CONT  TestAgent_Leave_ACLDeny
> === RUN   TestAgent_ForceLeave_ACLDeny/no_token
> === RUN   TestAgent_ForceLeave_ACLDeny/agent_master_token
> === RUN   TestAgent_ForceLeave_ACLDeny/read-only_token
> === RUN   TestAgent_ForceLeave_ACLDeny/operator_write_token
> === RUN   TestAgent_Leave_ACLDeny/no_token
> === RUN   TestAgent_Leave_ACLDeny/read-only_token
> === RUN   TestAgent_Leave_ACLDeny/agent_master_token
> --- PASS: TestAgent_ForceLeave_ACLDeny (0.41s)
>     writer.go:29: 2020-02-23T02:47:03.483Z [WARN]  TestAgent_ForceLeave_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:03.483Z [WARN]  TestAgent_ForceLeave_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:03.483Z [DEBUG] TestAgent_ForceLeave_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:03.483Z [DEBUG] TestAgent_ForceLeave_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:03.498Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2 Address:127.0.0.1:16702}]"
>     writer.go:29: 2020-02-23T02:47:03.499Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16702 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:03.500Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.500Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.501Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: Handled event for server in area: event=member-join server=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.501Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: Adding LAN server: server="Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2 (Addr: tcp/127.0.0.1:16702) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.501Z [INFO]  TestAgent_ForceLeave_ACLDeny: Started DNS server: address=127.0.0.1:16697 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.501Z [INFO]  TestAgent_ForceLeave_ACLDeny: Started DNS server: address=127.0.0.1:16697 network=udp
>     writer.go:29: 2020-02-23T02:47:03.501Z [INFO]  TestAgent_ForceLeave_ACLDeny: Started HTTP server: address=127.0.0.1:16698 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.501Z [INFO]  TestAgent_ForceLeave_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:03.555Z [WARN]  TestAgent_ForceLeave_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:03.555Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16702 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:03.559Z [DEBUG] TestAgent_ForceLeave_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:03.559Z [DEBUG] TestAgent_ForceLeave_ACLDeny.server.raft: vote granted: from=fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:03.559Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:03.559Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16702 [Leader]"
>     writer.go:29: 2020-02-23T02:47:03.559Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:03.559Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: New leader elected: payload=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.562Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:03.564Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:03.564Z [WARN]  TestAgent_ForceLeave_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:03.566Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:03.570Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:03.570Z [INFO]  TestAgent_ForceLeave_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:03.570Z [INFO]  TestAgent_ForceLeave_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:03.570Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.571Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2.dc1
>     writer.go:29: 2020-02-23T02:47:03.571Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: Handled event for server in area: event=member-update server=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.574Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.581Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.581Z [INFO]  TestAgent_ForceLeave_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.581Z [DEBUG] TestAgent_ForceLeave_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.581Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: member joined, marking health alive: member=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.585Z [DEBUG] TestAgent_ForceLeave_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.587Z [INFO]  TestAgent_ForceLeave_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:03.761Z [DEBUG] TestAgent_ForceLeave_ACLDeny.acl: dropping node from result due to ACLs: node=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.761Z [DEBUG] TestAgent_ForceLeave_ACLDeny.acl: dropping node from result due to ACLs: node=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     --- PASS: TestAgent_ForceLeave_ACLDeny/no_token (0.00s)
>     --- PASS: TestAgent_ForceLeave_ACLDeny/agent_master_token (0.00s)
>     --- PASS: TestAgent_ForceLeave_ACLDeny/read-only_token (0.06s)
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_ForceLeave_ACLDeny: Force leaving node: node=Node-fa4cccb7-5a60-d8c3-cc8d-f7d9519a83f2
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.server.serf.lan: serf: Refuting an older leave intent
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.server.serf.wan: serf: Refuting an older leave intent
>     --- PASS: TestAgent_ForceLeave_ACLDeny/operator_write_token (0.06s)
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_ForceLeave_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_ForceLeave_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:03.881Z [WARN]  TestAgent_ForceLeave_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeave_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:03.883Z [WARN]  TestAgent_ForceLeave_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16697 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16697 network=udp
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16698 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.885Z [INFO]  TestAgent_ForceLeave_ACLDeny: Endpoints down
> === CONT  TestAgent_JoinLANNotify
> --- PASS: TestAgent_JoinLANNotify (0.41s)
>     writer.go:29: 2020-02-23T02:47:03.892Z [WARN]  TestAgent_JoinLANNotify: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:03.892Z [DEBUG] TestAgent_JoinLANNotify.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:03.893Z [DEBUG] TestAgent_JoinLANNotify.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:03.902Z [INFO]  TestAgent_JoinLANNotify.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:87c85625-bd31-6253-334a-b469fe956afc Address:127.0.0.1:16720}]"
>     writer.go:29: 2020-02-23T02:47:03.902Z [INFO]  TestAgent_JoinLANNotify.server.raft: entering follower state: follower="Node at 127.0.0.1:16720 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:03.903Z [INFO]  TestAgent_JoinLANNotify.server.serf.wan: serf: EventMemberJoin: Node-87c85625-bd31-6253-334a-b469fe956afc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.903Z [INFO]  TestAgent_JoinLANNotify.server.serf.lan: serf: EventMemberJoin: Node-87c85625-bd31-6253-334a-b469fe956afc 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.904Z [INFO]  TestAgent_JoinLANNotify.server: Handled event for server in area: event=member-join server=Node-87c85625-bd31-6253-334a-b469fe956afc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.904Z [INFO]  TestAgent_JoinLANNotify.server: Adding LAN server: server="Node-87c85625-bd31-6253-334a-b469fe956afc (Addr: tcp/127.0.0.1:16720) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.904Z [INFO]  TestAgent_JoinLANNotify: Started DNS server: address=127.0.0.1:16715 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.904Z [INFO]  TestAgent_JoinLANNotify: Started DNS server: address=127.0.0.1:16715 network=udp
>     writer.go:29: 2020-02-23T02:47:03.904Z [INFO]  TestAgent_JoinLANNotify: Started HTTP server: address=127.0.0.1:16716 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.904Z [INFO]  TestAgent_JoinLANNotify: started state syncer
>     writer.go:29: 2020-02-23T02:47:03.941Z [WARN]  TestAgent_JoinLANNotify.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:03.941Z [INFO]  TestAgent_JoinLANNotify.server.raft: entering candidate state: node="Node at 127.0.0.1:16720 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:03.944Z [DEBUG] TestAgent_JoinLANNotify.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:03.944Z [DEBUG] TestAgent_JoinLANNotify.server.raft: vote granted: from=87c85625-bd31-6253-334a-b469fe956afc term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:03.944Z [INFO]  TestAgent_JoinLANNotify.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:03.944Z [INFO]  TestAgent_JoinLANNotify.server.raft: entering leader state: leader="Node at 127.0.0.1:16720 [Leader]"
>     writer.go:29: 2020-02-23T02:47:03.944Z [INFO]  TestAgent_JoinLANNotify.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:03.944Z [INFO]  TestAgent_JoinLANNotify.server: New leader elected: payload=Node-87c85625-bd31-6253-334a-b469fe956afc
>     writer.go:29: 2020-02-23T02:47:03.952Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.960Z [INFO]  TestAgent_JoinLANNotify.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.960Z [INFO]  TestAgent_JoinLANNotify.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.960Z [DEBUG] TestAgent_JoinLANNotify.server: Skipping self join check for node since the cluster is too small: node=Node-87c85625-bd31-6253-334a-b469fe956afc
>     writer.go:29: 2020-02-23T02:47:03.960Z [INFO]  TestAgent_JoinLANNotify.server: member joined, marking health alive: member=Node-87c85625-bd31-6253-334a-b469fe956afc
>     writer.go:29: 2020-02-23T02:47:04.258Z [DEBUG] TestAgent_JoinLANNotify: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:04.261Z [INFO]  TestAgent_JoinLANNotify: Synced node info
>     writer.go:29: 2020-02-23T02:47:04.261Z [DEBUG] TestAgent_JoinLANNotify: Node info in sync
>     writer.go:29: 2020-02-23T02:47:04.282Z [DEBUG] TestAgent_JoinLANNotify.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:04.282Z [INFO]  TestAgent_JoinLANNotify.client.serf.lan: serf: EventMemberJoin: Node-e8318dcc-a1d5-3999-20bf-ffe46e327e27 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.282Z [INFO]  TestAgent_JoinLANNotify: Started DNS server: address=127.0.0.1:16721 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.283Z [INFO]  TestAgent_JoinLANNotify: Started DNS server: address=127.0.0.1:16721 network=udp
>     writer.go:29: 2020-02-23T02:47:04.283Z [INFO]  TestAgent_JoinLANNotify: Started HTTP server: address=127.0.0.1:16722 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.283Z [INFO]  TestAgent_JoinLANNotify: started state syncer
>     writer.go:29: 2020-02-23T02:47:04.284Z [WARN]  TestAgent_JoinLANNotify.client.manager: No servers available
>     writer.go:29: 2020-02-23T02:47:04.284Z [ERROR] TestAgent_JoinLANNotify.anti_entropy: failed to sync remote state: error="No known Consul servers"
>     writer.go:29: 2020-02-23T02:47:04.284Z [INFO]  TestAgent_JoinLANNotify: (LAN) joining: lan_addresses=[127.0.0.1:16724]
>     writer.go:29: 2020-02-23T02:47:04.284Z [DEBUG] TestAgent_JoinLANNotify.client.memberlist.lan: memberlist: Stream connection from=127.0.0.1:48392
>     writer.go:29: 2020-02-23T02:47:04.284Z [DEBUG] TestAgent_JoinLANNotify.server.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:16724
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.client.serf.lan: serf: EventMemberJoin: Node-87c85625-bd31-6253-334a-b469fe956afc 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.client: adding server: server="Node-87c85625-bd31-6253-334a-b469fe956afc (Addr: tcp/127.0.0.1:16720) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.client: New leader elected: payload=Node-87c85625-bd31-6253-334a-b469fe956afc
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.server.serf.lan: serf: EventMemberJoin: Node-e8318dcc-a1d5-3999-20bf-ffe46e327e27 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.client: shutting down client
>     writer.go:29: 2020-02-23T02:47:04.285Z [WARN]  TestAgent_JoinLANNotify.client.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.server: member joined, marking health alive: member=Node-e8318dcc-a1d5-3999-20bf-ffe46e327e27
>     writer.go:29: 2020-02-23T02:47:04.285Z [INFO]  TestAgent_JoinLANNotify.client.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: consul client down
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: shutdown complete
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: Stopping server: protocol=DNS address=127.0.0.1:16721 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: Stopping server: protocol=DNS address=127.0.0.1:16721 network=udp
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: Stopping server: protocol=HTTP address=127.0.0.1:16722 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: Endpoints down
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:04.288Z [INFO]  TestAgent_JoinLANNotify.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:04.288Z [DEBUG] TestAgent_JoinLANNotify.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:04.288Z [WARN]  TestAgent_JoinLANNotify.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:04.288Z [DEBUG] TestAgent_JoinLANNotify.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:04.290Z [WARN]  TestAgent_JoinLANNotify.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: consul server down
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: shutdown complete
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: Stopping server: protocol=DNS address=127.0.0.1:16715 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: Stopping server: protocol=DNS address=127.0.0.1:16715 network=udp
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: Stopping server: protocol=HTTP address=127.0.0.1:16716 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:04.292Z [INFO]  TestAgent_JoinLANNotify: Endpoints down
> === CONT  TestAgent_Join_ACLDeny
> === RUN   TestAgent_Join_ACLDeny/no_token
> === RUN   TestAgent_Join_ACLDeny/agent_master_token
> === RUN   TestAgent_Join_ACLDeny/read-only_token
> --- PASS: TestAgent_Join_ACLDeny (0.77s)
>     writer.go:29: 2020-02-23T02:47:04.299Z [WARN]  TestAgent_Join_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:04.299Z [WARN]  TestAgent_Join_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:04.299Z [DEBUG] TestAgent_Join_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:04.299Z [DEBUG] TestAgent_Join_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:04.310Z [INFO]  TestAgent_Join_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:6914e99f-6719-d865-ce74-2cbbce6c972d Address:127.0.0.1:16714}]"
>     writer.go:29: 2020-02-23T02:47:04.310Z [INFO]  TestAgent_Join_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16714 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:04.311Z [INFO]  TestAgent_Join_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-6914e99f-6719-d865-ce74-2cbbce6c972d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.311Z [INFO]  TestAgent_Join_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-6914e99f-6719-d865-ce74-2cbbce6c972d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.311Z [INFO]  TestAgent_Join_ACLDeny.server: Handled event for server in area: event=member-join server=Node-6914e99f-6719-d865-ce74-2cbbce6c972d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:04.311Z [INFO]  TestAgent_Join_ACLDeny.server: Adding LAN server: server="Node-6914e99f-6719-d865-ce74-2cbbce6c972d (Addr: tcp/127.0.0.1:16714) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:04.311Z [INFO]  TestAgent_Join_ACLDeny: Started DNS server: address=127.0.0.1:16709 network=udp
>     writer.go:29: 2020-02-23T02:47:04.311Z [INFO]  TestAgent_Join_ACLDeny: Started DNS server: address=127.0.0.1:16709 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.312Z [INFO]  TestAgent_Join_ACLDeny: Started HTTP server: address=127.0.0.1:16710 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.312Z [INFO]  TestAgent_Join_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:04.351Z [WARN]  TestAgent_Join_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:04.351Z [INFO]  TestAgent_Join_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16714 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:04.407Z [DEBUG] TestAgent_Join_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:04.407Z [DEBUG] TestAgent_Join_ACLDeny.server.raft: vote granted: from=6914e99f-6719-d865-ce74-2cbbce6c972d term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:04.407Z [INFO]  TestAgent_Join_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:04.407Z [INFO]  TestAgent_Join_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16714 [Leader]"
>     writer.go:29: 2020-02-23T02:47:04.407Z [INFO]  TestAgent_Join_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:04.407Z [INFO]  TestAgent_Join_ACLDeny.server: New leader elected: payload=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:04.416Z [INFO]  TestAgent_Join_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:04.418Z [INFO]  TestAgent_Join_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:04.418Z [WARN]  TestAgent_Join_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:04.421Z [INFO]  TestAgent_Join_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:04.430Z [INFO]  TestAgent_Join_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:04.430Z [INFO]  TestAgent_Join_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:04.430Z [INFO]  TestAgent_Join_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:04.430Z [INFO]  TestAgent_Join_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:04.430Z [INFO]  TestAgent_Join_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-6914e99f-6719-d865-ce74-2cbbce6c972d.dc1
>     writer.go:29: 2020-02-23T02:47:04.430Z [INFO]  TestAgent_Join_ACLDeny.server: Handled event for server in area: event=member-update server=Node-6914e99f-6719-d865-ce74-2cbbce6c972d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:04.435Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:04.442Z [INFO]  TestAgent_Join_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:04.442Z [INFO]  TestAgent_Join_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:04.442Z [DEBUG] TestAgent_Join_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:04.442Z [INFO]  TestAgent_Join_ACLDeny.server: member joined, marking health alive: member=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:04.444Z [DEBUG] TestAgent_Join_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:04.541Z [DEBUG] TestAgent_Join_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:04.545Z [INFO]  TestAgent_Join_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:04.545Z [DEBUG] TestAgent_Join_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:04.628Z [DEBUG] TestAgent_Join_ACLDeny.acl: dropping node from result due to ACLs: node=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:04.635Z [WARN]  TestAgent_Join_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:04.636Z [DEBUG] TestAgent_Join_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:04.664Z [DEBUG] TestAgent_Join_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:04.733Z [INFO]  TestAgent_Join_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0cbcbc44-46ba-448e-eccd-a7e76c0ee390 Address:127.0.0.1:16738}]"
>     writer.go:29: 2020-02-23T02:47:04.733Z [INFO]  TestAgent_Join_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16738 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:04.734Z [INFO]  TestAgent_Join_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.734Z [INFO]  TestAgent_Join_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:04.734Z [INFO]  TestAgent_Join_ACLDeny: Started DNS server: address=127.0.0.1:16733 network=udp
>     writer.go:29: 2020-02-23T02:47:04.734Z [INFO]  TestAgent_Join_ACLDeny.server: Adding LAN server: server="Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390 (Addr: tcp/127.0.0.1:16738) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:04.734Z [INFO]  TestAgent_Join_ACLDeny.server: Handled event for server in area: event=member-join server=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:04.734Z [INFO]  TestAgent_Join_ACLDeny: Started DNS server: address=127.0.0.1:16733 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.735Z [INFO]  TestAgent_Join_ACLDeny: Started HTTP server: address=127.0.0.1:16734 network=tcp
>     writer.go:29: 2020-02-23T02:47:04.735Z [INFO]  TestAgent_Join_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:04.796Z [WARN]  TestAgent_Join_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:04.796Z [INFO]  TestAgent_Join_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16738 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:04.800Z [DEBUG] TestAgent_Join_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:04.800Z [DEBUG] TestAgent_Join_ACLDeny.server.raft: vote granted: from=0cbcbc44-46ba-448e-eccd-a7e76c0ee390 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:04.800Z [INFO]  TestAgent_Join_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:04.800Z [INFO]  TestAgent_Join_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16738 [Leader]"
>     writer.go:29: 2020-02-23T02:47:04.800Z [INFO]  TestAgent_Join_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:04.800Z [INFO]  TestAgent_Join_ACLDeny.server: New leader elected: payload=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390
>     writer.go:29: 2020-02-23T02:47:04.808Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:04.821Z [INFO]  TestAgent_Join_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:04.821Z [INFO]  TestAgent_Join_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:04.821Z [DEBUG] TestAgent_Join_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390
>     writer.go:29: 2020-02-23T02:47:04.821Z [INFO]  TestAgent_Join_ACLDeny.server: member joined, marking health alive: member=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390
>     writer.go:29: 2020-02-23T02:47:04.959Z [DEBUG] TestAgent_Join_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:04.959Z [DEBUG] TestAgent_Join_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:04.969Z [DEBUG] TestAgent_Join_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:04.995Z [INFO]  TestAgent_Join_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:05.053Z [DEBUG] TestAgent_Join_ACLDeny.acl: dropping node from result due to ACLs: node=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     --- PASS: TestAgent_Join_ACLDeny/no_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:05.053Z [INFO]  TestAgent_Join_ACLDeny: (LAN) joining: lan_addresses=[127.0.0.1:16736]
>     writer.go:29: 2020-02-23T02:47:05.053Z [DEBUG] TestAgent_Join_ACLDeny.server.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:16736
>     writer.go:29: 2020-02-23T02:47:05.053Z [DEBUG] TestAgent_Join_ACLDeny.server.memberlist.lan: memberlist: Stream connection from=127.0.0.1:56914
>     writer.go:29: 2020-02-23T02:47:05.053Z [INFO]  TestAgent_Join_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-6914e99f-6719-d865-ce74-2cbbce6c972d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.054Z [INFO]  TestAgent_Join_ACLDeny.server: Adding LAN server: server="Node-6914e99f-6719-d865-ce74-2cbbce6c972d (Addr: tcp/127.0.0.1:16714) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.054Z [INFO]  TestAgent_Join_ACLDeny.server: New leader elected: payload=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:05.054Z [ERROR] TestAgent_Join_ACLDeny.server: Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.: node_to_add=Node-6914e99f-6719-d865-ce74-2cbbce6c972d other=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390
>     writer.go:29: 2020-02-23T02:47:05.054Z [INFO]  TestAgent_Join_ACLDeny.server: member joined, marking health alive: member=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:05.054Z [INFO]  TestAgent_Join_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.054Z [INFO]  TestAgent_Join_ACLDeny: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:47:05.054Z [DEBUG] TestAgent_Join_ACLDeny: systemd notify failed: error="No socket"
>     --- PASS: TestAgent_Join_ACLDeny/agent_master_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:05.055Z [INFO]  TestAgent_Join_ACLDeny.server: Adding LAN server: server="Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390 (Addr: tcp/127.0.0.1:16738) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.055Z [ERROR] TestAgent_Join_ACLDeny.server: Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.: node_to_add=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390 other=Node-6914e99f-6719-d865-ce74-2cbbce6c972d
>     writer.go:29: 2020-02-23T02:47:05.055Z [INFO]  TestAgent_Join_ACLDeny.server: member joined, marking health alive: member=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390
>     writer.go:29: 2020-02-23T02:47:05.055Z [DEBUG] TestAgent_Join_ACLDeny.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16737
>     writer.go:29: 2020-02-23T02:47:05.055Z [DEBUG] TestAgent_Join_ACLDeny.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:54154
>     writer.go:29: 2020-02-23T02:47:05.055Z [INFO]  TestAgent_Join_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-6914e99f-6719-d865-ce74-2cbbce6c972d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.055Z [INFO]  TestAgent_Join_ACLDeny.server: Handled event for server in area: event=member-join server=Node-6914e99f-6719-d865-ce74-2cbbce6c972d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.056Z [INFO]  TestAgent_Join_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.056Z [DEBUG] TestAgent_Join_ACLDeny.server: Successfully performed flood-join for server at address: server=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390 address=127.0.0.1:16737
>     writer.go:29: 2020-02-23T02:47:05.056Z [INFO]  TestAgent_Join_ACLDeny.server: Handled event for server in area: event=member-join server=Node-0cbcbc44-46ba-448e-eccd-a7e76c0ee390.dc1 area=wan
>     --- PASS: TestAgent_Join_ACLDeny/read-only_token (0.01s)
>     writer.go:29: 2020-02-23T02:47:05.059Z [INFO]  TestAgent_Join_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:05.059Z [INFO]  TestAgent_Join_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:05.059Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.059Z [WARN]  TestAgent_Join_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.060Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.061Z [WARN]  TestAgent_Join_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16733 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16733 network=udp
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16734 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: Endpoints down
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:05.063Z [INFO]  TestAgent_Join_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:05.063Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:05.063Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:05.063Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.063Z [WARN]  TestAgent_Join_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.063Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:05.063Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:05.063Z [DEBUG] TestAgent_Join_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.065Z [WARN]  TestAgent_Join_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16709 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16709 network=udp
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16710 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:05.067Z [INFO]  TestAgent_Join_ACLDeny: Endpoints down
> === CONT  TestAgent_Join_WAN
> --- PASS: TestAgent_StartStop (11.50s)
>     writer.go:29: 2020-02-23T02:46:53.908Z [WARN]  TestAgent_StartStop: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:46:53.908Z [DEBUG] TestAgent_StartStop.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:46:53.908Z [DEBUG] TestAgent_StartStop.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:53.932Z [INFO]  TestAgent_StartStop.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0976a694-2356-7769-7707-48587823c169 Address:127.0.0.1:16186}]"
>     writer.go:29: 2020-02-23T02:46:53.933Z [INFO]  TestAgent_StartStop.server.raft: entering follower state: follower="Node at 127.0.0.1:16186 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:46:53.943Z [INFO]  TestAgent_StartStop.server.serf.wan: serf: EventMemberJoin: Node-0976a694-2356-7769-7707-48587823c169.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.944Z [INFO]  TestAgent_StartStop.server.serf.lan: serf: EventMemberJoin: Node-0976a694-2356-7769-7707-48587823c169 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:53.944Z [INFO]  TestAgent_StartStop: Started DNS server: address=127.0.0.1:16181 network=udp
>     writer.go:29: 2020-02-23T02:46:53.944Z [INFO]  TestAgent_StartStop.server: Adding LAN server: server="Node-0976a694-2356-7769-7707-48587823c169 (Addr: tcp/127.0.0.1:16186) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:53.944Z [INFO]  TestAgent_StartStop.server: Handled event for server in area: event=member-join server=Node-0976a694-2356-7769-7707-48587823c169.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:53.945Z [INFO]  TestAgent_StartStop: Started DNS server: address=127.0.0.1:16181 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.945Z [INFO]  TestAgent_StartStop: Started HTTP server: address=127.0.0.1:16182 network=tcp
>     writer.go:29: 2020-02-23T02:46:53.945Z [INFO]  TestAgent_StartStop: started state syncer
>     writer.go:29: 2020-02-23T02:46:53.974Z [WARN]  TestAgent_StartStop.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:46:53.974Z [INFO]  TestAgent_StartStop.server.raft: entering candidate state: node="Node at 127.0.0.1:16186 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:46:54.113Z [DEBUG] TestAgent_StartStop.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:46:54.113Z [DEBUG] TestAgent_StartStop.server.raft: vote granted: from=0976a694-2356-7769-7707-48587823c169 term=2 tally=1
>     writer.go:29: 2020-02-23T02:46:54.113Z [INFO]  TestAgent_StartStop.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:46:54.113Z [INFO]  TestAgent_StartStop.server.raft: entering leader state: leader="Node at 127.0.0.1:16186 [Leader]"
>     writer.go:29: 2020-02-23T02:46:54.113Z [INFO]  TestAgent_StartStop.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:46:54.113Z [INFO]  TestAgent_StartStop.server: New leader elected: payload=Node-0976a694-2356-7769-7707-48587823c169
>     writer.go:29: 2020-02-23T02:46:54.175Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:46:54.184Z [INFO]  TestAgent_StartStop.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:46:54.184Z [INFO]  TestAgent_StartStop.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:46:54.185Z [DEBUG] TestAgent_StartStop.server: Skipping self join check for node since the cluster is too small: node=Node-0976a694-2356-7769-7707-48587823c169
>     writer.go:29: 2020-02-23T02:46:54.185Z [INFO]  TestAgent_StartStop.server: member joined, marking health alive: member=Node-0976a694-2356-7769-7707-48587823c169
>     writer.go:29: 2020-02-23T02:46:54.365Z [INFO]  TestAgent_StartStop.server: server starting leave
>     writer.go:29: 2020-02-23T02:46:54.365Z [INFO]  TestAgent_StartStop.server.serf.wan: serf: EventMemberLeave: Node-0976a694-2356-7769-7707-48587823c169.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:54.365Z [INFO]  TestAgent_StartStop.server: Handled event for server in area: event=member-leave server=Node-0976a694-2356-7769-7707-48587823c169.dc1 area=wan
>     writer.go:29: 2020-02-23T02:46:54.365Z [INFO]  TestAgent_StartStop.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:46:54.397Z [DEBUG] TestAgent_StartStop: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:54.400Z [INFO]  TestAgent_StartStop: Synced node info
>     writer.go:29: 2020-02-23T02:46:56.172Z [DEBUG] TestAgent_StartStop.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:46:56.576Z [DEBUG] TestAgent_StartStop: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:46:56.576Z [DEBUG] TestAgent_StartStop: Node info in sync
>     writer.go:29: 2020-02-23T02:46:56.576Z [DEBUG] TestAgent_StartStop: Node info in sync
>     writer.go:29: 2020-02-23T02:46:57.365Z [INFO]  TestAgent_StartStop.server.serf.lan: serf: EventMemberLeave: Node-0976a694-2356-7769-7707-48587823c169 127.0.0.1
>     writer.go:29: 2020-02-23T02:46:57.365Z [INFO]  TestAgent_StartStop.server: Removing LAN server: server="Node-0976a694-2356-7769-7707-48587823c169 (Addr: tcp/127.0.0.1:16186) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:46:57.365Z [WARN]  TestAgent_StartStop.server: deregistering self should be done by follower: name=Node-0976a694-2356-7769-7707-48587823c169
>     writer.go:29: 2020-02-23T02:46:58.170Z [ERROR] TestAgent_StartStop.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:00.170Z [ERROR] TestAgent_StartStop.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:00.365Z [INFO]  TestAgent_StartStop.server: Waiting to drain RPC traffic: drain_time=5s
>     writer.go:29: 2020-02-23T02:47:02.170Z [ERROR] TestAgent_StartStop.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:04.170Z [ERROR] TestAgent_StartStop.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:04.170Z [ERROR] TestAgent_StartStop.server.autopilot: Error promoting servers: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:05.365Z [INFO]  TestAgent_StartStop: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:05.365Z [INFO]  TestAgent_StartStop.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:05.365Z [DEBUG] TestAgent_StartStop.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.365Z [DEBUG] TestAgent_StartStop.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: consul server down
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: shutdown complete
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: Stopping server: protocol=DNS address=127.0.0.1:16181 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: Stopping server: protocol=DNS address=127.0.0.1:16181 network=udp
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: Stopping server: protocol=HTTP address=127.0.0.1:16182 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:05.400Z [INFO]  TestAgent_StartStop: Endpoints down
> === CONT  TestAgent_Join
> --- PASS: TestAgent_Join_WAN (0.67s)
>     writer.go:29: 2020-02-23T02:47:05.073Z [WARN]  TestAgent_Join_WAN: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:05.073Z [DEBUG] TestAgent_Join_WAN.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:05.074Z [DEBUG] TestAgent_Join_WAN.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.083Z [INFO]  TestAgent_Join_WAN.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:be30573a-c5f6-ee7b-2d50-1c8f690d3efb Address:127.0.0.1:16744}]"
>     writer.go:29: 2020-02-23T02:47:05.083Z [INFO]  TestAgent_Join_WAN.server.raft: entering follower state: follower="Node at 127.0.0.1:16744 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:05.084Z [INFO]  TestAgent_Join_WAN.server.serf.wan: serf: EventMemberJoin: Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.084Z [INFO]  TestAgent_Join_WAN.server.serf.lan: serf: EventMemberJoin: Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.084Z [INFO]  TestAgent_Join_WAN.server: Handled event for server in area: event=member-join server=Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.084Z [INFO]  TestAgent_Join_WAN.server: Adding LAN server: server="Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb (Addr: tcp/127.0.0.1:16744) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.085Z [INFO]  TestAgent_Join_WAN: Started DNS server: address=127.0.0.1:16739 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.085Z [INFO]  TestAgent_Join_WAN: Started DNS server: address=127.0.0.1:16739 network=udp
>     writer.go:29: 2020-02-23T02:47:05.085Z [INFO]  TestAgent_Join_WAN: Started HTTP server: address=127.0.0.1:16740 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.085Z [INFO]  TestAgent_Join_WAN: started state syncer
>     writer.go:29: 2020-02-23T02:47:05.131Z [WARN]  TestAgent_Join_WAN.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:05.131Z [INFO]  TestAgent_Join_WAN.server.raft: entering candidate state: node="Node at 127.0.0.1:16744 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:05.193Z [DEBUG] TestAgent_Join_WAN.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:05.193Z [DEBUG] TestAgent_Join_WAN.server.raft: vote granted: from=be30573a-c5f6-ee7b-2d50-1c8f690d3efb term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:05.193Z [INFO]  TestAgent_Join_WAN.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:05.193Z [INFO]  TestAgent_Join_WAN.server.raft: entering leader state: leader="Node at 127.0.0.1:16744 [Leader]"
>     writer.go:29: 2020-02-23T02:47:05.193Z [INFO]  TestAgent_Join_WAN.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:05.193Z [INFO]  TestAgent_Join_WAN.server: New leader elected: payload=Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb
>     writer.go:29: 2020-02-23T02:47:05.208Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:05.216Z [INFO]  TestAgent_Join_WAN.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:05.216Z [INFO]  TestAgent_Join_WAN.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.216Z [DEBUG] TestAgent_Join_WAN.server: Skipping self join check for node since the cluster is too small: node=Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb
>     writer.go:29: 2020-02-23T02:47:05.216Z [INFO]  TestAgent_Join_WAN.server: member joined, marking health alive: member=Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb
>     writer.go:29: 2020-02-23T02:47:05.401Z [WARN]  TestAgent_Join_WAN: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:05.401Z [DEBUG] TestAgent_Join_WAN.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:05.401Z [DEBUG] TestAgent_Join_WAN.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.419Z [DEBUG] TestAgent_Join_WAN: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:05.450Z [INFO]  TestAgent_Join_WAN: Synced node info
>     writer.go:29: 2020-02-23T02:47:05.463Z [INFO]  TestAgent_Join_WAN.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:d121e5a7-f336-e579-c532-c2ca87b4fc52 Address:127.0.0.1:16732}]"
>     writer.go:29: 2020-02-23T02:47:05.463Z [INFO]  TestAgent_Join_WAN.server.raft: entering follower state: follower="Node at 127.0.0.1:16732 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join_WAN.server.serf.wan: serf: EventMemberJoin: Node-d121e5a7-f336-e579-c532-c2ca87b4fc52.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join_WAN.server.serf.lan: serf: EventMemberJoin: Node-d121e5a7-f336-e579-c532-c2ca87b4fc52 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join_WAN.server: Handled event for server in area: event=member-join server=Node-d121e5a7-f336-e579-c532-c2ca87b4fc52.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join_WAN.server: Adding LAN server: server="Node-d121e5a7-f336-e579-c532-c2ca87b4fc52 (Addr: tcp/127.0.0.1:16732) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join_WAN: Started DNS server: address=127.0.0.1:16727 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join_WAN: Started DNS server: address=127.0.0.1:16727 network=udp
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join_WAN: Started HTTP server: address=127.0.0.1:16728 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.466Z [INFO]  TestAgent_Join_WAN: started state syncer
>     writer.go:29: 2020-02-23T02:47:05.530Z [WARN]  TestAgent_Join_WAN.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:05.530Z [INFO]  TestAgent_Join_WAN.server.raft: entering candidate state: node="Node at 127.0.0.1:16732 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:05.533Z [DEBUG] TestAgent_Join_WAN.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:05.533Z [DEBUG] TestAgent_Join_WAN.server.raft: vote granted: from=d121e5a7-f336-e579-c532-c2ca87b4fc52 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:05.533Z [INFO]  TestAgent_Join_WAN.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:05.533Z [INFO]  TestAgent_Join_WAN.server.raft: entering leader state: leader="Node at 127.0.0.1:16732 [Leader]"
>     writer.go:29: 2020-02-23T02:47:05.533Z [INFO]  TestAgent_Join_WAN.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:05.533Z [INFO]  TestAgent_Join_WAN.server: New leader elected: payload=Node-d121e5a7-f336-e579-c532-c2ca87b4fc52
>     writer.go:29: 2020-02-23T02:47:05.543Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:05.553Z [INFO]  TestAgent_Join_WAN.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:05.554Z [INFO]  TestAgent_Join_WAN.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.554Z [DEBUG] TestAgent_Join_WAN.server: Skipping self join check for node since the cluster is too small: node=Node-d121e5a7-f336-e579-c532-c2ca87b4fc52
>     writer.go:29: 2020-02-23T02:47:05.554Z [INFO]  TestAgent_Join_WAN.server: member joined, marking health alive: member=Node-d121e5a7-f336-e579-c532-c2ca87b4fc52
>     writer.go:29: 2020-02-23T02:47:05.710Z [DEBUG] TestAgent_Join_WAN: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:05.713Z [INFO]  TestAgent_Join_WAN: Synced node info
>     writer.go:29: 2020-02-23T02:47:05.726Z [INFO]  TestAgent_Join_WAN: (WAN) joining: wan_addresses=[127.0.0.1:16731]
>     writer.go:29: 2020-02-23T02:47:05.726Z [DEBUG] TestAgent_Join_WAN.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16731
>     writer.go:29: 2020-02-23T02:47:05.726Z [DEBUG] TestAgent_Join_WAN.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:44078
>     writer.go:29: 2020-02-23T02:47:05.726Z [INFO]  TestAgent_Join_WAN.server.serf.wan: serf: EventMemberJoin: Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.726Z [INFO]  TestAgent_Join_WAN.server: Handled event for server in area: event=member-join server=Node-be30573a-c5f6-ee7b-2d50-1c8f690d3efb.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.727Z [INFO]  TestAgent_Join_WAN.server.serf.wan: serf: EventMemberJoin: Node-d121e5a7-f336-e579-c532-c2ca87b4fc52.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.727Z [INFO]  TestAgent_Join_WAN: (WAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:47:05.727Z [INFO]  TestAgent_Join_WAN: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:05.727Z [INFO]  TestAgent_Join_WAN.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:05.727Z [DEBUG] TestAgent_Join_WAN.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.727Z [WARN]  TestAgent_Join_WAN.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.727Z [INFO]  TestAgent_Join_WAN.server: Handled event for server in area: event=member-join server=Node-d121e5a7-f336-e579-c532-c2ca87b4fc52.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.727Z [DEBUG] TestAgent_Join_WAN.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.729Z [WARN]  TestAgent_Join_WAN.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: consul server down
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: shutdown complete
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: Stopping server: protocol=DNS address=127.0.0.1:16727 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: Stopping server: protocol=DNS address=127.0.0.1:16727 network=udp
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: Stopping server: protocol=HTTP address=127.0.0.1:16728 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: Endpoints down
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:05.731Z [INFO]  TestAgent_Join_WAN.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:05.731Z [DEBUG] TestAgent_Join_WAN.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.731Z [WARN]  TestAgent_Join_WAN.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.731Z [DEBUG] TestAgent_Join_WAN.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.733Z [WARN]  TestAgent_Join_WAN.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: consul server down
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: shutdown complete
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: Stopping server: protocol=DNS address=127.0.0.1:16739 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: Stopping server: protocol=DNS address=127.0.0.1:16739 network=udp
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: Stopping server: protocol=HTTP address=127.0.0.1:16740 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:05.735Z [INFO]  TestAgent_Join_WAN: Endpoints down
> === CONT  TestAgent_Members_ACLFilter
> === RUN   TestAgent_Members_ACLFilter/no_token
> === RUN   TestAgent_Members_ACLFilter/root_token
> --- PASS: TestAgent_Members_ACLFilter (0.16s)
>     writer.go:29: 2020-02-23T02:47:05.743Z [WARN]  TestAgent_Members_ACLFilter: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:05.743Z [WARN]  TestAgent_Members_ACLFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:05.743Z [DEBUG] TestAgent_Members_ACLFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:05.743Z [DEBUG] TestAgent_Members_ACLFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.752Z [INFO]  TestAgent_Members_ACLFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7d8331ac-9aba-399b-dfe9-5807e7d2e318 Address:127.0.0.1:16756}]"
>     writer.go:29: 2020-02-23T02:47:05.752Z [INFO]  TestAgent_Members_ACLFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:16756 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:05.753Z [INFO]  TestAgent_Members_ACLFilter.server.serf.wan: serf: EventMemberJoin: Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.753Z [INFO]  TestAgent_Members_ACLFilter.server.serf.lan: serf: EventMemberJoin: Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.753Z [INFO]  TestAgent_Members_ACLFilter: Started DNS server: address=127.0.0.1:16751 network=udp
>     writer.go:29: 2020-02-23T02:47:05.754Z [INFO]  TestAgent_Members_ACLFilter.server: Adding LAN server: server="Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318 (Addr: tcp/127.0.0.1:16756) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.754Z [INFO]  TestAgent_Members_ACLFilter.server: Handled event for server in area: event=member-join server=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.754Z [INFO]  TestAgent_Members_ACLFilter: Started DNS server: address=127.0.0.1:16751 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.754Z [INFO]  TestAgent_Members_ACLFilter: Started HTTP server: address=127.0.0.1:16752 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.754Z [INFO]  TestAgent_Members_ACLFilter: started state syncer
>     writer.go:29: 2020-02-23T02:47:05.801Z [WARN]  TestAgent_Members_ACLFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:05.801Z [INFO]  TestAgent_Members_ACLFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:16756 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:05.804Z [DEBUG] TestAgent_Members_ACLFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:05.804Z [DEBUG] TestAgent_Members_ACLFilter.server.raft: vote granted: from=7d8331ac-9aba-399b-dfe9-5807e7d2e318 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:05.804Z [INFO]  TestAgent_Members_ACLFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:05.804Z [INFO]  TestAgent_Members_ACLFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:16756 [Leader]"
>     writer.go:29: 2020-02-23T02:47:05.804Z [INFO]  TestAgent_Members_ACLFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:05.805Z [INFO]  TestAgent_Members_ACLFilter.server: New leader elected: payload=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.807Z [INFO]  TestAgent_Members_ACLFilter.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:05.808Z [INFO]  TestAgent_Members_ACLFilter.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:05.808Z [WARN]  TestAgent_Members_ACLFilter.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:05.811Z [INFO]  TestAgent_Members_ACLFilter.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:05.814Z [INFO]  TestAgent_Members_ACLFilter.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:05.814Z [INFO]  TestAgent_Members_ACLFilter.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:05.814Z [INFO]  TestAgent_Members_ACLFilter.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:05.814Z [INFO]  TestAgent_Members_ACLFilter.server.serf.lan: serf: EventMemberUpdate: Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.814Z [INFO]  TestAgent_Members_ACLFilter.server.serf.wan: serf: EventMemberUpdate: Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318.dc1
>     writer.go:29: 2020-02-23T02:47:05.815Z [INFO]  TestAgent_Members_ACLFilter.server: Handled event for server in area: event=member-update server=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.819Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:05.826Z [INFO]  TestAgent_Members_ACLFilter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:05.826Z [INFO]  TestAgent_Members_ACLFilter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.826Z [DEBUG] TestAgent_Members_ACLFilter.server: Skipping self join check for node since the cluster is too small: node=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.826Z [INFO]  TestAgent_Members_ACLFilter.server: member joined, marking health alive: member=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.829Z [DEBUG] TestAgent_Members_ACLFilter.server: Skipping self join check for node since the cluster is too small: node=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.892Z [DEBUG] TestAgent_Members_ACLFilter.acl: dropping node from result due to ACLs: node=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.892Z [DEBUG] TestAgent_Members_ACLFilter.acl: dropping node from result due to ACLs: node=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318
>     writer.go:29: 2020-02-23T02:47:05.892Z [DEBUG] TestAgent_Members_ACLFilter: dropping node from result due to ACLs: node=Node-7d8331ac-9aba-399b-dfe9-5807e7d2e318 accessorID=
>     --- PASS: TestAgent_Members_ACLFilter/no_token (0.00s)
>     --- PASS: TestAgent_Members_ACLFilter/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:05.893Z [INFO]  TestAgent_Members_ACLFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:05.893Z [INFO]  TestAgent_Members_ACLFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:05.893Z [DEBUG] TestAgent_Members_ACLFilter.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:05.893Z [DEBUG] TestAgent_Members_ACLFilter.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:05.893Z [DEBUG] TestAgent_Members_ACLFilter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.893Z [WARN]  TestAgent_Members_ACLFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.893Z [ERROR] TestAgent_Members_ACLFilter.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:05.893Z [DEBUG] TestAgent_Members_ACLFilter.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:05.893Z [DEBUG] TestAgent_Members_ACLFilter.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:05.893Z [DEBUG] TestAgent_Members_ACLFilter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.895Z [WARN]  TestAgent_Members_ACLFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: consul server down
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: Stopping server: protocol=DNS address=127.0.0.1:16751 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: Stopping server: protocol=DNS address=127.0.0.1:16751 network=udp
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: Stopping server: protocol=HTTP address=127.0.0.1:16752 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:05.897Z [INFO]  TestAgent_Members_ACLFilter: Endpoints down
> === CONT  TestAgent_Members_WAN
> --- PASS: TestAgent_Join (0.64s)
>     writer.go:29: 2020-02-23T02:47:05.405Z [WARN]  TestAgent_Join: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:05.405Z [DEBUG] TestAgent_Join.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:05.406Z [DEBUG] TestAgent_Join.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.463Z [INFO]  TestAgent_Join.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e479d3e5-c09a-9b4c-af7d-9ff9ad572127 Address:127.0.0.1:16750}]"
>     writer.go:29: 2020-02-23T02:47:05.463Z [INFO]  TestAgent_Join.server.raft: entering follower state: follower="Node at 127.0.0.1:16750 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join.server.serf.wan: serf: EventMemberJoin: Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join.server.serf.lan: serf: EventMemberJoin: Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join.server: Adding LAN server: server="Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127 (Addr: tcp/127.0.0.1:16750) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join: Started DNS server: address=127.0.0.1:16745 network=udp
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join.server: Handled event for server in area: event=member-join server=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.464Z [INFO]  TestAgent_Join: Started DNS server: address=127.0.0.1:16745 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join: Started HTTP server: address=127.0.0.1:16746 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.465Z [INFO]  TestAgent_Join: started state syncer
>     writer.go:29: 2020-02-23T02:47:05.527Z [WARN]  TestAgent_Join.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:05.527Z [INFO]  TestAgent_Join.server.raft: entering candidate state: node="Node at 127.0.0.1:16750 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:05.530Z [DEBUG] TestAgent_Join.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:05.530Z [DEBUG] TestAgent_Join.server.raft: vote granted: from=e479d3e5-c09a-9b4c-af7d-9ff9ad572127 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:05.530Z [INFO]  TestAgent_Join.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:05.530Z [INFO]  TestAgent_Join.server.raft: entering leader state: leader="Node at 127.0.0.1:16750 [Leader]"
>     writer.go:29: 2020-02-23T02:47:05.530Z [INFO]  TestAgent_Join.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:05.530Z [INFO]  TestAgent_Join.server: New leader elected: payload=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127
>     writer.go:29: 2020-02-23T02:47:05.539Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:05.549Z [INFO]  TestAgent_Join.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:05.549Z [INFO]  TestAgent_Join.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.549Z [DEBUG] TestAgent_Join.server: Skipping self join check for node since the cluster is too small: node=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127
>     writer.go:29: 2020-02-23T02:47:05.549Z [INFO]  TestAgent_Join.server: member joined, marking health alive: member=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127
>     writer.go:29: 2020-02-23T02:47:05.600Z [WARN]  TestAgent_Join: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:05.600Z [DEBUG] TestAgent_Join.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:05.600Z [DEBUG] TestAgent_Join.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.613Z [INFO]  TestAgent_Join.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:0531a4ba-e31e-799b-d034-deac1c7b3181 Address:127.0.0.1:16762}]"
>     writer.go:29: 2020-02-23T02:47:05.613Z [INFO]  TestAgent_Join.server.raft: entering follower state: follower="Node at 127.0.0.1:16762 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:05.614Z [INFO]  TestAgent_Join.server.serf.wan: serf: EventMemberJoin: Node-0531a4ba-e31e-799b-d034-deac1c7b3181.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.614Z [INFO]  TestAgent_Join.server.serf.lan: serf: EventMemberJoin: Node-0531a4ba-e31e-799b-d034-deac1c7b3181 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.614Z [INFO]  TestAgent_Join.server: Adding LAN server: server="Node-0531a4ba-e31e-799b-d034-deac1c7b3181 (Addr: tcp/127.0.0.1:16762) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.614Z [INFO]  TestAgent_Join: Started DNS server: address=127.0.0.1:16757 network=udp
>     writer.go:29: 2020-02-23T02:47:05.614Z [INFO]  TestAgent_Join.server: Handled event for server in area: event=member-join server=Node-0531a4ba-e31e-799b-d034-deac1c7b3181.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.614Z [INFO]  TestAgent_Join: Started DNS server: address=127.0.0.1:16757 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.615Z [INFO]  TestAgent_Join: Started HTTP server: address=127.0.0.1:16758 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.615Z [INFO]  TestAgent_Join: started state syncer
>     writer.go:29: 2020-02-23T02:47:05.627Z [DEBUG] TestAgent_Join: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:05.630Z [INFO]  TestAgent_Join: Synced node info
>     writer.go:29: 2020-02-23T02:47:05.667Z [WARN]  TestAgent_Join.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:05.667Z [INFO]  TestAgent_Join.server.raft: entering candidate state: node="Node at 127.0.0.1:16762 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:05.670Z [DEBUG] TestAgent_Join.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:05.670Z [DEBUG] TestAgent_Join.server.raft: vote granted: from=0531a4ba-e31e-799b-d034-deac1c7b3181 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:05.670Z [INFO]  TestAgent_Join.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:05.670Z [INFO]  TestAgent_Join.server.raft: entering leader state: leader="Node at 127.0.0.1:16762 [Leader]"
>     writer.go:29: 2020-02-23T02:47:05.670Z [INFO]  TestAgent_Join.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:05.671Z [INFO]  TestAgent_Join.server: New leader elected: payload=Node-0531a4ba-e31e-799b-d034-deac1c7b3181
>     writer.go:29: 2020-02-23T02:47:05.678Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:05.687Z [INFO]  TestAgent_Join.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:05.687Z [INFO]  TestAgent_Join.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:05.687Z [DEBUG] TestAgent_Join.server: Skipping self join check for node since the cluster is too small: node=Node-0531a4ba-e31e-799b-d034-deac1c7b3181
>     writer.go:29: 2020-02-23T02:47:05.687Z [INFO]  TestAgent_Join.server: member joined, marking health alive: member=Node-0531a4ba-e31e-799b-d034-deac1c7b3181
>     writer.go:29: 2020-02-23T02:47:05.759Z [DEBUG] TestAgent_Join: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:05.762Z [INFO]  TestAgent_Join: Synced node info
>     writer.go:29: 2020-02-23T02:47:05.762Z [DEBUG] TestAgent_Join: Node info in sync
>     writer.go:29: 2020-02-23T02:47:05.803Z [DEBUG] TestAgent_Join: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:05.803Z [DEBUG] TestAgent_Join: Node info in sync
>     writer.go:29: 2020-02-23T02:47:06.029Z [INFO]  TestAgent_Join: (LAN) joining: lan_addresses=[127.0.0.1:16760]
>     writer.go:29: 2020-02-23T02:47:06.029Z [DEBUG] TestAgent_Join.server.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:16760
>     writer.go:29: 2020-02-23T02:47:06.029Z [DEBUG] TestAgent_Join.server.memberlist.lan: memberlist: Stream connection from=127.0.0.1:43842
>     writer.go:29: 2020-02-23T02:47:06.029Z [INFO]  TestAgent_Join.server.serf.lan: serf: EventMemberJoin: Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.029Z [INFO]  TestAgent_Join.server: Adding LAN server: server="Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127 (Addr: tcp/127.0.0.1:16750) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.030Z [INFO]  TestAgent_Join.server: New leader elected: payload=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127
>     writer.go:29: 2020-02-23T02:47:06.030Z [ERROR] TestAgent_Join.server: Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.: node_to_add=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127 other=Node-0531a4ba-e31e-799b-d034-deac1c7b3181
>     writer.go:29: 2020-02-23T02:47:06.030Z [INFO]  TestAgent_Join.server: member joined, marking health alive: member=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127
>     writer.go:29: 2020-02-23T02:47:06.030Z [DEBUG] TestAgent_Join.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16749
>     writer.go:29: 2020-02-23T02:47:06.030Z [DEBUG] TestAgent_Join.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:57966
>     writer.go:29: 2020-02-23T02:47:06.030Z [INFO]  TestAgent_Join.server.serf.wan: serf: EventMemberJoin: Node-0531a4ba-e31e-799b-d034-deac1c7b3181.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.030Z [INFO]  TestAgent_Join.server: Handled event for server in area: event=member-join server=Node-0531a4ba-e31e-799b-d034-deac1c7b3181.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join.server.serf.lan: serf: EventMemberJoin: Node-0531a4ba-e31e-799b-d034-deac1c7b3181 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:47:06.031Z [DEBUG] TestAgent_Join: systemd notify failed: error="No socket"
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.031Z [DEBUG] TestAgent_Join.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.031Z [WARN]  TestAgent_Join.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join.server: Adding LAN server: server="Node-0531a4ba-e31e-799b-d034-deac1c7b3181 (Addr: tcp/127.0.0.1:16762) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.031Z [ERROR] TestAgent_Join.server: Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.: node_to_add=Node-0531a4ba-e31e-799b-d034-deac1c7b3181 other=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join.server: member joined, marking health alive: member=Node-0531a4ba-e31e-799b-d034-deac1c7b3181
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join.server.serf.wan: serf: EventMemberJoin: Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.031Z [DEBUG] TestAgent_Join.server: Successfully performed flood-join for server at address: server=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127 address=127.0.0.1:16749
>     writer.go:29: 2020-02-23T02:47:06.031Z [INFO]  TestAgent_Join.server: Handled event for server in area: event=member-join server=Node-e479d3e5-c09a-9b4c-af7d-9ff9ad572127.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.031Z [DEBUG] TestAgent_Join.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.034Z [WARN]  TestAgent_Join.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: consul server down
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: Stopping server: protocol=DNS address=127.0.0.1:16757 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: Stopping server: protocol=DNS address=127.0.0.1:16757 network=udp
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: Stopping server: protocol=HTTP address=127.0.0.1:16758 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: Endpoints down
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.036Z [INFO]  TestAgent_Join.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.036Z [DEBUG] TestAgent_Join.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.036Z [WARN]  TestAgent_Join.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.036Z [DEBUG] TestAgent_Join.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.038Z [WARN]  TestAgent_Join.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.039Z [INFO]  TestAgent_Join.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: consul server down
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: Stopping server: protocol=DNS address=127.0.0.1:16745 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: Stopping server: protocol=DNS address=127.0.0.1:16745 network=udp
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: Stopping server: protocol=HTTP address=127.0.0.1:16746 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:06.040Z [INFO]  TestAgent_Join: Endpoints down
> === CONT  TestAgent_Members
> --- PASS: TestAgent_Members_WAN (0.15s)
>     writer.go:29: 2020-02-23T02:47:05.905Z [WARN]  TestAgent_Members_WAN: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:05.905Z [DEBUG] TestAgent_Members_WAN.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:05.906Z [DEBUG] TestAgent_Members_WAN.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.919Z [INFO]  TestAgent_Members_WAN.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:e8410a28-7a72-cb83-7d8e-681a598cddb5 Address:127.0.0.1:16768}]"
>     writer.go:29: 2020-02-23T02:47:05.919Z [INFO]  TestAgent_Members_WAN.server.raft: entering follower state: follower="Node at 127.0.0.1:16768 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:05.919Z [INFO]  TestAgent_Members_WAN.server.serf.wan: serf: EventMemberJoin: Node-e8410a28-7a72-cb83-7d8e-681a598cddb5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.920Z [INFO]  TestAgent_Members_WAN.server.serf.lan: serf: EventMemberJoin: Node-e8410a28-7a72-cb83-7d8e-681a598cddb5 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:05.920Z [INFO]  TestAgent_Members_WAN.server: Adding LAN server: server="Node-e8410a28-7a72-cb83-7d8e-681a598cddb5 (Addr: tcp/127.0.0.1:16768) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:05.920Z [INFO]  TestAgent_Members_WAN: Started DNS server: address=127.0.0.1:16763 network=udp
>     writer.go:29: 2020-02-23T02:47:05.920Z [INFO]  TestAgent_Members_WAN.server: Handled event for server in area: event=member-join server=Node-e8410a28-7a72-cb83-7d8e-681a598cddb5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:05.920Z [INFO]  TestAgent_Members_WAN: Started DNS server: address=127.0.0.1:16763 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.921Z [INFO]  TestAgent_Members_WAN: Started HTTP server: address=127.0.0.1:16764 network=tcp
>     writer.go:29: 2020-02-23T02:47:05.921Z [INFO]  TestAgent_Members_WAN: started state syncer
>     writer.go:29: 2020-02-23T02:47:05.985Z [WARN]  TestAgent_Members_WAN.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:05.985Z [INFO]  TestAgent_Members_WAN.server.raft: entering candidate state: node="Node at 127.0.0.1:16768 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:05.989Z [DEBUG] TestAgent_Members_WAN.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:05.989Z [DEBUG] TestAgent_Members_WAN.server.raft: vote granted: from=e8410a28-7a72-cb83-7d8e-681a598cddb5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:05.989Z [INFO]  TestAgent_Members_WAN.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:05.989Z [INFO]  TestAgent_Members_WAN.server.raft: entering leader state: leader="Node at 127.0.0.1:16768 [Leader]"
>     writer.go:29: 2020-02-23T02:47:05.989Z [INFO]  TestAgent_Members_WAN.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:05.989Z [INFO]  TestAgent_Members_WAN.server: New leader elected: payload=Node-e8410a28-7a72-cb83-7d8e-681a598cddb5
>     writer.go:29: 2020-02-23T02:47:05.997Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:06.006Z [INFO]  TestAgent_Members_WAN.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:06.006Z [INFO]  TestAgent_Members_WAN.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.006Z [DEBUG] TestAgent_Members_WAN.server: Skipping self join check for node since the cluster is too small: node=Node-e8410a28-7a72-cb83-7d8e-681a598cddb5
>     writer.go:29: 2020-02-23T02:47:06.006Z [INFO]  TestAgent_Members_WAN.server: member joined, marking health alive: member=Node-e8410a28-7a72-cb83-7d8e-681a598cddb5
>     writer.go:29: 2020-02-23T02:47:06.042Z [INFO]  TestAgent_Members_WAN: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.042Z [INFO]  TestAgent_Members_WAN.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.042Z [DEBUG] TestAgent_Members_WAN.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.042Z [WARN]  TestAgent_Members_WAN.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.042Z [ERROR] TestAgent_Members_WAN.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:06.043Z [DEBUG] TestAgent_Members_WAN.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.045Z [WARN]  TestAgent_Members_WAN.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: consul server down
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: Stopping server: protocol=DNS address=127.0.0.1:16763 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: Stopping server: protocol=DNS address=127.0.0.1:16763 network=udp
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: Stopping server: protocol=HTTP address=127.0.0.1:16764 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:06.046Z [INFO]  TestAgent_Members_WAN: Endpoints down
> === CONT  TestAgent_Reload_ACLDeny
> --- PASS: TestAgent_Members (0.20s)
>     writer.go:29: 2020-02-23T02:47:06.056Z [WARN]  TestAgent_Members: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:06.058Z [DEBUG] TestAgent_Members.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:06.058Z [DEBUG] TestAgent_Members.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:06.070Z [INFO]  TestAgent_Members.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a9a8fb53-e37d-a269-1653-6dc3d0e3b914 Address:127.0.0.1:16774}]"
>     writer.go:29: 2020-02-23T02:47:06.070Z [INFO]  TestAgent_Members.server.raft: entering follower state: follower="Node at 127.0.0.1:16774 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:06.070Z [INFO]  TestAgent_Members.server.serf.wan: serf: EventMemberJoin: Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members.server.serf.lan: serf: EventMemberJoin: Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members.server: Handled event for server in area: event=member-join server=Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members.server: Adding LAN server: server="Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914 (Addr: tcp/127.0.0.1:16774) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members: Started DNS server: address=127.0.0.1:16769 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members: Started DNS server: address=127.0.0.1:16769 network=udp
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members: Started HTTP server: address=127.0.0.1:16770 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.071Z [INFO]  TestAgent_Members: started state syncer
>     writer.go:29: 2020-02-23T02:47:06.135Z [WARN]  TestAgent_Members.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:06.135Z [INFO]  TestAgent_Members.server.raft: entering candidate state: node="Node at 127.0.0.1:16774 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:06.141Z [DEBUG] TestAgent_Members.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:06.141Z [DEBUG] TestAgent_Members.server.raft: vote granted: from=a9a8fb53-e37d-a269-1653-6dc3d0e3b914 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:06.141Z [INFO]  TestAgent_Members.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:06.141Z [INFO]  TestAgent_Members.server.raft: entering leader state: leader="Node at 127.0.0.1:16774 [Leader]"
>     writer.go:29: 2020-02-23T02:47:06.141Z [INFO]  TestAgent_Members.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:06.141Z [INFO]  TestAgent_Members.server: New leader elected: payload=Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914
>     writer.go:29: 2020-02-23T02:47:06.149Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:06.158Z [INFO]  TestAgent_Members.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:06.158Z [INFO]  TestAgent_Members.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.158Z [DEBUG] TestAgent_Members.server: Skipping self join check for node since the cluster is too small: node=Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914
>     writer.go:29: 2020-02-23T02:47:06.158Z [INFO]  TestAgent_Members.server: member joined, marking health alive: member=Node-a9a8fb53-e37d-a269-1653-6dc3d0e3b914
>     writer.go:29: 2020-02-23T02:47:06.237Z [INFO]  TestAgent_Members: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.237Z [INFO]  TestAgent_Members.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.237Z [DEBUG] TestAgent_Members.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.237Z [WARN]  TestAgent_Members.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.237Z [ERROR] TestAgent_Members.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:06.237Z [DEBUG] TestAgent_Members.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.239Z [WARN]  TestAgent_Members.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.241Z [INFO]  TestAgent_Members.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.241Z [INFO]  TestAgent_Members: consul server down
>     writer.go:29: 2020-02-23T02:47:06.241Z [INFO]  TestAgent_Members: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.241Z [INFO]  TestAgent_Members: Stopping server: protocol=DNS address=127.0.0.1:16769 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.242Z [INFO]  TestAgent_Members: Stopping server: protocol=DNS address=127.0.0.1:16769 network=udp
>     writer.go:29: 2020-02-23T02:47:06.242Z [INFO]  TestAgent_Members: Stopping server: protocol=HTTP address=127.0.0.1:16770 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.242Z [INFO]  TestAgent_Members: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:06.242Z [INFO]  TestAgent_Members: Endpoints down
> === CONT  TestAgent_Reload
> === RUN   TestAgent_Reload_ACLDeny/no_token
> === RUN   TestAgent_Reload_ACLDeny/read-only_token
> --- PASS: TestAgent_Reload_ACLDeny (0.36s)
>     writer.go:29: 2020-02-23T02:47:06.057Z [WARN]  TestAgent_Reload_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:06.057Z [WARN]  TestAgent_Reload_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:06.057Z [DEBUG] TestAgent_Reload_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:06.057Z [DEBUG] TestAgent_Reload_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:06.073Z [INFO]  TestAgent_Reload_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:27f1d62b-9fbe-7e4e-850e-767e5ba9c62a Address:127.0.0.1:16582}]"
>     writer.go:29: 2020-02-23T02:47:06.073Z [INFO]  TestAgent_Reload_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16582 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:06.073Z [INFO]  TestAgent_Reload_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.073Z [INFO]  TestAgent_Reload_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.074Z [INFO]  TestAgent_Reload_ACLDeny.server: Adding LAN server: server="Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a (Addr: tcp/127.0.0.1:16582) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.074Z [INFO]  TestAgent_Reload_ACLDeny: Started DNS server: address=127.0.0.1:16577 network=udp
>     writer.go:29: 2020-02-23T02:47:06.074Z [INFO]  TestAgent_Reload_ACLDeny.server: Handled event for server in area: event=member-join server=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.074Z [INFO]  TestAgent_Reload_ACLDeny: Started DNS server: address=127.0.0.1:16577 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.074Z [INFO]  TestAgent_Reload_ACLDeny: Started HTTP server: address=127.0.0.1:16578 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.074Z [INFO]  TestAgent_Reload_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:06.111Z [WARN]  TestAgent_Reload_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:06.112Z [INFO]  TestAgent_Reload_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16582 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:06.115Z [DEBUG] TestAgent_Reload_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:06.115Z [DEBUG] TestAgent_Reload_ACLDeny.server.raft: vote granted: from=27f1d62b-9fbe-7e4e-850e-767e5ba9c62a term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:06.115Z [INFO]  TestAgent_Reload_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:06.115Z [INFO]  TestAgent_Reload_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16582 [Leader]"
>     writer.go:29: 2020-02-23T02:47:06.115Z [INFO]  TestAgent_Reload_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:06.115Z [INFO]  TestAgent_Reload_ACLDeny.server: New leader elected: payload=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.117Z [INFO]  TestAgent_Reload_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:06.119Z [INFO]  TestAgent_Reload_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:06.119Z [WARN]  TestAgent_Reload_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:06.122Z [INFO]  TestAgent_Reload_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:06.126Z [INFO]  TestAgent_Reload_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:06.126Z [WARN]  TestAgent_Reload_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:06.126Z [INFO]  TestAgent_Reload_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:06.126Z [INFO]  TestAgent_Reload_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.126Z [INFO]  TestAgent_Reload_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.126Z [INFO]  TestAgent_Reload_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.126Z [INFO]  TestAgent_Reload_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a.dc1
>     writer.go:29: 2020-02-23T02:47:06.127Z [INFO]  TestAgent_Reload_ACLDeny.server: Handled event for server in area: event=member-update server=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.129Z [INFO]  TestAgent_Reload_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:06.129Z [DEBUG] TestAgent_Reload_ACLDeny.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:06.129Z [INFO]  TestAgent_Reload_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.129Z [INFO]  TestAgent_Reload_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a.dc1
>     writer.go:29: 2020-02-23T02:47:06.129Z [INFO]  TestAgent_Reload_ACLDeny.server: Handled event for server in area: event=member-update server=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.133Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:06.140Z [INFO]  TestAgent_Reload_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:06.140Z [INFO]  TestAgent_Reload_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.140Z [DEBUG] TestAgent_Reload_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.140Z [INFO]  TestAgent_Reload_ACLDeny.server: member joined, marking health alive: member=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.142Z [DEBUG] TestAgent_Reload_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.142Z [DEBUG] TestAgent_Reload_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.315Z [DEBUG] TestAgent_Reload_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:06.318Z [INFO]  TestAgent_Reload_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:06.318Z [DEBUG] TestAgent_Reload_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:06.397Z [DEBUG] TestAgent_Reload_ACLDeny.acl: dropping node from result due to ACLs: node=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     writer.go:29: 2020-02-23T02:47:06.397Z [DEBUG] TestAgent_Reload_ACLDeny.acl: dropping node from result due to ACLs: node=Node-27f1d62b-9fbe-7e4e-850e-767e5ba9c62a
>     --- PASS: TestAgent_Reload_ACLDeny/no_token (0.00s)
>     --- PASS: TestAgent_Reload_ACLDeny/read-only_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:06.399Z [INFO]  TestAgent_Reload_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.399Z [INFO]  TestAgent_Reload_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.399Z [DEBUG] TestAgent_Reload_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.399Z [DEBUG] TestAgent_Reload_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.399Z [DEBUG] TestAgent_Reload_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.399Z [WARN]  TestAgent_Reload_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.399Z [DEBUG] TestAgent_Reload_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.399Z [DEBUG] TestAgent_Reload_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.399Z [DEBUG] TestAgent_Reload_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.401Z [WARN]  TestAgent_Reload_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16577 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16577 network=udp
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16578 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:06.403Z [INFO]  TestAgent_Reload_ACLDeny: Endpoints down
> === CONT  TestAgent_Metrics_ACLDeny
> === RUN   TestAgent_Metrics_ACLDeny/no_token
> === RUN   TestAgent_Metrics_ACLDeny/agent_master_token
> === RUN   TestAgent_Metrics_ACLDeny/read-only_token
> --- PASS: TestAgent_Metrics_ACLDeny (0.41s)
>     writer.go:29: 2020-02-23T02:47:06.411Z [WARN]  TestAgent_Metrics_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:06.411Z [WARN]  TestAgent_Metrics_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:06.411Z [DEBUG] TestAgent_Metrics_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:06.411Z [DEBUG] TestAgent_Metrics_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:06.437Z [INFO]  TestAgent_Metrics_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:7d164a43-e720-ba71-968b-2e3ab71b6a2d Address:127.0.0.1:16786}]"
>     writer.go:29: 2020-02-23T02:47:06.438Z [INFO]  TestAgent_Metrics_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.438Z [INFO]  TestAgent_Metrics_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.438Z [INFO]  TestAgent_Metrics_ACLDeny: Started DNS server: address=127.0.0.1:16781 network=udp
>     writer.go:29: 2020-02-23T02:47:06.438Z [INFO]  TestAgent_Metrics_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16786 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:06.438Z [INFO]  TestAgent_Metrics_ACLDeny.server: Adding LAN server: server="Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d (Addr: tcp/127.0.0.1:16786) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.438Z [INFO]  TestAgent_Metrics_ACLDeny.server: Handled event for server in area: event=member-join server=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.439Z [INFO]  TestAgent_Metrics_ACLDeny: Started DNS server: address=127.0.0.1:16781 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.439Z [INFO]  TestAgent_Metrics_ACLDeny: Started HTTP server: address=127.0.0.1:16782 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.439Z [INFO]  TestAgent_Metrics_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:06.500Z [WARN]  TestAgent_Metrics_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:06.500Z [INFO]  TestAgent_Metrics_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16786 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:06.503Z [DEBUG] TestAgent_Metrics_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:06.503Z [DEBUG] TestAgent_Metrics_ACLDeny.server.raft: vote granted: from=7d164a43-e720-ba71-968b-2e3ab71b6a2d term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:06.503Z [INFO]  TestAgent_Metrics_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:06.503Z [INFO]  TestAgent_Metrics_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16786 [Leader]"
>     writer.go:29: 2020-02-23T02:47:06.504Z [INFO]  TestAgent_Metrics_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:06.504Z [INFO]  TestAgent_Metrics_ACLDeny.server: New leader elected: payload=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Metrics_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:06.508Z [INFO]  TestAgent_Metrics_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:06.508Z [WARN]  TestAgent_Metrics_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:06.511Z [INFO]  TestAgent_Metrics_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:06.515Z [INFO]  TestAgent_Metrics_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:06.515Z [INFO]  TestAgent_Metrics_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.515Z [INFO]  TestAgent_Metrics_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.515Z [INFO]  TestAgent_Metrics_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     writer.go:29: 2020-02-23T02:47:06.515Z [INFO]  TestAgent_Metrics_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d.dc1
>     writer.go:29: 2020-02-23T02:47:06.515Z [INFO]  TestAgent_Metrics_ACLDeny.server: Handled event for server in area: event=member-update server=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.519Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:06.526Z [INFO]  TestAgent_Metrics_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:06.526Z [INFO]  TestAgent_Metrics_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.526Z [DEBUG] TestAgent_Metrics_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     writer.go:29: 2020-02-23T02:47:06.526Z [INFO]  TestAgent_Metrics_ACLDeny.server: member joined, marking health alive: member=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     writer.go:29: 2020-02-23T02:47:06.529Z [DEBUG] TestAgent_Metrics_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     writer.go:29: 2020-02-23T02:47:06.565Z [DEBUG] TestAgent_Metrics_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:06.568Z [INFO]  TestAgent_Metrics_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:06.568Z [DEBUG] TestAgent_Metrics_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:06.569Z [DEBUG] TestAgent_Metrics_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:06.570Z [DEBUG] TestAgent_Metrics_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:06.803Z [DEBUG] TestAgent_Metrics_ACLDeny.acl: dropping node from result due to ACLs: node=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     writer.go:29: 2020-02-23T02:47:06.803Z [DEBUG] TestAgent_Metrics_ACLDeny.acl: dropping node from result due to ACLs: node=Node-7d164a43-e720-ba71-968b-2e3ab71b6a2d
>     --- PASS: TestAgent_Metrics_ACLDeny/no_token (0.00s)
>     --- PASS: TestAgent_Metrics_ACLDeny/agent_master_token (0.00s)
>     --- PASS: TestAgent_Metrics_ACLDeny/read-only_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:06.807Z [INFO]  TestAgent_Metrics_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.807Z [INFO]  TestAgent_Metrics_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.807Z [DEBUG] TestAgent_Metrics_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.808Z [DEBUG] TestAgent_Metrics_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.808Z [DEBUG] TestAgent_Metrics_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.808Z [WARN]  TestAgent_Metrics_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.808Z [DEBUG] TestAgent_Metrics_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.808Z [DEBUG] TestAgent_Metrics_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.808Z [DEBUG] TestAgent_Metrics_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.809Z [WARN]  TestAgent_Metrics_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16781 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16781 network=udp
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16782 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:06.811Z [INFO]  TestAgent_Metrics_ACLDeny: Endpoints down
> === CONT  TestAgent_Self_ACLDeny
> === RUN   TestAgent_Self_ACLDeny/no_token
> === RUN   TestAgent_Self_ACLDeny/agent_master_token
> === RUN   TestAgent_Self_ACLDeny/read-only_token
> --- PASS: TestAgent_Self_ACLDeny (0.28s)
>     writer.go:29: 2020-02-23T02:47:06.819Z [WARN]  TestAgent_Self_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:06.819Z [WARN]  TestAgent_Self_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:06.819Z [DEBUG] TestAgent_Self_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:06.819Z [DEBUG] TestAgent_Self_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:06.828Z [INFO]  TestAgent_Self_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:917a9c49-90c9-f44e-3059-b23d451a4fec Address:127.0.0.1:16804}]"
>     writer.go:29: 2020-02-23T02:47:06.828Z [INFO]  TestAgent_Self_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16804 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:06.829Z [INFO]  TestAgent_Self_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-917a9c49-90c9-f44e-3059-b23d451a4fec.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.829Z [INFO]  TestAgent_Self_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-917a9c49-90c9-f44e-3059-b23d451a4fec 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.829Z [INFO]  TestAgent_Self_ACLDeny.server: Handled event for server in area: event=member-join server=Node-917a9c49-90c9-f44e-3059-b23d451a4fec.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.829Z [INFO]  TestAgent_Self_ACLDeny.server: Adding LAN server: server="Node-917a9c49-90c9-f44e-3059-b23d451a4fec (Addr: tcp/127.0.0.1:16804) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.830Z [INFO]  TestAgent_Self_ACLDeny: Started DNS server: address=127.0.0.1:16799 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.830Z [INFO]  TestAgent_Self_ACLDeny: Started DNS server: address=127.0.0.1:16799 network=udp
>     writer.go:29: 2020-02-23T02:47:06.830Z [INFO]  TestAgent_Self_ACLDeny: Started HTTP server: address=127.0.0.1:16800 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.830Z [INFO]  TestAgent_Self_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:06.886Z [WARN]  TestAgent_Self_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:06.886Z [INFO]  TestAgent_Self_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16804 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:06.890Z [DEBUG] TestAgent_Self_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:06.890Z [DEBUG] TestAgent_Self_ACLDeny.server.raft: vote granted: from=917a9c49-90c9-f44e-3059-b23d451a4fec term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:06.890Z [INFO]  TestAgent_Self_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:06.890Z [INFO]  TestAgent_Self_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16804 [Leader]"
>     writer.go:29: 2020-02-23T02:47:06.890Z [INFO]  TestAgent_Self_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:06.890Z [INFO]  TestAgent_Self_ACLDeny.server: New leader elected: payload=Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     writer.go:29: 2020-02-23T02:47:06.892Z [INFO]  TestAgent_Self_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:06.893Z [INFO]  TestAgent_Self_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:06.893Z [WARN]  TestAgent_Self_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:06.896Z [INFO]  TestAgent_Self_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:06.899Z [INFO]  TestAgent_Self_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:06.899Z [INFO]  TestAgent_Self_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:06.899Z [INFO]  TestAgent_Self_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:06.899Z [INFO]  TestAgent_Self_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     writer.go:29: 2020-02-23T02:47:06.899Z [INFO]  TestAgent_Self_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-917a9c49-90c9-f44e-3059-b23d451a4fec.dc1
>     writer.go:29: 2020-02-23T02:47:06.900Z [INFO]  TestAgent_Self_ACLDeny.server: Handled event for server in area: event=member-update server=Node-917a9c49-90c9-f44e-3059-b23d451a4fec.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.904Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:06.910Z [INFO]  TestAgent_Self_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:06.910Z [INFO]  TestAgent_Self_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.910Z [DEBUG] TestAgent_Self_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     writer.go:29: 2020-02-23T02:47:06.910Z [INFO]  TestAgent_Self_ACLDeny.server: member joined, marking health alive: member=Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     writer.go:29: 2020-02-23T02:47:06.913Z [DEBUG] TestAgent_Self_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     writer.go:29: 2020-02-23T02:47:07.086Z [DEBUG] TestAgent_Self_ACLDeny.acl: dropping node from result due to ACLs: node=Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     writer.go:29: 2020-02-23T02:47:07.086Z [DEBUG] TestAgent_Self_ACLDeny.acl: dropping node from result due to ACLs: node=Node-917a9c49-90c9-f44e-3059-b23d451a4fec
>     --- PASS: TestAgent_Self_ACLDeny/no_token (0.00s)
>     --- PASS: TestAgent_Self_ACLDeny/agent_master_token (0.00s)
>     --- PASS: TestAgent_Self_ACLDeny/read-only_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:07.090Z [INFO]  TestAgent_Self_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:07.090Z [INFO]  TestAgent_Self_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:07.090Z [DEBUG] TestAgent_Self_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.090Z [DEBUG] TestAgent_Self_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.090Z [DEBUG] TestAgent_Self_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.090Z [WARN]  TestAgent_Self_ACLDeny.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.090Z [DEBUG] TestAgent_Self_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.090Z [ERROR] TestAgent_Self_ACLDeny.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:07.090Z [DEBUG] TestAgent_Self_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.090Z [DEBUG] TestAgent_Self_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.092Z [WARN]  TestAgent_Self_ACLDeny.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: shutdown complete
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16799 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16799 network=udp
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16800 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:07.094Z [INFO]  TestAgent_Self_ACLDeny: Endpoints down
> === CONT  TestAgent_Self
> --- PASS: TestAgent_Self (0.34s)
>     writer.go:29: 2020-02-23T02:47:07.102Z [WARN]  TestAgent_Self: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:07.102Z [DEBUG] TestAgent_Self.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:07.102Z [DEBUG] TestAgent_Self.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:07.115Z [INFO]  TestAgent_Self.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a001a0ba-8b63-87d6-500d-ab473b8ef47e Address:127.0.0.1:16792}]"
>     writer.go:29: 2020-02-23T02:47:07.116Z [INFO]  TestAgent_Self.server.serf.wan: serf: EventMemberJoin: Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.116Z [INFO]  TestAgent_Self.server.serf.lan: serf: EventMemberJoin: Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.116Z [INFO]  TestAgent_Self: Started DNS server: address=127.0.0.1:16787 network=udp
>     writer.go:29: 2020-02-23T02:47:07.116Z [INFO]  TestAgent_Self.server.raft: entering follower state: follower="Node at 127.0.0.1:16792 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:07.117Z [INFO]  TestAgent_Self.server: Adding LAN server: server="Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e (Addr: tcp/127.0.0.1:16792) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:07.117Z [INFO]  TestAgent_Self.server: Handled event for server in area: event=member-join server=Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.117Z [INFO]  TestAgent_Self: Started DNS server: address=127.0.0.1:16787 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.117Z [INFO]  TestAgent_Self: Started HTTP server: address=127.0.0.1:16788 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.117Z [INFO]  TestAgent_Self: started state syncer
>     writer.go:29: 2020-02-23T02:47:07.174Z [WARN]  TestAgent_Self.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:07.174Z [INFO]  TestAgent_Self.server.raft: entering candidate state: node="Node at 127.0.0.1:16792 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:07.203Z [DEBUG] TestAgent_Self.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:07.203Z [DEBUG] TestAgent_Self.server.raft: vote granted: from=a001a0ba-8b63-87d6-500d-ab473b8ef47e term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:07.203Z [INFO]  TestAgent_Self.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:07.203Z [INFO]  TestAgent_Self.server.raft: entering leader state: leader="Node at 127.0.0.1:16792 [Leader]"
>     writer.go:29: 2020-02-23T02:47:07.203Z [INFO]  TestAgent_Self.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:07.204Z [INFO]  TestAgent_Self.server: New leader elected: payload=Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e
>     writer.go:29: 2020-02-23T02:47:07.213Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:07.221Z [INFO]  TestAgent_Self.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:07.221Z [INFO]  TestAgent_Self.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.221Z [DEBUG] TestAgent_Self.server: Skipping self join check for node since the cluster is too small: node=Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e
>     writer.go:29: 2020-02-23T02:47:07.221Z [INFO]  TestAgent_Self.server: member joined, marking health alive: member=Node-a001a0ba-8b63-87d6-500d-ab473b8ef47e
>     writer.go:29: 2020-02-23T02:47:07.430Z [DEBUG] TestAgent_Self: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:07.433Z [INFO]  TestAgent_Self: Synced node info
>     writer.go:29: 2020-02-23T02:47:07.433Z [DEBUG] TestAgent_Self: Node info in sync
>     writer.go:29: 2020-02-23T02:47:07.433Z [INFO]  TestAgent_Self: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:07.433Z [INFO]  TestAgent_Self.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:07.433Z [DEBUG] TestAgent_Self.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.433Z [WARN]  TestAgent_Self.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.434Z [DEBUG] TestAgent_Self.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.435Z [WARN]  TestAgent_Self.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: consul server down
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: shutdown complete
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: Stopping server: protocol=DNS address=127.0.0.1:16787 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: Stopping server: protocol=DNS address=127.0.0.1:16787 network=udp
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: Stopping server: protocol=HTTP address=127.0.0.1:16788 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:07.437Z [INFO]  TestAgent_Self: Endpoints down
> === CONT  TestAgent_Checks_ACLFilter
> --- PASS: TestAgent_Reload (1.27s)
>     writer.go:29: 2020-02-23T02:47:06.250Z [WARN]  TestAgent_Reload: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:06.250Z [DEBUG] TestAgent_Reload.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:06.250Z [DEBUG] TestAgent_Reload.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:06.260Z [INFO]  TestAgent_Reload.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:08e8f1aa-9e69-55d6-7711-6645276a1f8d Address:127.0.0.1:16780}]"
>     writer.go:29: 2020-02-23T02:47:06.260Z [INFO]  TestAgent_Reload.server.raft: entering follower state: follower="Node at 127.0.0.1:16780 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:06.261Z [INFO]  TestAgent_Reload.server.serf.wan: serf: EventMemberJoin: Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.261Z [INFO]  TestAgent_Reload.server.serf.lan: serf: EventMemberJoin: Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.261Z [INFO]  TestAgent_Reload.server: Handled event for server in area: event=member-join server=Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:06.261Z [INFO]  TestAgent_Reload.server: Adding LAN server: server="Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d (Addr: tcp/127.0.0.1:16780) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.262Z [INFO]  TestAgent_Reload: Started DNS server: address=127.0.0.1:16775 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.262Z [INFO]  TestAgent_Reload: Started DNS server: address=127.0.0.1:16775 network=udp
>     writer.go:29: 2020-02-23T02:47:06.262Z [INFO]  TestAgent_Reload: Started HTTP server: address=127.0.0.1:16776 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.262Z [WARN]  TestAgent_Reload: The 'handler' field in watches has been deprecated and replaced with the 'args' field. See https://www.consul.io/docs/agent/watches.html
>     writer.go:29: 2020-02-23T02:47:06.262Z [INFO]  TestAgent_Reload: started state syncer
>     writer.go:29: 2020-02-23T02:47:06.308Z [WARN]  TestAgent_Reload.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:06.308Z [INFO]  TestAgent_Reload.server.raft: entering candidate state: node="Node at 127.0.0.1:16780 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:06.317Z [DEBUG] TestAgent_Reload.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:06.317Z [DEBUG] TestAgent_Reload.server.raft: vote granted: from=08e8f1aa-9e69-55d6-7711-6645276a1f8d term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:06.317Z [INFO]  TestAgent_Reload.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:06.317Z [INFO]  TestAgent_Reload.server.raft: entering leader state: leader="Node at 127.0.0.1:16780 [Leader]"
>     writer.go:29: 2020-02-23T02:47:06.317Z [INFO]  TestAgent_Reload.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:06.317Z [INFO]  TestAgent_Reload.server: New leader elected: payload=Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d
>     writer.go:29: 2020-02-23T02:47:06.325Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:06.334Z [INFO]  TestAgent_Reload.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:06.334Z [INFO]  TestAgent_Reload.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.334Z [DEBUG] TestAgent_Reload.server: Skipping self join check for node since the cluster is too small: node=Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d
>     writer.go:29: 2020-02-23T02:47:06.334Z [INFO]  TestAgent_Reload.server: member joined, marking health alive: member=Node-08e8f1aa-9e69-55d6-7711-6645276a1f8d
>     writer.go:29: 2020-02-23T02:47:06.472Z [DEBUG] TestAgent_Reload.http: Request finished: method=GET url=/v1/kv/test?dc=dc1 from=127.0.0.1:59222 latency=209.624857ms
>     writer.go:29: 2020-02-23T02:47:06.488Z [DEBUG] TestAgent_Reload: watch handler output: watch_handler=true output=
>     writer.go:29: 2020-02-23T02:47:06.502Z [WARN]  TestAgent_Reload: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:06.503Z [DEBUG] TestAgent_Reload: removed service: service=redis
>     writer.go:29: 2020-02-23T02:47:06.503Z [DEBUG] TestAgent_Reload.tlsutil: Update: version=2
>     writer.go:29: 2020-02-23T02:47:06.503Z [INFO]  TestAgent_Reload: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:06.503Z [INFO]  TestAgent_Reload.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:06.503Z [DEBUG] TestAgent_Reload.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.503Z [WARN]  TestAgent_Reload.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.503Z [ERROR] TestAgent_Reload.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:06.503Z [DEBUG] TestAgent_Reload.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:06.505Z [WARN]  TestAgent_Reload.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Reload.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Reload: consul server down
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Reload: shutdown complete
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Reload: Stopping server: protocol=DNS address=127.0.0.1:16775 network=tcp
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Reload: Stopping server: protocol=DNS address=127.0.0.1:16775 network=udp
>     writer.go:29: 2020-02-23T02:47:06.507Z [INFO]  TestAgent_Reload: Stopping server: protocol=HTTP address=127.0.0.1:16776 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.507Z [WARN]  TestAgent_Reload: Timeout stopping server: protocol=HTTP address=127.0.0.1:16776 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.507Z [INFO]  TestAgent_Reload: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:07.507Z [INFO]  TestAgent_Reload: Endpoints down
> === CONT  TestAgent_HealthServicesACLEnforcement
> === RUN   TestAgent_Checks_ACLFilter/no_token
> === RUN   TestAgent_Checks_ACLFilter/root_token
> --- PASS: TestAgent_Checks_ACLFilter (0.40s)
>     writer.go:29: 2020-02-23T02:47:07.445Z [WARN]  TestAgent_Checks_ACLFilter: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:07.445Z [WARN]  TestAgent_Checks_ACLFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:07.445Z [DEBUG] TestAgent_Checks_ACLFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:07.445Z [DEBUG] TestAgent_Checks_ACLFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:07.454Z [INFO]  TestAgent_Checks_ACLFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9c03aea2-e634-b6ad-d95e-9d1b74cb12b8 Address:127.0.0.1:16798}]"
>     writer.go:29: 2020-02-23T02:47:07.454Z [INFO]  TestAgent_Checks_ACLFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:16798 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:07.455Z [INFO]  TestAgent_Checks_ACLFilter.server.serf.wan: serf: EventMemberJoin: Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter.server.serf.lan: serf: EventMemberJoin: Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter.server: Handled event for server in area: event=member-join server=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter.server: Adding LAN server: server="Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8 (Addr: tcp/127.0.0.1:16798) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter: Started DNS server: address=127.0.0.1:16793 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter: Started DNS server: address=127.0.0.1:16793 network=udp
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter: Started HTTP server: address=127.0.0.1:16794 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.456Z [INFO]  TestAgent_Checks_ACLFilter: started state syncer
>     writer.go:29: 2020-02-23T02:47:07.499Z [WARN]  TestAgent_Checks_ACLFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:07.499Z [INFO]  TestAgent_Checks_ACLFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:16798 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:07.502Z [DEBUG] TestAgent_Checks_ACLFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:07.502Z [DEBUG] TestAgent_Checks_ACLFilter.server.raft: vote granted: from=9c03aea2-e634-b6ad-d95e-9d1b74cb12b8 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:07.502Z [INFO]  TestAgent_Checks_ACLFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:07.502Z [INFO]  TestAgent_Checks_ACLFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:16798 [Leader]"
>     writer.go:29: 2020-02-23T02:47:07.502Z [INFO]  TestAgent_Checks_ACLFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:07.503Z [INFO]  TestAgent_Checks_ACLFilter.server: New leader elected: payload=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.505Z [INFO]  TestAgent_Checks_ACLFilter.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:07.506Z [INFO]  TestAgent_Checks_ACLFilter.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:07.506Z [WARN]  TestAgent_Checks_ACLFilter.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:07.506Z [INFO]  TestAgent_Checks_ACLFilter.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:07.506Z [WARN]  TestAgent_Checks_ACLFilter.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:07.513Z [INFO]  TestAgent_Checks_ACLFilter.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:07.516Z [INFO]  TestAgent_Checks_ACLFilter.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:07.517Z [INFO]  TestAgent_Checks_ACLFilter.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.server.serf.lan: serf: EventMemberUpdate: Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.server.serf.wan: serf: EventMemberUpdate: Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8.dc1
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:07.518Z [DEBUG] TestAgent_Checks_ACLFilter.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.server.serf.lan: serf: EventMemberUpdate: Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.518Z [INFO]  TestAgent_Checks_ACLFilter.server: Handled event for server in area: event=member-update server=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.519Z [INFO]  TestAgent_Checks_ACLFilter.server.serf.wan: serf: EventMemberUpdate: Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8.dc1
>     writer.go:29: 2020-02-23T02:47:07.519Z [INFO]  TestAgent_Checks_ACLFilter.server: Handled event for server in area: event=member-update server=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.524Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:07.533Z [INFO]  TestAgent_Checks_ACLFilter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:07.533Z [INFO]  TestAgent_Checks_ACLFilter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.533Z [DEBUG] TestAgent_Checks_ACLFilter.server: Skipping self join check for node since the cluster is too small: node=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.533Z [INFO]  TestAgent_Checks_ACLFilter.server: member joined, marking health alive: member=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.535Z [DEBUG] TestAgent_Checks_ACLFilter.server: Skipping self join check for node since the cluster is too small: node=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.535Z [DEBUG] TestAgent_Checks_ACLFilter.server: Skipping self join check for node since the cluster is too small: node=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.659Z [DEBUG] TestAgent_Checks_ACLFilter: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:07.673Z [INFO]  TestAgent_Checks_ACLFilter: Synced node info
>     writer.go:29: 2020-02-23T02:47:07.673Z [DEBUG] TestAgent_Checks_ACLFilter: Node info in sync
>     writer.go:29: 2020-02-23T02:47:07.826Z [DEBUG] TestAgent_Checks_ACLFilter.acl: dropping node from result due to ACLs: node=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.acl: dropping node from result due to ACLs: node=Node-9c03aea2-e634-b6ad-d95e-9d1b74cb12b8
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter: dropping check from result due to ACLs: check=mysql
>     --- PASS: TestAgent_Checks_ACLFilter/no_token (0.00s)
>     --- PASS: TestAgent_Checks_ACLFilter/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:07.827Z [INFO]  TestAgent_Checks_ACLFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:07.827Z [INFO]  TestAgent_Checks_ACLFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.827Z [WARN]  TestAgent_Checks_ACLFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.827Z [DEBUG] TestAgent_Checks_ACLFilter.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.839Z [WARN]  TestAgent_Checks_ACLFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: consul server down
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: Stopping server: protocol=DNS address=127.0.0.1:16793 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: Stopping server: protocol=DNS address=127.0.0.1:16793 network=udp
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: Stopping server: protocol=HTTP address=127.0.0.1:16794 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:07.841Z [INFO]  TestAgent_Checks_ACLFilter: Endpoints down
> === CONT  TestAgent_HealthServiceByName
> === RUN   TestAgent_HealthServicesACLEnforcement/no-token-health-by-id
> === RUN   TestAgent_HealthServicesACLEnforcement/no-token-health-by-name
> === RUN   TestAgent_HealthServicesACLEnforcement/root-token-health-by-id
> === RUN   TestAgent_HealthServicesACLEnforcement/root-token-health-by-name
> --- PASS: TestAgent_HealthServicesACLEnforcement (0.45s)
>     writer.go:29: 2020-02-23T02:47:07.516Z [WARN]  TestAgent_HealthServicesACLEnforcement: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:07.516Z [DEBUG] TestAgent_HealthServicesACLEnforcement.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:07.517Z [DEBUG] TestAgent_HealthServicesACLEnforcement.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:07.527Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:33aa329d-d315-85ba-69d0-dc17afaa3f51 Address:127.0.0.1:16822}]"
>     writer.go:29: 2020-02-23T02:47:07.527Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.raft: entering follower state: follower="Node at 127.0.0.1:16822 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:07.528Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.serf.wan: serf: EventMemberJoin: Node-33aa329d-d315-85ba-69d0-dc17afaa3f51.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.528Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.serf.lan: serf: EventMemberJoin: Node-33aa329d-d315-85ba-69d0-dc17afaa3f51 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.529Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Handled event for server in area: event=member-join server=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.529Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Adding LAN server: server="Node-33aa329d-d315-85ba-69d0-dc17afaa3f51 (Addr: tcp/127.0.0.1:16822) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:07.529Z [INFO]  TestAgent_HealthServicesACLEnforcement: Started DNS server: address=127.0.0.1:16817 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.529Z [INFO]  TestAgent_HealthServicesACLEnforcement: Started DNS server: address=127.0.0.1:16817 network=udp
>     writer.go:29: 2020-02-23T02:47:07.529Z [INFO]  TestAgent_HealthServicesACLEnforcement: Started HTTP server: address=127.0.0.1:16818 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.530Z [INFO]  TestAgent_HealthServicesACLEnforcement: started state syncer
>     writer.go:29: 2020-02-23T02:47:07.569Z [WARN]  TestAgent_HealthServicesACLEnforcement.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:07.569Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.raft: entering candidate state: node="Node at 127.0.0.1:16822 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:07.576Z [DEBUG] TestAgent_HealthServicesACLEnforcement.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:07.576Z [DEBUG] TestAgent_HealthServicesACLEnforcement.server.raft: vote granted: from=33aa329d-d315-85ba-69d0-dc17afaa3f51 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:07.576Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:07.576Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.raft: entering leader state: leader="Node at 127.0.0.1:16822 [Leader]"
>     writer.go:29: 2020-02-23T02:47:07.576Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:07.576Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: New leader elected: payload=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.578Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:07.579Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:07.582Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:07.582Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:07.587Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:07.587Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.598Z [DEBUG] TestAgent_HealthServicesACLEnforcement.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.serf.lan: serf: EventMemberUpdate: Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.serf.wan: serf: EventMemberUpdate: Node-33aa329d-d315-85ba-69d0-dc17afaa3f51.dc1
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.serf.lan: serf: EventMemberUpdate: Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.598Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.serf.wan: serf: EventMemberUpdate: Node-33aa329d-d315-85ba-69d0-dc17afaa3f51.dc1
>     writer.go:29: 2020-02-23T02:47:07.599Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Handled event for server in area: event=member-update server=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.599Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: Handled event for server in area: event=member-update server=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.603Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:07.619Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:07.619Z [INFO]  TestAgent_HealthServicesACLEnforcement.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.619Z [DEBUG] TestAgent_HealthServicesACLEnforcement.server: Skipping self join check for node since the cluster is too small: node=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.619Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: member joined, marking health alive: member=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.623Z [DEBUG] TestAgent_HealthServicesACLEnforcement.server: Skipping self join check for node since the cluster is too small: node=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.623Z [DEBUG] TestAgent_HealthServicesACLEnforcement.server: Skipping self join check for node since the cluster is too small: node=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     writer.go:29: 2020-02-23T02:47:07.642Z [DEBUG] TestAgent_HealthServicesACLEnforcement: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:07.654Z [INFO]  TestAgent_HealthServicesACLEnforcement: Synced node info
>     writer.go:29: 2020-02-23T02:47:07.957Z [DEBUG] TestAgent_HealthServicesACLEnforcement.acl: dropping node from result due to ACLs: node=Node-33aa329d-d315-85ba-69d0-dc17afaa3f51
>     --- PASS: TestAgent_HealthServicesACLEnforcement/no-token-health-by-id (0.00s)
>     --- PASS: TestAgent_HealthServicesACLEnforcement/no-token-health-by-name (0.00s)
>     --- PASS: TestAgent_HealthServicesACLEnforcement/root-token-health-by-id (0.00s)
>     --- PASS: TestAgent_HealthServicesACLEnforcement/root-token-health-by-name (0.00s)
>     writer.go:29: 2020-02-23T02:47:07.958Z [INFO]  TestAgent_HealthServicesACLEnforcement: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:07.958Z [INFO]  TestAgent_HealthServicesACLEnforcement.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:07.958Z [DEBUG] TestAgent_HealthServicesACLEnforcement.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.958Z [DEBUG] TestAgent_HealthServicesACLEnforcement.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.958Z [DEBUG] TestAgent_HealthServicesACLEnforcement.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.958Z [WARN]  TestAgent_HealthServicesACLEnforcement.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.958Z [DEBUG] TestAgent_HealthServicesACLEnforcement.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:07.958Z [DEBUG] TestAgent_HealthServicesACLEnforcement.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:07.958Z [DEBUG] TestAgent_HealthServicesACLEnforcement.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.960Z [WARN]  TestAgent_HealthServicesACLEnforcement.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: consul server down
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: shutdown complete
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: Stopping server: protocol=DNS address=127.0.0.1:16817 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: Stopping server: protocol=DNS address=127.0.0.1:16817 network=udp
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: Stopping server: protocol=HTTP address=127.0.0.1:16818 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:07.961Z [INFO]  TestAgent_HealthServicesACLEnforcement: Endpoints down
> === CONT  TestAgent_HealthServiceByID
> === RUN   TestAgent_HealthServiceByName/passing_checks
> === RUN   TestAgent_HealthServiceByName/passing_checks/format=text
> === RUN   TestAgent_HealthServiceByName/passing_checks/format=json
> === RUN   TestAgent_HealthServiceByName/warning_checks
> === RUN   TestAgent_HealthServiceByName/warning_checks/format=text
> === RUN   TestAgent_HealthServiceByName/warning_checks/format=json
> === RUN   TestAgent_HealthServiceByName/critical_checks
> === RUN   TestAgent_HealthServiceByName/critical_checks/format=text
> === RUN   TestAgent_HealthServiceByName/critical_checks/format=json
> === RUN   TestAgent_HealthServiceByName/unknown_serviceName
> === RUN   TestAgent_HealthServiceByName/unknown_serviceName/format=text
> === RUN   TestAgent_HealthServiceByName/unknown_serviceName/format=json
> === RUN   TestAgent_HealthServiceByName/critical_check_on_node
> === RUN   TestAgent_HealthServiceByName/critical_check_on_node/format=text
> === RUN   TestAgent_HealthServiceByName/critical_check_on_node/format=json
> === RUN   TestAgent_HealthServiceByName/maintenance_check_on_node
> === RUN   TestAgent_HealthServiceByName/maintenance_check_on_node/format=text
> === RUN   TestAgent_HealthServiceByName/maintenance_check_on_node/format=json
> --- PASS: TestAgent_HealthServiceByName (0.25s)
>     writer.go:29: 2020-02-23T02:47:07.848Z [WARN]  TestAgent_HealthServiceByName: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:07.848Z [DEBUG] TestAgent_HealthServiceByName.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:07.849Z [DEBUG] TestAgent_HealthServiceByName.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:07.859Z [INFO]  TestAgent_HealthServiceByName.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:77cc1d8e-6bd4-b322-a302-3ae7c5f3687c Address:127.0.0.1:16816}]"
>     writer.go:29: 2020-02-23T02:47:07.860Z [INFO]  TestAgent_HealthServiceByName.server.serf.wan: serf: EventMemberJoin: Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.860Z [INFO]  TestAgent_HealthServiceByName.server.serf.lan: serf: EventMemberJoin: Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.860Z [INFO]  TestAgent_HealthServiceByName: Started DNS server: address=127.0.0.1:16811 network=udp
>     writer.go:29: 2020-02-23T02:47:07.860Z [INFO]  TestAgent_HealthServiceByName.server.raft: entering follower state: follower="Node at 127.0.0.1:16816 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:07.860Z [INFO]  TestAgent_HealthServiceByName.server: Adding LAN server: server="Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c (Addr: tcp/127.0.0.1:16816) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:07.860Z [INFO]  TestAgent_HealthServiceByName.server: Handled event for server in area: event=member-join server=Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.861Z [INFO]  TestAgent_HealthServiceByName: Started DNS server: address=127.0.0.1:16811 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.861Z [INFO]  TestAgent_HealthServiceByName: Started HTTP server: address=127.0.0.1:16812 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.861Z [INFO]  TestAgent_HealthServiceByName: started state syncer
>     writer.go:29: 2020-02-23T02:47:07.896Z [WARN]  TestAgent_HealthServiceByName.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:07.896Z [INFO]  TestAgent_HealthServiceByName.server.raft: entering candidate state: node="Node at 127.0.0.1:16816 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:07.899Z [DEBUG] TestAgent_HealthServiceByName.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:07.899Z [DEBUG] TestAgent_HealthServiceByName.server.raft: vote granted: from=77cc1d8e-6bd4-b322-a302-3ae7c5f3687c term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:07.899Z [INFO]  TestAgent_HealthServiceByName.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:07.899Z [INFO]  TestAgent_HealthServiceByName.server.raft: entering leader state: leader="Node at 127.0.0.1:16816 [Leader]"
>     writer.go:29: 2020-02-23T02:47:07.900Z [INFO]  TestAgent_HealthServiceByName.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:07.900Z [INFO]  TestAgent_HealthServiceByName.server: New leader elected: payload=Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c
>     writer.go:29: 2020-02-23T02:47:07.907Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:07.915Z [INFO]  TestAgent_HealthServiceByName.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:07.915Z [INFO]  TestAgent_HealthServiceByName.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:07.915Z [DEBUG] TestAgent_HealthServiceByName.server: Skipping self join check for node since the cluster is too small: node=Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c
>     writer.go:29: 2020-02-23T02:47:07.915Z [INFO]  TestAgent_HealthServiceByName.server: member joined, marking health alive: member=Node-77cc1d8e-6bd4-b322-a302-3ae7c5f3687c
>     writer.go:29: 2020-02-23T02:47:08.001Z [DEBUG] TestAgent_HealthServiceByName: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:08.003Z [INFO]  TestAgent_HealthServiceByName: Synced node info
>     writer.go:29: 2020-02-23T02:47:08.003Z [DEBUG] TestAgent_HealthServiceByName: Node info in sync
>     --- PASS: TestAgent_HealthServiceByName/passing_checks (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/passing_checks/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/passing_checks/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByName/warning_checks (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/warning_checks/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/warning_checks/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByName/critical_checks (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/critical_checks/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/critical_checks/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByName/unknown_serviceName (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/unknown_serviceName/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/unknown_serviceName/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByName/critical_check_on_node (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/critical_check_on_node/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/critical_check_on_node/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByName/maintenance_check_on_node (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/maintenance_check_on_node/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByName/maintenance_check_on_node/format=json (0.00s)
>     writer.go:29: 2020-02-23T02:47:08.089Z [INFO]  TestAgent_HealthServiceByName: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:08.089Z [INFO]  TestAgent_HealthServiceByName.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:08.089Z [DEBUG] TestAgent_HealthServiceByName.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.089Z [WARN]  TestAgent_HealthServiceByName.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.089Z [DEBUG] TestAgent_HealthServiceByName.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.091Z [WARN]  TestAgent_HealthServiceByName.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: consul server down
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: shutdown complete
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: Stopping server: protocol=DNS address=127.0.0.1:16811 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: Stopping server: protocol=DNS address=127.0.0.1:16811 network=udp
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: Stopping server: protocol=HTTP address=127.0.0.1:16812 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:08.093Z [INFO]  TestAgent_HealthServiceByName: Endpoints down
> === CONT  TestAgent_ChecksWithFilter
> === RUN   TestAgent_HealthServiceByID/passing_checks
> === RUN   TestAgent_HealthServiceByID/passing_checks/format=text
> === RUN   TestAgent_HealthServiceByID/passing_checks/format=json
> === RUN   TestAgent_HealthServiceByID/warning_checks
> === RUN   TestAgent_HealthServiceByID/warning_checks/format=text
> === RUN   TestAgent_HealthServiceByID/warning_checks/format=json
> === RUN   TestAgent_HealthServiceByID/critical_checks
> === RUN   TestAgent_HealthServiceByID/critical_checks/format=text
> === RUN   TestAgent_HealthServiceByID/critical_checks/format=json
> === RUN   TestAgent_HealthServiceByID/unknown_serviceid
> === RUN   TestAgent_HealthServiceByID/unknown_serviceid/format=text
> === RUN   TestAgent_HealthServiceByID/unknown_serviceid/format=json
> === RUN   TestAgent_HealthServiceByID/critical_check_on_node
> === RUN   TestAgent_HealthServiceByID/critical_check_on_node/format=text
> === RUN   TestAgent_HealthServiceByID/critical_check_on_node/format=json
> === RUN   TestAgent_HealthServiceByID/maintenance_check_on_node
> === RUN   TestAgent_HealthServiceByID/maintenance_check_on_node/format=text
> === RUN   TestAgent_HealthServiceByID/maintenance_check_on_node/format=json
> --- PASS: TestAgent_HealthServiceByID (0.26s)
>     writer.go:29: 2020-02-23T02:47:07.970Z [WARN]  TestAgent_HealthServiceByID: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:07.970Z [DEBUG] TestAgent_HealthServiceByID.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:07.971Z [DEBUG] TestAgent_HealthServiceByID.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:07.984Z [INFO]  TestAgent_HealthServiceByID.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4e8ad97b-fcf6-03af-e18b-f24e273a4933 Address:127.0.0.1:16810}]"
>     writer.go:29: 2020-02-23T02:47:07.984Z [INFO]  TestAgent_HealthServiceByID.server.serf.wan: serf: EventMemberJoin: Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.984Z [INFO]  TestAgent_HealthServiceByID.server.serf.lan: serf: EventMemberJoin: Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID: Started DNS server: address=127.0.0.1:16805 network=udp
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID.server.raft: entering follower state: follower="Node at 127.0.0.1:16810 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID.server: Adding LAN server: server="Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933 (Addr: tcp/127.0.0.1:16810) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID.server: Handled event for server in area: event=member-join server=Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID: Started DNS server: address=127.0.0.1:16805 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID: Started HTTP server: address=127.0.0.1:16806 network=tcp
>     writer.go:29: 2020-02-23T02:47:07.985Z [INFO]  TestAgent_HealthServiceByID: started state syncer
>     writer.go:29: 2020-02-23T02:47:08.027Z [WARN]  TestAgent_HealthServiceByID.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:08.027Z [INFO]  TestAgent_HealthServiceByID.server.raft: entering candidate state: node="Node at 127.0.0.1:16810 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:08.030Z [DEBUG] TestAgent_HealthServiceByID.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:08.030Z [DEBUG] TestAgent_HealthServiceByID.server.raft: vote granted: from=4e8ad97b-fcf6-03af-e18b-f24e273a4933 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:08.030Z [INFO]  TestAgent_HealthServiceByID.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:08.030Z [INFO]  TestAgent_HealthServiceByID.server.raft: entering leader state: leader="Node at 127.0.0.1:16810 [Leader]"
>     writer.go:29: 2020-02-23T02:47:08.030Z [INFO]  TestAgent_HealthServiceByID.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:08.030Z [INFO]  TestAgent_HealthServiceByID.server: New leader elected: payload=Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933
>     writer.go:29: 2020-02-23T02:47:08.046Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:08.054Z [INFO]  TestAgent_HealthServiceByID.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:08.054Z [INFO]  TestAgent_HealthServiceByID.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.054Z [DEBUG] TestAgent_HealthServiceByID.server: Skipping self join check for node since the cluster is too small: node=Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933
>     writer.go:29: 2020-02-23T02:47:08.054Z [INFO]  TestAgent_HealthServiceByID.server: member joined, marking health alive: member=Node-4e8ad97b-fcf6-03af-e18b-f24e273a4933
>     writer.go:29: 2020-02-23T02:47:08.121Z [DEBUG] TestAgent_HealthServiceByID: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:08.124Z [INFO]  TestAgent_HealthServiceByID: Synced node info
>     --- PASS: TestAgent_HealthServiceByID/passing_checks (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/passing_checks/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/passing_checks/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByID/warning_checks (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/warning_checks/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/warning_checks/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByID/critical_checks (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/critical_checks/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/critical_checks/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByID/unknown_serviceid (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/unknown_serviceid/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/unknown_serviceid/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByID/critical_check_on_node (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/critical_check_on_node/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/critical_check_on_node/format=json (0.00s)
>     --- PASS: TestAgent_HealthServiceByID/maintenance_check_on_node (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/maintenance_check_on_node/format=text (0.00s)
>         --- PASS: TestAgent_HealthServiceByID/maintenance_check_on_node/format=json (0.00s)
>     writer.go:29: 2020-02-23T02:47:08.219Z [INFO]  TestAgent_HealthServiceByID: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:08.219Z [INFO]  TestAgent_HealthServiceByID.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:08.219Z [DEBUG] TestAgent_HealthServiceByID.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.219Z [WARN]  TestAgent_HealthServiceByID.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.219Z [DEBUG] TestAgent_HealthServiceByID.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.221Z [WARN]  TestAgent_HealthServiceByID.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.222Z [INFO]  TestAgent_HealthServiceByID.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: consul server down
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: shutdown complete
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: Stopping server: protocol=DNS address=127.0.0.1:16805 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: Stopping server: protocol=DNS address=127.0.0.1:16805 network=udp
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: Stopping server: protocol=HTTP address=127.0.0.1:16806 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:08.223Z [INFO]  TestAgent_HealthServiceByID: Endpoints down
> === CONT  TestAgent_Checks
> --- PASS: TestAgent_ChecksWithFilter (0.44s)
>     writer.go:29: 2020-02-23T02:47:08.101Z [WARN]  TestAgent_ChecksWithFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:08.101Z [DEBUG] TestAgent_ChecksWithFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:08.101Z [DEBUG] TestAgent_ChecksWithFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:08.119Z [INFO]  TestAgent_ChecksWithFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:1c7b4b30-dab2-c929-7eae-3616edece6de Address:127.0.0.1:16828}]"
>     writer.go:29: 2020-02-23T02:47:08.119Z [INFO]  TestAgent_ChecksWithFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:16828 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:08.120Z [INFO]  TestAgent_ChecksWithFilter.server.serf.wan: serf: EventMemberJoin: Node-1c7b4b30-dab2-c929-7eae-3616edece6de.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:08.120Z [INFO]  TestAgent_ChecksWithFilter.server.serf.lan: serf: EventMemberJoin: Node-1c7b4b30-dab2-c929-7eae-3616edece6de 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:08.120Z [INFO]  TestAgent_ChecksWithFilter.server: Adding LAN server: server="Node-1c7b4b30-dab2-c929-7eae-3616edece6de (Addr: tcp/127.0.0.1:16828) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:08.120Z [INFO]  TestAgent_ChecksWithFilter: Started DNS server: address=127.0.0.1:16823 network=udp
>     writer.go:29: 2020-02-23T02:47:08.120Z [INFO]  TestAgent_ChecksWithFilter.server: Handled event for server in area: event=member-join server=Node-1c7b4b30-dab2-c929-7eae-3616edece6de.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:08.120Z [INFO]  TestAgent_ChecksWithFilter: Started DNS server: address=127.0.0.1:16823 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.121Z [INFO]  TestAgent_ChecksWithFilter: Started HTTP server: address=127.0.0.1:16824 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.121Z [INFO]  TestAgent_ChecksWithFilter: started state syncer
>     writer.go:29: 2020-02-23T02:47:08.173Z [WARN]  TestAgent_ChecksWithFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:08.173Z [INFO]  TestAgent_ChecksWithFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:16828 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:08.194Z [DEBUG] TestAgent_ChecksWithFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:08.194Z [DEBUG] TestAgent_ChecksWithFilter.server.raft: vote granted: from=1c7b4b30-dab2-c929-7eae-3616edece6de term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:08.194Z [INFO]  TestAgent_ChecksWithFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:08.194Z [INFO]  TestAgent_ChecksWithFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:16828 [Leader]"
>     writer.go:29: 2020-02-23T02:47:08.194Z [INFO]  TestAgent_ChecksWithFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:08.194Z [INFO]  TestAgent_ChecksWithFilter.server: New leader elected: payload=Node-1c7b4b30-dab2-c929-7eae-3616edece6de
>     writer.go:29: 2020-02-23T02:47:08.201Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:08.208Z [INFO]  TestAgent_ChecksWithFilter.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:08.209Z [INFO]  TestAgent_ChecksWithFilter.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.209Z [DEBUG] TestAgent_ChecksWithFilter.server: Skipping self join check for node since the cluster is too small: node=Node-1c7b4b30-dab2-c929-7eae-3616edece6de
>     writer.go:29: 2020-02-23T02:47:08.209Z [INFO]  TestAgent_ChecksWithFilter.server: member joined, marking health alive: member=Node-1c7b4b30-dab2-c929-7eae-3616edece6de
>     writer.go:29: 2020-02-23T02:47:08.331Z [INFO]  TestAgent_ChecksWithFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:08.331Z [INFO]  TestAgent_ChecksWithFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:08.331Z [DEBUG] TestAgent_ChecksWithFilter.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.331Z [WARN]  TestAgent_ChecksWithFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.331Z [ERROR] TestAgent_ChecksWithFilter.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:08.331Z [DEBUG] TestAgent_ChecksWithFilter.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.369Z [WARN]  TestAgent_ChecksWithFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: consul server down
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: Stopping server: protocol=DNS address=127.0.0.1:16823 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: Stopping server: protocol=DNS address=127.0.0.1:16823 network=udp
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: Stopping server: protocol=HTTP address=127.0.0.1:16824 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:08.535Z [INFO]  TestAgent_ChecksWithFilter: Endpoints down
> === CONT  TestAgent_Services_ACLFilter
> --- PASS: TestAgent_Checks (0.61s)
>     writer.go:29: 2020-02-23T02:47:08.230Z [WARN]  TestAgent_Checks: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:08.230Z [DEBUG] TestAgent_Checks.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:08.230Z [DEBUG] TestAgent_Checks.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:08.256Z [INFO]  TestAgent_Checks.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9d58efcf-da9c-54ee-e2af-2fccb5a1badc Address:127.0.0.1:16834}]"
>     writer.go:29: 2020-02-23T02:47:08.256Z [INFO]  TestAgent_Checks.server.raft: entering follower state: follower="Node at 127.0.0.1:16834 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:08.256Z [INFO]  TestAgent_Checks.server.serf.wan: serf: EventMemberJoin: Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:08.256Z [INFO]  TestAgent_Checks.server.serf.lan: serf: EventMemberJoin: Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:08.257Z [INFO]  TestAgent_Checks.server: Adding LAN server: server="Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc (Addr: tcp/127.0.0.1:16834) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:08.257Z [INFO]  TestAgent_Checks: Started DNS server: address=127.0.0.1:16829 network=udp
>     writer.go:29: 2020-02-23T02:47:08.257Z [INFO]  TestAgent_Checks.server: Handled event for server in area: event=member-join server=Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:08.257Z [INFO]  TestAgent_Checks: Started DNS server: address=127.0.0.1:16829 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.257Z [INFO]  TestAgent_Checks: Started HTTP server: address=127.0.0.1:16830 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.257Z [INFO]  TestAgent_Checks: started state syncer
>     writer.go:29: 2020-02-23T02:47:08.313Z [WARN]  TestAgent_Checks.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:08.314Z [INFO]  TestAgent_Checks.server.raft: entering candidate state: node="Node at 127.0.0.1:16834 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:08.531Z [DEBUG] TestAgent_Checks.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:08.531Z [DEBUG] TestAgent_Checks.server.raft: vote granted: from=9d58efcf-da9c-54ee-e2af-2fccb5a1badc term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:08.531Z [INFO]  TestAgent_Checks.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:08.531Z [INFO]  TestAgent_Checks.server.raft: entering leader state: leader="Node at 127.0.0.1:16834 [Leader]"
>     writer.go:29: 2020-02-23T02:47:08.531Z [INFO]  TestAgent_Checks.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:08.532Z [INFO]  TestAgent_Checks.server: New leader elected: payload=Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc
>     writer.go:29: 2020-02-23T02:47:08.664Z [INFO]  TestAgent_Checks: Synced node info
>     writer.go:29: 2020-02-23T02:47:08.664Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:08.708Z [INFO]  TestAgent_Checks.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:08.708Z [INFO]  TestAgent_Checks.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.708Z [DEBUG] TestAgent_Checks.server: Skipping self join check for node since the cluster is too small: node=Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc
>     writer.go:29: 2020-02-23T02:47:08.708Z [INFO]  TestAgent_Checks.server: member joined, marking health alive: member=Node-9d58efcf-da9c-54ee-e2af-2fccb5a1badc
>     writer.go:29: 2020-02-23T02:47:08.788Z [INFO]  TestAgent_Checks: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:08.788Z [INFO]  TestAgent_Checks.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:08.788Z [DEBUG] TestAgent_Checks.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.788Z [WARN]  TestAgent_Checks.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.788Z [DEBUG] TestAgent_Checks.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:08.808Z [WARN]  TestAgent_Checks.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: consul server down
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: shutdown complete
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: Stopping server: protocol=DNS address=127.0.0.1:16829 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: Stopping server: protocol=DNS address=127.0.0.1:16829 network=udp
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: Stopping server: protocol=HTTP address=127.0.0.1:16830 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:08.828Z [INFO]  TestAgent_Checks: Endpoints down
> === CONT  TestAgent_Services_MeshGateway
> === RUN   TestAgent_Services_ACLFilter/no_token
> === RUN   TestAgent_Services_ACLFilter/root_token
> --- PASS: TestAgent_Services_ACLFilter (1.79s)
>     writer.go:29: 2020-02-23T02:47:08.542Z [WARN]  TestAgent_Services_ACLFilter: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:08.542Z [WARN]  TestAgent_Services_ACLFilter: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:08.542Z [DEBUG] TestAgent_Services_ACLFilter.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:08.543Z [DEBUG] TestAgent_Services_ACLFilter.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:08.695Z [INFO]  TestAgent_Services_ACLFilter.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:35e1991d-75b9-771d-0dcc-9a67671675b9 Address:127.0.0.1:16840}]"
>     writer.go:29: 2020-02-23T02:47:08.695Z [INFO]  TestAgent_Services_ACLFilter.server.raft: entering follower state: follower="Node at 127.0.0.1:16840 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:08.695Z [INFO]  TestAgent_Services_ACLFilter.server.serf.wan: serf: EventMemberJoin: Node-35e1991d-75b9-771d-0dcc-9a67671675b9.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter.server.serf.lan: serf: EventMemberJoin: Node-35e1991d-75b9-771d-0dcc-9a67671675b9 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter.server: Adding LAN server: server="Node-35e1991d-75b9-771d-0dcc-9a67671675b9 (Addr: tcp/127.0.0.1:16840) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter: Started DNS server: address=127.0.0.1:16835 network=udp
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter.server: Handled event for server in area: event=member-join server=Node-35e1991d-75b9-771d-0dcc-9a67671675b9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter: Started DNS server: address=127.0.0.1:16835 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter: Started HTTP server: address=127.0.0.1:16836 network=tcp
>     writer.go:29: 2020-02-23T02:47:08.696Z [INFO]  TestAgent_Services_ACLFilter: started state syncer
>     writer.go:29: 2020-02-23T02:47:08.751Z [WARN]  TestAgent_Services_ACLFilter.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:08.751Z [INFO]  TestAgent_Services_ACLFilter.server.raft: entering candidate state: node="Node at 127.0.0.1:16840 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:08.754Z [DEBUG] TestAgent_Services_ACLFilter.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:08.754Z [DEBUG] TestAgent_Services_ACLFilter.server.raft: vote granted: from=35e1991d-75b9-771d-0dcc-9a67671675b9 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:08.754Z [INFO]  TestAgent_Services_ACLFilter.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:08.754Z [INFO]  TestAgent_Services_ACLFilter.server.raft: entering leader state: leader="Node at 127.0.0.1:16840 [Leader]"
>     writer.go:29: 2020-02-23T02:47:08.754Z [INFO]  TestAgent_Services_ACLFilter.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:08.754Z [INFO]  TestAgent_Services_ACLFilter.server: New leader elected: payload=Node-35e1991d-75b9-771d-0dcc-9a67671675b9
>     writer.go:29: 2020-02-23T02:47:08.761Z [INFO]  TestAgent_Services_ACLFilter.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:08.768Z [INFO]  TestAgent_Services_ACLFilter.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:08.768Z [WARN]  TestAgent_Services_ACLFilter.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:08.785Z [INFO]  TestAgent_Services_ACLFilter.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:08.834Z [INFO]  TestAgent_Services_ACLFilter.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:08.834Z [INFO]  TestAgent_Services_ACLFilter.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:08.834Z [INFO]  TestAgent_Services_ACLFilter.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:08.834Z [INFO]  TestAgent_Services_ACLFilter.server.serf.lan: serf: EventMemberUpdate: Node-35e1991d-75b9-771d-0dcc-9a67671675b9
>     writer.go:29: 2020-02-23T02:47:08.834Z [INFO]  TestAgent_Services_ACLFilter.server.serf.wan: serf: EventMemberUpdate: Node-35e1991d-75b9-771d-0dcc-9a67671675b9.dc1
>     writer.go:29: 2020-02-23T02:47:08.834Z [INFO]  TestAgent_Services_ACLFilter.server: Handled event for server in area: event=member-update server=Node-35e1991d-75b9-771d-0dcc-9a67671675b9.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:08.871Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:09.773Z [INFO]  TestAgent_Services_ACLFilter: Synced node info
>     writer.go:29: 2020-02-23T02:47:09.794Z [DEBUG] TestAgent_Services_ACLFilter.acl: dropping node from result due to ACLs: node=Node-35e1991d-75b9-771d-0dcc-9a67671675b9
>     writer.go:29: 2020-02-23T02:47:09.794Z [DEBUG] TestAgent_Services_ACLFilter: dropping service from result due to ACLs: service=mysql
>     --- PASS: TestAgent_Services_ACLFilter/no_token (0.00s)
>     --- PASS: TestAgent_Services_ACLFilter/root_token (0.00s)
>     writer.go:29: 2020-02-23T02:47:09.794Z [INFO]  TestAgent_Services_ACLFilter: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:09.794Z [INFO]  TestAgent_Services_ACLFilter.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:09.794Z [DEBUG] TestAgent_Services_ACLFilter.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:09.794Z [DEBUG] TestAgent_Services_ACLFilter.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:09.794Z [WARN]  TestAgent_Services_ACLFilter.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:09.794Z [DEBUG] TestAgent_Services_ACLFilter.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:09.794Z [DEBUG] TestAgent_Services_ACLFilter.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:10.095Z [WARN]  TestAgent_Services_ACLFilter.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:10.308Z [INFO]  TestAgent_Services_ACLFilter.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: consul server down
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: shutdown complete
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: Stopping server: protocol=DNS address=127.0.0.1:16835 network=tcp
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: Stopping server: protocol=DNS address=127.0.0.1:16835 network=udp
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: Stopping server: protocol=HTTP address=127.0.0.1:16836 network=tcp
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:10.327Z [INFO]  TestAgent_Services_ACLFilter: Endpoints down
> === CONT  TestAgent_Services_Sidecar
> --- PASS: TestAgent_Services_MeshGateway (2.23s)
>     writer.go:29: 2020-02-23T02:47:08.835Z [WARN]  TestAgent_Services_MeshGateway: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:08.836Z [DEBUG] TestAgent_Services_MeshGateway.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:08.836Z [DEBUG] TestAgent_Services_MeshGateway.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:10.070Z [INFO]  TestAgent_Services_MeshGateway.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2badc0e3-855b-177f-6925-47b6c4f0aeff Address:127.0.0.1:16852}]"
>     writer.go:29: 2020-02-23T02:47:10.070Z [INFO]  TestAgent_Services_MeshGateway.server.raft: entering follower state: follower="Node at 127.0.0.1:16852 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:10.071Z [INFO]  TestAgent_Services_MeshGateway.server.serf.wan: serf: EventMemberJoin: Node-2badc0e3-855b-177f-6925-47b6c4f0aeff.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:10.071Z [INFO]  TestAgent_Services_MeshGateway.server.serf.lan: serf: EventMemberJoin: Node-2badc0e3-855b-177f-6925-47b6c4f0aeff 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:10.071Z [INFO]  TestAgent_Services_MeshGateway: Started DNS server: address=127.0.0.1:16847 network=udp
>     writer.go:29: 2020-02-23T02:47:10.071Z [INFO]  TestAgent_Services_MeshGateway.server: Adding LAN server: server="Node-2badc0e3-855b-177f-6925-47b6c4f0aeff (Addr: tcp/127.0.0.1:16852) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:10.071Z [INFO]  TestAgent_Services_MeshGateway.server: Handled event for server in area: event=member-join server=Node-2badc0e3-855b-177f-6925-47b6c4f0aeff.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:10.071Z [INFO]  TestAgent_Services_MeshGateway: Started DNS server: address=127.0.0.1:16847 network=tcp
>     writer.go:29: 2020-02-23T02:47:10.072Z [INFO]  TestAgent_Services_MeshGateway: Started HTTP server: address=127.0.0.1:16848 network=tcp
>     writer.go:29: 2020-02-23T02:47:10.072Z [INFO]  TestAgent_Services_MeshGateway: started state syncer
>     writer.go:29: 2020-02-23T02:47:10.128Z [WARN]  TestAgent_Services_MeshGateway.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:10.128Z [INFO]  TestAgent_Services_MeshGateway.server.raft: entering candidate state: node="Node at 127.0.0.1:16852 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:10.428Z [DEBUG] TestAgent_Services_MeshGateway.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:10.428Z [DEBUG] TestAgent_Services_MeshGateway.server.raft: vote granted: from=2badc0e3-855b-177f-6925-47b6c4f0aeff term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:10.428Z [INFO]  TestAgent_Services_MeshGateway.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:10.428Z [INFO]  TestAgent_Services_MeshGateway.server.raft: entering leader state: leader="Node at 127.0.0.1:16852 [Leader]"
>     writer.go:29: 2020-02-23T02:47:10.428Z [INFO]  TestAgent_Services_MeshGateway.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:10.428Z [INFO]  TestAgent_Services_MeshGateway.server: New leader elected: payload=Node-2badc0e3-855b-177f-6925-47b6c4f0aeff
>     writer.go:29: 2020-02-23T02:47:10.458Z [INFO]  TestAgent_Services_MeshGateway: Synced node info
>     writer.go:29: 2020-02-23T02:47:10.464Z [INFO]  TestAgent_Services_MeshGateway: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:10.464Z [INFO]  TestAgent_Services_MeshGateway.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:10.464Z [WARN]  TestAgent_Services_MeshGateway.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:10.485Z [WARN]  TestAgent_Services_MeshGateway.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:10.641Z [DEBUG] TestAgent_Services_MeshGateway: Node info in sync
>     writer.go:29: 2020-02-23T02:47:10.761Z [WARN]  TestAgent_Services_MeshGateway: Syncing service failed.: service=mg-dc1-01 error="raft is already shutdown"
>     writer.go:29: 2020-02-23T02:47:10.761Z [ERROR] TestAgent_Services_MeshGateway.anti_entropy: failed to sync remote state: error="raft is already shutdown"
>     writer.go:29: 2020-02-23T02:47:10.761Z [INFO]  TestAgent_Services_MeshGateway.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: consul server down
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: shutdown complete
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: Stopping server: protocol=DNS address=127.0.0.1:16847 network=tcp
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: Stopping server: protocol=DNS address=127.0.0.1:16847 network=udp
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: Stopping server: protocol=HTTP address=127.0.0.1:16848 network=tcp
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:11.055Z [INFO]  TestAgent_Services_MeshGateway: Endpoints down
> === CONT  TestAgent_Services_ExternalConnectProxy
> --- PASS: TestAgent_Services_Sidecar (2.39s)
>     writer.go:29: 2020-02-23T02:47:10.341Z [WARN]  TestAgent_Services_Sidecar: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:10.342Z [DEBUG] TestAgent_Services_Sidecar.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:10.345Z [DEBUG] TestAgent_Services_Sidecar.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:10.481Z [INFO]  TestAgent_Services_Sidecar.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:16f58399-c085-51b2-4c09-162eed8ac4a3 Address:127.0.0.1:16858}]"
>     writer.go:29: 2020-02-23T02:47:10.481Z [INFO]  TestAgent_Services_Sidecar.server.raft: entering follower state: follower="Node at 127.0.0.1:16858 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:10.482Z [INFO]  TestAgent_Services_Sidecar.server.serf.wan: serf: EventMemberJoin: Node-16f58399-c085-51b2-4c09-162eed8ac4a3.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:10.482Z [INFO]  TestAgent_Services_Sidecar.server.serf.lan: serf: EventMemberJoin: Node-16f58399-c085-51b2-4c09-162eed8ac4a3 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:10.482Z [INFO]  TestAgent_Services_Sidecar.server: Adding LAN server: server="Node-16f58399-c085-51b2-4c09-162eed8ac4a3 (Addr: tcp/127.0.0.1:16858) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:10.482Z [INFO]  TestAgent_Services_Sidecar: Started DNS server: address=127.0.0.1:16853 network=udp
>     writer.go:29: 2020-02-23T02:47:10.482Z [INFO]  TestAgent_Services_Sidecar.server: Handled event for server in area: event=member-join server=Node-16f58399-c085-51b2-4c09-162eed8ac4a3.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:10.482Z [INFO]  TestAgent_Services_Sidecar: Started DNS server: address=127.0.0.1:16853 network=tcp
>     writer.go:29: 2020-02-23T02:47:10.483Z [INFO]  TestAgent_Services_Sidecar: Started HTTP server: address=127.0.0.1:16854 network=tcp
>     writer.go:29: 2020-02-23T02:47:10.483Z [INFO]  TestAgent_Services_Sidecar: started state syncer
>     writer.go:29: 2020-02-23T02:47:10.535Z [WARN]  TestAgent_Services_Sidecar.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:10.535Z [INFO]  TestAgent_Services_Sidecar.server.raft: entering candidate state: node="Node at 127.0.0.1:16858 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:11.378Z [DEBUG] TestAgent_Services_Sidecar.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:11.378Z [DEBUG] TestAgent_Services_Sidecar.server.raft: vote granted: from=16f58399-c085-51b2-4c09-162eed8ac4a3 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:11.378Z [INFO]  TestAgent_Services_Sidecar.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:11.378Z [INFO]  TestAgent_Services_Sidecar.server.raft: entering leader state: leader="Node at 127.0.0.1:16858 [Leader]"
>     writer.go:29: 2020-02-23T02:47:11.378Z [INFO]  TestAgent_Services_Sidecar.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:11.378Z [INFO]  TestAgent_Services_Sidecar.server: New leader elected: payload=Node-16f58399-c085-51b2-4c09-162eed8ac4a3
>     writer.go:29: 2020-02-23T02:47:11.501Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:12.095Z [INFO]  TestAgent_Services_Sidecar: Synced node info
>     writer.go:29: 2020-02-23T02:47:12.096Z [INFO]  TestAgent_Services_Sidecar: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:12.096Z [INFO]  TestAgent_Services_Sidecar.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:12.096Z [WARN]  TestAgent_Services_Sidecar.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:12.428Z [WARN]  TestAgent_Services_Sidecar.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:12.661Z [INFO]  TestAgent_Services_Sidecar.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:12.719Z [ERROR] TestAgent_Services_Sidecar.server.connect: Raft apply failed: error="leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:47:12.719Z [ERROR] TestAgent_Services_Sidecar.server: failed to establish leadership: error="leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:47:12.719Z [ERROR] TestAgent_Services_Sidecar.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="raft is already shutdown"
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: consul server down
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: shutdown complete
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: Stopping server: protocol=DNS address=127.0.0.1:16853 network=tcp
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: Stopping server: protocol=DNS address=127.0.0.1:16853 network=udp
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: Stopping server: protocol=HTTP address=127.0.0.1:16854 network=tcp
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:12.719Z [INFO]  TestAgent_Services_Sidecar: Endpoints down
> === CONT  TestAgent_ServicesFiltered
> --- PASS: TestAgent_Services_ExternalConnectProxy (3.21s)
>     writer.go:29: 2020-02-23T02:47:11.134Z [WARN]  TestAgent_Services_ExternalConnectProxy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:11.135Z [DEBUG] TestAgent_Services_ExternalConnectProxy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:11.135Z [DEBUG] TestAgent_Services_ExternalConnectProxy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:11.851Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:de48a14d-fbac-223a-e46c-db6df20b8922 Address:127.0.0.1:16846}]"
>     writer.go:29: 2020-02-23T02:47:11.851Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.raft: entering follower state: follower="Node at 127.0.0.1:16846 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:11.852Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.serf.wan: serf: EventMemberJoin: Node-de48a14d-fbac-223a-e46c-db6df20b8922.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:11.852Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.serf.lan: serf: EventMemberJoin: Node-de48a14d-fbac-223a-e46c-db6df20b8922 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:11.852Z [INFO]  TestAgent_Services_ExternalConnectProxy.server: Handled event for server in area: event=member-join server=Node-de48a14d-fbac-223a-e46c-db6df20b8922.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:11.852Z [INFO]  TestAgent_Services_ExternalConnectProxy.server: Adding LAN server: server="Node-de48a14d-fbac-223a-e46c-db6df20b8922 (Addr: tcp/127.0.0.1:16846) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:11.852Z [INFO]  TestAgent_Services_ExternalConnectProxy: Started DNS server: address=127.0.0.1:16841 network=udp
>     writer.go:29: 2020-02-23T02:47:11.852Z [INFO]  TestAgent_Services_ExternalConnectProxy: Started DNS server: address=127.0.0.1:16841 network=tcp
>     writer.go:29: 2020-02-23T02:47:11.853Z [INFO]  TestAgent_Services_ExternalConnectProxy: Started HTTP server: address=127.0.0.1:16842 network=tcp
>     writer.go:29: 2020-02-23T02:47:11.853Z [INFO]  TestAgent_Services_ExternalConnectProxy: started state syncer
>     writer.go:29: 2020-02-23T02:47:11.891Z [WARN]  TestAgent_Services_ExternalConnectProxy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:11.891Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.raft: entering candidate state: node="Node at 127.0.0.1:16846 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:12.584Z [DEBUG] TestAgent_Services_ExternalConnectProxy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:12.584Z [DEBUG] TestAgent_Services_ExternalConnectProxy.server.raft: vote granted: from=de48a14d-fbac-223a-e46c-db6df20b8922 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:12.584Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:12.584Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.raft: entering leader state: leader="Node at 127.0.0.1:16846 [Leader]"
>     writer.go:29: 2020-02-23T02:47:12.584Z [INFO]  TestAgent_Services_ExternalConnectProxy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:12.584Z [INFO]  TestAgent_Services_ExternalConnectProxy.server: New leader elected: payload=Node-de48a14d-fbac-223a-e46c-db6df20b8922
>     writer.go:29: 2020-02-23T02:47:13.031Z [INFO]  TestAgent_Services_ExternalConnectProxy: Synced node info
>     writer.go:29: 2020-02-23T02:47:13.218Z [DEBUG] TestAgent_Services_ExternalConnectProxy: Node info in sync
>     writer.go:29: 2020-02-23T02:47:13.218Z [DEBUG] TestAgent_Services_ExternalConnectProxy: Node info in sync
>     writer.go:29: 2020-02-23T02:47:14.035Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:14.111Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:14.111Z [INFO]  TestAgent_Services_ExternalConnectProxy.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.112Z [DEBUG] TestAgent_Services_ExternalConnectProxy.server: Skipping self join check for node since the cluster is too small: node=Node-de48a14d-fbac-223a-e46c-db6df20b8922
>     writer.go:29: 2020-02-23T02:47:14.112Z [INFO]  TestAgent_Services_ExternalConnectProxy.server: member joined, marking health alive: member=Node-de48a14d-fbac-223a-e46c-db6df20b8922
>     writer.go:29: 2020-02-23T02:47:14.156Z [DEBUG] TestAgent_Services_ExternalConnectProxy: Node info in sync
>     writer.go:29: 2020-02-23T02:47:14.156Z [INFO]  TestAgent_Services_ExternalConnectProxy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:14.202Z [INFO]  TestAgent_Services_ExternalConnectProxy: Synced service: service=db-proxy
>     writer.go:29: 2020-02-23T02:47:14.202Z [INFO]  TestAgent_Services_ExternalConnectProxy.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:14.202Z [DEBUG] TestAgent_Services_ExternalConnectProxy.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.202Z [WARN]  TestAgent_Services_ExternalConnectProxy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:14.202Z [DEBUG] TestAgent_Services_ExternalConnectProxy.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.231Z [WARN]  TestAgent_Services_ExternalConnectProxy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: consul server down
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: shutdown complete
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: Stopping server: protocol=DNS address=127.0.0.1:16841 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: Stopping server: protocol=DNS address=127.0.0.1:16841 network=udp
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: Stopping server: protocol=HTTP address=127.0.0.1:16842 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:14.265Z [INFO]  TestAgent_Services_ExternalConnectProxy: Endpoints down
> === CONT  TestAgent_Services
> --- PASS: TestAgent_ServicesFiltered (1.89s)
>     writer.go:29: 2020-02-23T02:47:12.724Z [WARN]  TestAgent_ServicesFiltered: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:12.724Z [DEBUG] TestAgent_ServicesFiltered.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:12.724Z [DEBUG] TestAgent_ServicesFiltered.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:14.031Z [INFO]  TestAgent_ServicesFiltered.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3d9f78a5-cc5d-37cc-849c-d13f4907b232 Address:127.0.0.1:16876}]"
>     writer.go:29: 2020-02-23T02:47:14.031Z [INFO]  TestAgent_ServicesFiltered.server.raft: entering follower state: follower="Node at 127.0.0.1:16876 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:14.032Z [INFO]  TestAgent_ServicesFiltered.server.serf.wan: serf: EventMemberJoin: Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:14.032Z [INFO]  TestAgent_ServicesFiltered.server.serf.lan: serf: EventMemberJoin: Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:14.032Z [INFO]  TestAgent_ServicesFiltered: Started DNS server: address=127.0.0.1:16871 network=udp
>     writer.go:29: 2020-02-23T02:47:14.033Z [INFO]  TestAgent_ServicesFiltered.server: Adding LAN server: server="Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232 (Addr: tcp/127.0.0.1:16876) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:14.033Z [INFO]  TestAgent_ServicesFiltered.server: Handled event for server in area: event=member-join server=Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:14.033Z [INFO]  TestAgent_ServicesFiltered: Started DNS server: address=127.0.0.1:16871 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.033Z [INFO]  TestAgent_ServicesFiltered: Started HTTP server: address=127.0.0.1:16872 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.033Z [INFO]  TestAgent_ServicesFiltered: started state syncer
>     writer.go:29: 2020-02-23T02:47:14.085Z [WARN]  TestAgent_ServicesFiltered.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:14.085Z [INFO]  TestAgent_ServicesFiltered.server.raft: entering candidate state: node="Node at 127.0.0.1:16876 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:14.178Z [DEBUG] TestAgent_ServicesFiltered.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:14.178Z [DEBUG] TestAgent_ServicesFiltered.server.raft: vote granted: from=3d9f78a5-cc5d-37cc-849c-d13f4907b232 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:14.178Z [INFO]  TestAgent_ServicesFiltered.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:14.178Z [INFO]  TestAgent_ServicesFiltered.server.raft: entering leader state: leader="Node at 127.0.0.1:16876 [Leader]"
>     writer.go:29: 2020-02-23T02:47:14.178Z [INFO]  TestAgent_ServicesFiltered.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:14.178Z [INFO]  TestAgent_ServicesFiltered.server: New leader elected: payload=Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232
>     writer.go:29: 2020-02-23T02:47:14.311Z [INFO]  TestAgent_ServicesFiltered: Synced node info
>     writer.go:29: 2020-02-23T02:47:14.312Z [DEBUG] TestAgent_ServicesFiltered: Node info in sync
>     writer.go:29: 2020-02-23T02:47:14.355Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:14.445Z [INFO]  TestAgent_ServicesFiltered.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:14.445Z [INFO]  TestAgent_ServicesFiltered.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.445Z [DEBUG] TestAgent_ServicesFiltered.server: Skipping self join check for node since the cluster is too small: node=Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232
>     writer.go:29: 2020-02-23T02:47:14.445Z [INFO]  TestAgent_ServicesFiltered.server: member joined, marking health alive: member=Node-3d9f78a5-cc5d-37cc-849c-d13f4907b232
>     writer.go:29: 2020-02-23T02:47:14.516Z [INFO]  TestAgent_ServicesFiltered: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:14.516Z [INFO]  TestAgent_ServicesFiltered.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:14.516Z [DEBUG] TestAgent_ServicesFiltered.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.516Z [WARN]  TestAgent_ServicesFiltered.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:14.516Z [DEBUG] TestAgent_ServicesFiltered.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.581Z [WARN]  TestAgent_ServicesFiltered.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:14.611Z [INFO]  TestAgent_ServicesFiltered.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:14.611Z [INFO]  TestAgent_ServicesFiltered: consul server down
>     writer.go:29: 2020-02-23T02:47:14.611Z [INFO]  TestAgent_ServicesFiltered: shutdown complete
>     writer.go:29: 2020-02-23T02:47:14.611Z [INFO]  TestAgent_ServicesFiltered: Stopping server: protocol=DNS address=127.0.0.1:16871 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.611Z [INFO]  TestAgent_ServicesFiltered: Stopping server: protocol=DNS address=127.0.0.1:16871 network=udp
>     writer.go:29: 2020-02-23T02:47:14.612Z [INFO]  TestAgent_ServicesFiltered: Stopping server: protocol=HTTP address=127.0.0.1:16872 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.612Z [INFO]  TestAgent_ServicesFiltered: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:14.612Z [INFO]  TestAgent_ServicesFiltered: Endpoints down
> === CONT  TestACL_filterChecks
> --- PASS: TestACL_filterChecks (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.624Z [WARN]  TestACL_filterChecks: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.624Z [WARN]  TestACL_filterChecks: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.624Z [DEBUG] TestACL_filterChecks: dropping check from result due to ACLs: check=my-node
>     writer.go:29: 2020-02-23T02:47:14.624Z [DEBUG] TestACL_filterChecks: dropping check from result due to ACLs: check=my-other
>     writer.go:29: 2020-02-23T02:47:14.624Z [DEBUG] TestACL_filterChecks: dropping check from result due to ACLs: check=my-service
>     writer.go:29: 2020-02-23T02:47:14.624Z [DEBUG] TestACL_filterChecks: dropping check from result due to ACLs: check=my-other
> === CONT  TestACL_filterServices
> --- PASS: TestACL_filterServices (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.639Z [WARN]  TestACL_filterServices: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.639Z [WARN]  TestACL_filterServices: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.639Z [DEBUG] TestACL_filterServices: dropping service from result due to ACLs: service=my-other
> === CONT  TestACL_filterMembers
> --- PASS: TestACL_filterMembers (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.651Z [WARN]  TestACL_filterMembers: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.651Z [WARN]  TestACL_filterMembers: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.653Z [DEBUG] TestACL_filterMembers: dropping node from result due to ACLs: node=Nope accessorID=9df2d1a4-2d07-414e-8ead-6053f56ed2eb
> === CONT  TestACL_vetCheckUpdate
> --- PASS: TestACL_vetCheckUpdate (0.02s)
>     writer.go:29: 2020-02-23T02:47:14.670Z [WARN]  TestACL_vetCheckUpdate: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.670Z [WARN]  TestACL_vetCheckUpdate: bootstrap = true: do not enable unless necessary
> === CONT  TestACL_vetCheckRegister
> --- PASS: TestACL_vetCheckRegister (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.683Z [WARN]  TestACL_vetCheckRegister: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.683Z [WARN]  TestACL_vetCheckRegister: bootstrap = true: do not enable unless necessary
> === CONT  TestACL_vetServiceUpdate
> --- PASS: TestACL_vetServiceUpdate (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.694Z [WARN]  TestACL_vetServiceUpdate: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.694Z [WARN]  TestACL_vetServiceUpdate: bootstrap = true: do not enable unless necessary
> === CONT  TestACL_vetServiceRegister
> --- PASS: TestACL_vetServiceRegister (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.702Z [WARN]  TestACL_vetServiceRegister: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.702Z [WARN]  TestACL_vetServiceRegister: bootstrap = true: do not enable unless necessary
> === CONT  TestACL_RootAuthorizersDenied
> --- PASS: TestACL_RootAuthorizersDenied (0.01s)
>     writer.go:29: 2020-02-23T02:47:14.709Z [WARN]  TestACL_RootAuthorizersDenied: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.709Z [WARN]  TestACL_RootAuthorizersDenied: bootstrap = true: do not enable unless necessary
> === CONT  TestACL_Version8
> === RUN   TestACL_Version8/version_8_disabled
> === RUN   TestACL_Version8/version_8_enabled
> --- PASS: TestACL_Version8 (0.02s)
>     --- PASS: TestACL_Version8/version_8_disabled (0.01s)
>         writer.go:29: 2020-02-23T02:47:14.723Z [WARN]  TestACL_Version8/version_8_disabled: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:47:14.723Z [WARN]  TestACL_Version8/version_8_disabled: bootstrap = true: do not enable unless necessary
>     --- PASS: TestACL_Version8/version_8_enabled (0.01s)
>         writer.go:29: 2020-02-23T02:47:14.730Z [WARN]  TestACL_Version8/version_8_enabled: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>         writer.go:29: 2020-02-23T02:47:14.730Z [WARN]  TestACL_Version8/version_8_enabled: bootstrap = true: do not enable unless necessary
> === CONT  TestACL_Authorize
> --- PASS: TestAgent_Services (0.60s)
>     writer.go:29: 2020-02-23T02:47:14.274Z [WARN]  TestAgent_Services: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.274Z [DEBUG] TestAgent_Services.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:14.274Z [DEBUG] TestAgent_Services.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:14.431Z [INFO]  TestAgent_Services.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:2a760bdd-c5ec-768e-3240-495bb9ef2277 Address:127.0.0.1:16864}]"
>     writer.go:29: 2020-02-23T02:47:14.431Z [INFO]  TestAgent_Services.server.raft: entering follower state: follower="Node at 127.0.0.1:16864 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:14.432Z [INFO]  TestAgent_Services.server.serf.wan: serf: EventMemberJoin: Node-2a760bdd-c5ec-768e-3240-495bb9ef2277.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:14.432Z [INFO]  TestAgent_Services.server.serf.lan: serf: EventMemberJoin: Node-2a760bdd-c5ec-768e-3240-495bb9ef2277 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:14.432Z [INFO]  TestAgent_Services.server: Adding LAN server: server="Node-2a760bdd-c5ec-768e-3240-495bb9ef2277 (Addr: tcp/127.0.0.1:16864) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:14.432Z [INFO]  TestAgent_Services.server: Handled event for server in area: event=member-join server=Node-2a760bdd-c5ec-768e-3240-495bb9ef2277.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:14.433Z [INFO]  TestAgent_Services: Started DNS server: address=127.0.0.1:16859 network=udp
>     writer.go:29: 2020-02-23T02:47:14.433Z [INFO]  TestAgent_Services: Started DNS server: address=127.0.0.1:16859 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.433Z [INFO]  TestAgent_Services: Started HTTP server: address=127.0.0.1:16860 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.433Z [INFO]  TestAgent_Services: started state syncer
>     writer.go:29: 2020-02-23T02:47:14.486Z [WARN]  TestAgent_Services.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:14.486Z [INFO]  TestAgent_Services.server.raft: entering candidate state: node="Node at 127.0.0.1:16864 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:14.621Z [DEBUG] TestAgent_Services.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:14.621Z [DEBUG] TestAgent_Services.server.raft: vote granted: from=2a760bdd-c5ec-768e-3240-495bb9ef2277 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:14.621Z [INFO]  TestAgent_Services.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:14.621Z [INFO]  TestAgent_Services.server.raft: entering leader state: leader="Node at 127.0.0.1:16864 [Leader]"
>     writer.go:29: 2020-02-23T02:47:14.627Z [INFO]  TestAgent_Services.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:14.627Z [INFO]  TestAgent_Services.server: New leader elected: payload=Node-2a760bdd-c5ec-768e-3240-495bb9ef2277
>     writer.go:29: 2020-02-23T02:47:14.707Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:14.707Z [INFO]  TestAgent_Services: Synced node info
>     writer.go:29: 2020-02-23T02:47:14.707Z [DEBUG] TestAgent_Services: Node info in sync
>     writer.go:29: 2020-02-23T02:47:14.765Z [INFO]  TestAgent_Services.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:14.765Z [INFO]  TestAgent_Services.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.765Z [DEBUG] TestAgent_Services.server: Skipping self join check for node since the cluster is too small: node=Node-2a760bdd-c5ec-768e-3240-495bb9ef2277
>     writer.go:29: 2020-02-23T02:47:14.765Z [INFO]  TestAgent_Services.server: member joined, marking health alive: member=Node-2a760bdd-c5ec-768e-3240-495bb9ef2277
>     writer.go:29: 2020-02-23T02:47:14.838Z [INFO]  TestAgent_Services: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:14.838Z [INFO]  TestAgent_Services.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:14.838Z [DEBUG] TestAgent_Services.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.838Z [WARN]  TestAgent_Services.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:14.838Z [DEBUG] TestAgent_Services.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.851Z [WARN]  TestAgent_Services.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestAgent_Services.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestAgent_Services: consul server down
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestAgent_Services: shutdown complete
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestAgent_Services: Stopping server: protocol=DNS address=127.0.0.1:16859 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestAgent_Services: Stopping server: protocol=DNS address=127.0.0.1:16859 network=udp
>     writer.go:29: 2020-02-23T02:47:14.867Z [INFO]  TestAgent_Services: Stopping server: protocol=HTTP address=127.0.0.1:16860 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.867Z [INFO]  TestAgent_Services: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:14.867Z [INFO]  TestAgent_Services: Endpoints down
> === CONT  TestACL_LoginProcedure_HTTP
> --- PASS: TestAgent_Leave_ACLDeny (11.42s)
>     writer.go:29: 2020-02-23T02:47:03.514Z [WARN]  TestAgent_Leave_ACLDeny: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:03.514Z [WARN]  TestAgent_Leave_ACLDeny: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:03.514Z [DEBUG] TestAgent_Leave_ACLDeny.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:03.514Z [DEBUG] TestAgent_Leave_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:03.524Z [INFO]  TestAgent_Leave_ACLDeny.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b Address:127.0.0.1:16708}]"
>     writer.go:29: 2020-02-23T02:47:03.524Z [INFO]  TestAgent_Leave_ACLDeny.server.raft: entering follower state: follower="Node at 127.0.0.1:16708 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:03.525Z [INFO]  TestAgent_Leave_ACLDeny.server.serf.wan: serf: EventMemberJoin: Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.526Z [INFO]  TestAgent_Leave_ACLDeny.server.serf.lan: serf: EventMemberJoin: Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.526Z [INFO]  TestAgent_Leave_ACLDeny.server: Adding LAN server: server="Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b (Addr: tcp/127.0.0.1:16708) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.526Z [INFO]  TestAgent_Leave_ACLDeny.server: Handled event for server in area: event=member-join server=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.526Z [INFO]  TestAgent_Leave_ACLDeny: Started DNS server: address=127.0.0.1:16703 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.526Z [INFO]  TestAgent_Leave_ACLDeny: Started DNS server: address=127.0.0.1:16703 network=udp
>     writer.go:29: 2020-02-23T02:47:03.527Z [INFO]  TestAgent_Leave_ACLDeny: Started HTTP server: address=127.0.0.1:16704 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.527Z [INFO]  TestAgent_Leave_ACLDeny: started state syncer
>     writer.go:29: 2020-02-23T02:47:03.587Z [WARN]  TestAgent_Leave_ACLDeny.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:03.587Z [INFO]  TestAgent_Leave_ACLDeny.server.raft: entering candidate state: node="Node at 127.0.0.1:16708 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:03.591Z [DEBUG] TestAgent_Leave_ACLDeny.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:03.591Z [DEBUG] TestAgent_Leave_ACLDeny.server.raft: vote granted: from=cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:03.591Z [INFO]  TestAgent_Leave_ACLDeny.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:03.591Z [INFO]  TestAgent_Leave_ACLDeny.server.raft: entering leader state: leader="Node at 127.0.0.1:16708 [Leader]"
>     writer.go:29: 2020-02-23T02:47:03.591Z [INFO]  TestAgent_Leave_ACLDeny.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:03.591Z [INFO]  TestAgent_Leave_ACLDeny.server: New leader elected: payload=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:03.593Z [INFO]  TestAgent_Leave_ACLDeny.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:03.595Z [INFO]  TestAgent_Leave_ACLDeny.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:03.595Z [WARN]  TestAgent_Leave_ACLDeny.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:03.597Z [INFO]  TestAgent_Leave_ACLDeny.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:03.601Z [INFO]  TestAgent_Leave_ACLDeny.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:03.601Z [INFO]  TestAgent_Leave_ACLDeny.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:03.601Z [INFO]  TestAgent_Leave_ACLDeny.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:03.601Z [INFO]  TestAgent_Leave_ACLDeny.server.serf.lan: serf: EventMemberUpdate: Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:03.601Z [INFO]  TestAgent_Leave_ACLDeny.server.serf.wan: serf: EventMemberUpdate: Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b.dc1
>     writer.go:29: 2020-02-23T02:47:03.601Z [INFO]  TestAgent_Leave_ACLDeny.server: Handled event for server in area: event=member-update server=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.606Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.613Z [INFO]  TestAgent_Leave_ACLDeny.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.613Z [INFO]  TestAgent_Leave_ACLDeny.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.613Z [DEBUG] TestAgent_Leave_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:03.613Z [INFO]  TestAgent_Leave_ACLDeny.server: member joined, marking health alive: member=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:03.616Z [DEBUG] TestAgent_Leave_ACLDeny.server: Skipping self join check for node since the cluster is too small: node=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:03.792Z [DEBUG] TestAgent_Leave_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:03.821Z [INFO]  TestAgent_Leave_ACLDeny: Synced node info
>     writer.go:29: 2020-02-23T02:47:03.845Z [DEBUG] TestAgent_Leave_ACLDeny.acl: dropping node from result due to ACLs: node=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:03.846Z [DEBUG] TestAgent_Leave_ACLDeny.acl: dropping node from result due to ACLs: node=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     --- PASS: TestAgent_Leave_ACLDeny/no_token (0.00s)
>     --- PASS: TestAgent_Leave_ACLDeny/read-only_token (0.04s)
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_Leave_ACLDeny.server: server starting leave
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_Leave_ACLDeny.server.serf.wan: serf: EventMemberLeave: Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_Leave_ACLDeny.server: Handled event for server in area: event=member-leave server=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.881Z [INFO]  TestAgent_Leave_ACLDeny.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:05.603Z [DEBUG] TestAgent_Leave_ACLDeny.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.812Z [DEBUG] TestAgent_Leave_ACLDeny: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:05.812Z [DEBUG] TestAgent_Leave_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:05.812Z [DEBUG] TestAgent_Leave_ACLDeny: Node info in sync
>     writer.go:29: 2020-02-23T02:47:06.881Z [INFO]  TestAgent_Leave_ACLDeny.server.serf.lan: serf: EventMemberLeave: Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:06.881Z [INFO]  TestAgent_Leave_ACLDeny.server: Removing LAN server: server="Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b (Addr: tcp/127.0.0.1:16708) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:06.881Z [WARN]  TestAgent_Leave_ACLDeny.server: deregistering self should be done by follower: name=Node-cd6dedfc-ae52-c7c3-bb4e-84ef76acb51b
>     writer.go:29: 2020-02-23T02:47:07.603Z [ERROR] TestAgent_Leave_ACLDeny.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:09.603Z [ERROR] TestAgent_Leave_ACLDeny.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:09.881Z [INFO]  TestAgent_Leave_ACLDeny.server: Waiting to drain RPC traffic: drain_time=5s
>     writer.go:29: 2020-02-23T02:47:11.603Z [ERROR] TestAgent_Leave_ACLDeny.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:13.603Z [ERROR] TestAgent_Leave_ACLDeny.server.autopilot: Error updating cluster health: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:13.603Z [ERROR] TestAgent_Leave_ACLDeny.server.autopilot: Error promoting servers: error="error getting server raft protocol versions: No servers found"
>     writer.go:29: 2020-02-23T02:47:14.882Z [INFO]  TestAgent_Leave_ACLDeny: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:14.882Z [INFO]  TestAgent_Leave_ACLDeny.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestAgent_Leave_ACLDeny.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestAgent_Leave_ACLDeny.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestAgent_Leave_ACLDeny.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestAgent_Leave_ACLDeny.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestAgent_Leave_ACLDeny.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestAgent_Leave_ACLDeny.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: consul server down
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: shutdown complete
>     --- PASS: TestAgent_Leave_ACLDeny/agent_master_token (11.05s)
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16703 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: Stopping server: protocol=DNS address=127.0.0.1:16703 network=udp
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: Stopping server: protocol=HTTP address=127.0.0.1:16704 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:14.928Z [INFO]  TestAgent_Leave_ACLDeny: Endpoints down
> === CONT  TestACL_HTTP
> === RUN   TestACL_HTTP/Policy
> === RUN   TestACL_HTTP/Policy/Create
> === RUN   TestACL_HTTP/Policy/Minimal
> === RUN   TestACL_HTTP/Policy/Name_Chars
> === RUN   TestACL_HTTP/Policy/Update_Name_ID_Mismatch
> === RUN   TestACL_HTTP/Policy/Policy_CRUD_Missing_ID_in_URL
> === RUN   TestACL_HTTP/Policy/Update
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Create
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Create_other
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Update_Name_URL_Mismatch
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Update
> === RUN   TestACL_HTTP/Policy/ID_Supplied
> === RUN   TestACL_HTTP/Policy/Invalid_payload
> === RUN   TestACL_HTTP/Policy/Delete
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Invalid_payload
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/List
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Delete
> === RUN   TestACL_LoginProcedure_HTTP/AuthMethod/Read
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/Create
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/Create_other
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/BindingRule_CRUD_Missing_ID_in_URL
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/Update
> === RUN   TestACL_HTTP/Policy/List
> === RUN   TestACL_HTTP/Policy/Read
> === RUN   TestACL_HTTP/Role
> === RUN   TestACL_HTTP/Role/Create
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/ID_Supplied
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/Invalid_payload
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/List
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/Delete
> === RUN   TestACL_HTTP/Role/Name_Chars
> === RUN   TestACL_LoginProcedure_HTTP/BindingRule/Read
> === RUN   TestACL_LoginProcedure_HTTP/Login
> === RUN   TestACL_LoginProcedure_HTTP/Login/Create_Token_1
> === RUN   TestACL_HTTP/Role/Update_Name_ID_Mismatch
> === RUN   TestACL_HTTP/Role/Role_CRUD_Missing_ID_in_URL
> === RUN   TestACL_HTTP/Role/Update
> === RUN   TestACL_LoginProcedure_HTTP/Login/Create_Token_2
> === RUN   TestACL_HTTP/Role/ID_Supplied
> === RUN   TestACL_HTTP/Role/Invalid_payload
> === RUN   TestACL_HTTP/Role/Delete
> === RUN   TestACL_LoginProcedure_HTTP/Login/List_Tokens_by_(incorrect)_Method
> === RUN   TestACL_LoginProcedure_HTTP/Login/List_Tokens_by_(correct)_Method
> === RUN   TestACL_LoginProcedure_HTTP/Login/Logout
> 2020-02-23T02:47:16.428Z [ERROR] watch.watch: Watch errored: type=key error="Get https://127.0.0.1:17143/v1/kv/asdf: dial tcp 127.0.0.1:17143: connect: connection refused" retry=45s
> === RUN   TestACL_HTTP/Role/List
> === RUN   TestACL_HTTP/Role/Read
> === RUN   TestACL_HTTP/Token
> === RUN   TestACL_HTTP/Token/Create
> === RUN   TestACL_LoginProcedure_HTTP/Login/Token_is_gone_after_Logout
> === RUN   TestACL_HTTP/Token/Create_Local
> --- PASS: TestACL_LoginProcedure_HTTP (1.63s)
>     writer.go:29: 2020-02-23T02:47:14.882Z [WARN]  TestACL_LoginProcedure_HTTP: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.882Z [WARN]  TestACL_LoginProcedure_HTTP: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.882Z [DEBUG] TestACL_LoginProcedure_HTTP.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:14.883Z [DEBUG] TestACL_LoginProcedure_HTTP.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:15.048Z [INFO]  TestACL_LoginProcedure_HTTP.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:4ef198ae-a2d8-5d11-5297-7c460baec263 Address:127.0.0.1:16870}]"
>     writer.go:29: 2020-02-23T02:47:15.049Z [INFO]  TestACL_LoginProcedure_HTTP.server.serf.wan: serf: EventMemberJoin: Node-4ef198ae-a2d8-5d11-5297-7c460baec263.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:15.049Z [INFO]  TestACL_LoginProcedure_HTTP.server.serf.lan: serf: EventMemberJoin: Node-4ef198ae-a2d8-5d11-5297-7c460baec263 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:15.049Z [INFO]  TestACL_LoginProcedure_HTTP: Started DNS server: address=127.0.0.1:16865 network=udp
>     writer.go:29: 2020-02-23T02:47:15.049Z [INFO]  TestACL_LoginProcedure_HTTP.server.raft: entering follower state: follower="Node at 127.0.0.1:16870 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:15.049Z [INFO]  TestACL_LoginProcedure_HTTP.server: Adding LAN server: server="Node-4ef198ae-a2d8-5d11-5297-7c460baec263 (Addr: tcp/127.0.0.1:16870) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:15.049Z [INFO]  TestACL_LoginProcedure_HTTP.server: Handled event for server in area: event=member-join server=Node-4ef198ae-a2d8-5d11-5297-7c460baec263.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.050Z [INFO]  TestACL_LoginProcedure_HTTP: Started DNS server: address=127.0.0.1:16865 network=tcp
>     writer.go:29: 2020-02-23T02:47:15.050Z [INFO]  TestACL_LoginProcedure_HTTP: Started HTTP server: address=127.0.0.1:16866 network=tcp
>     writer.go:29: 2020-02-23T02:47:15.050Z [INFO]  TestACL_LoginProcedure_HTTP: started state syncer
>     writer.go:29: 2020-02-23T02:47:15.115Z [WARN]  TestACL_LoginProcedure_HTTP.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:15.115Z [INFO]  TestACL_LoginProcedure_HTTP.server.raft: entering candidate state: node="Node at 127.0.0.1:16870 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:15.165Z [DEBUG] TestACL_LoginProcedure_HTTP.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:15.165Z [DEBUG] TestACL_LoginProcedure_HTTP.server.raft: vote granted: from=4ef198ae-a2d8-5d11-5297-7c460baec263 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:15.165Z [INFO]  TestACL_LoginProcedure_HTTP.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:15.165Z [INFO]  TestACL_LoginProcedure_HTTP.server.raft: entering leader state: leader="Node at 127.0.0.1:16870 [Leader]"
>     writer.go:29: 2020-02-23T02:47:15.165Z [INFO]  TestACL_LoginProcedure_HTTP.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:15.165Z [INFO]  TestACL_LoginProcedure_HTTP.server: New leader elected: payload=Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.181Z [ERROR] TestACL_LoginProcedure_HTTP.anti_entropy: failed to sync remote state: error="ACL not found"
>     writer.go:29: 2020-02-23T02:47:15.200Z [INFO]  TestACL_LoginProcedure_HTTP.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:15.205Z [INFO]  TestACL_LoginProcedure_HTTP.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:15.228Z [INFO]  TestACL_LoginProcedure_HTTP.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:15.228Z [WARN]  TestACL_LoginProcedure_HTTP.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:15.378Z [INFO]  TestACL_LoginProcedure_HTTP.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:15.378Z [INFO]  TestACL_LoginProcedure_HTTP.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:15.378Z [WARN]  TestACL_LoginProcedure_HTTP.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:15.438Z [INFO]  TestACL_LoginProcedure_HTTP.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:15.438Z [INFO]  TestACL_LoginProcedure_HTTP.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:15.438Z [INFO]  TestACL_LoginProcedure_HTTP.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:15.438Z [INFO]  TestACL_LoginProcedure_HTTP.server.serf.lan: serf: EventMemberUpdate: Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.439Z [INFO]  TestACL_LoginProcedure_HTTP.server.serf.wan: serf: EventMemberUpdate: Node-4ef198ae-a2d8-5d11-5297-7c460baec263.dc1
>     writer.go:29: 2020-02-23T02:47:15.439Z [INFO]  TestACL_LoginProcedure_HTTP.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:15.439Z [DEBUG] TestACL_LoginProcedure_HTTP.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:15.439Z [INFO]  TestACL_LoginProcedure_HTTP.server.serf.lan: serf: EventMemberUpdate: Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.439Z [INFO]  TestACL_LoginProcedure_HTTP.server.serf.wan: serf: EventMemberUpdate: Node-4ef198ae-a2d8-5d11-5297-7c460baec263.dc1
>     writer.go:29: 2020-02-23T02:47:15.439Z [INFO]  TestACL_LoginProcedure_HTTP.server: Handled event for server in area: event=member-update server=Node-4ef198ae-a2d8-5d11-5297-7c460baec263.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.439Z [INFO]  TestACL_LoginProcedure_HTTP.server: Handled event for server in area: event=member-update server=Node-4ef198ae-a2d8-5d11-5297-7c460baec263.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.618Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:15.808Z [INFO]  TestACL_LoginProcedure_HTTP.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:15.808Z [INFO]  TestACL_LoginProcedure_HTTP.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:15.808Z [DEBUG] TestACL_LoginProcedure_HTTP.server: Skipping self join check for node since the cluster is too small: node=Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.808Z [INFO]  TestACL_LoginProcedure_HTTP.server: member joined, marking health alive: member=Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.899Z [DEBUG] TestACL_LoginProcedure_HTTP.server: Skipping self join check for node since the cluster is too small: node=Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.899Z [DEBUG] TestACL_LoginProcedure_HTTP.server: Skipping self join check for node since the cluster is too small: node=Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     writer.go:29: 2020-02-23T02:47:15.910Z [DEBUG] TestACL_LoginProcedure_HTTP.acl: dropping node from result due to ACLs: node=Node-4ef198ae-a2d8-5d11-5297-7c460baec263
>     --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod (0.27s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Create (0.06s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Create_other (0.08s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Update_Name_URL_Mismatch (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Update (0.09s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Invalid_payload (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/List (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Delete (0.04s)
>         --- PASS: TestACL_LoginProcedure_HTTP/AuthMethod/Read (0.00s)
>     --- PASS: TestACL_LoginProcedure_HTTP/BindingRule (0.15s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/Create (0.03s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/Create_other (0.04s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/BindingRule_CRUD_Missing_ID_in_URL (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/Update (0.03s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/ID_Supplied (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/Invalid_payload (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/List (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/Delete (0.05s)
>         --- PASS: TestACL_LoginProcedure_HTTP/BindingRule/Read (0.00s)
>     --- PASS: TestACL_LoginProcedure_HTTP/Login (0.11s)
>         --- PASS: TestACL_LoginProcedure_HTTP/Login/Create_Token_1 (0.03s)
>         --- PASS: TestACL_LoginProcedure_HTTP/Login/Create_Token_2 (0.04s)
>         --- PASS: TestACL_LoginProcedure_HTTP/Login/List_Tokens_by_(incorrect)_Method (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/Login/List_Tokens_by_(correct)_Method (0.00s)
>         --- PASS: TestACL_LoginProcedure_HTTP/Login/Logout (0.04s)
>         --- PASS: TestACL_LoginProcedure_HTTP/Login/Token_is_gone_after_Logout (0.00s)
>     writer.go:29: 2020-02-23T02:47:16.442Z [INFO]  TestACL_LoginProcedure_HTTP: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:16.442Z [INFO]  TestACL_LoginProcedure_HTTP.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:16.442Z [DEBUG] TestACL_LoginProcedure_HTTP.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.442Z [DEBUG] TestACL_LoginProcedure_HTTP.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.442Z [DEBUG] TestACL_LoginProcedure_HTTP.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:16.442Z [WARN]  TestACL_LoginProcedure_HTTP.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:16.442Z [DEBUG] TestACL_LoginProcedure_HTTP.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.442Z [DEBUG] TestACL_LoginProcedure_HTTP.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:16.442Z [DEBUG] TestACL_LoginProcedure_HTTP.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.478Z [WARN]  TestACL_LoginProcedure_HTTP.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:16.501Z [INFO]  TestACL_LoginProcedure_HTTP.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:16.501Z [INFO]  TestACL_LoginProcedure_HTTP: consul server down
>     writer.go:29: 2020-02-23T02:47:16.501Z [INFO]  TestACL_LoginProcedure_HTTP: shutdown complete
>     writer.go:29: 2020-02-23T02:47:16.502Z [INFO]  TestACL_LoginProcedure_HTTP: Stopping server: protocol=DNS address=127.0.0.1:16865 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.502Z [INFO]  TestACL_LoginProcedure_HTTP: Stopping server: protocol=DNS address=127.0.0.1:16865 network=udp
>     writer.go:29: 2020-02-23T02:47:16.502Z [INFO]  TestACL_LoginProcedure_HTTP: Stopping server: protocol=HTTP address=127.0.0.1:16866 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.502Z [INFO]  TestACL_LoginProcedure_HTTP: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:16.502Z [INFO]  TestACL_LoginProcedure_HTTP: Endpoints down
> === CONT  TestACL_Bootstrap
> === RUN   TestACL_HTTP/Token/Read
> === RUN   TestACL_HTTP/Token/Self
> === RUN   TestACL_HTTP/Token/Clone
> === RUN   TestACL_HTTP/Token/Update
> === RUN   TestACL_HTTP/Token/CRUD_Missing_Token_Accessor_ID
> === RUN   TestACL_HTTP/Token/Update_Accessor_Mismatch
> === RUN   TestACL_HTTP/Token/Delete
> === RUN   TestACL_HTTP/Token/List
> === RUN   TestACL_HTTP/Token/List_by_Policy
> === RUN   TestACL_HTTP/Token/Create_with_Accessor
> === RUN   TestACL_HTTP/Token/Create_with_Secret
> === RUN   TestACL_HTTP/Token/Create_with_Accessor_and_Secret
> === RUN   TestACL_HTTP/Token/Create_with_Accessor_Dup
> === RUN   TestACL_HTTP/Token/Create_with_Secret_as_Accessor_Dup
> === RUN   TestACL_HTTP/Token/Create_with_Secret_Dup
> === RUN   TestACL_HTTP/Token/Create_with_Accessor_as_Secret_Dup
> === RUN   TestACL_HTTP/Token/Create_with_Reserved_Accessor
> === RUN   TestACL_HTTP/Token/Create_with_Reserved_Secret
> --- PASS: TestACL_HTTP (1.78s)
>     writer.go:29: 2020-02-23T02:47:14.952Z [WARN]  TestACL_HTTP: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:14.953Z [WARN]  TestACL_HTTP: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.953Z [DEBUG] TestACL_HTTP.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:14.953Z [DEBUG] TestACL_HTTP.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:15.115Z [INFO]  TestACL_HTTP.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:9dde74c5-24c4-8783-4564-0405cab98bed Address:127.0.0.1:16894}]"
>     writer.go:29: 2020-02-23T02:47:15.115Z [INFO]  TestACL_HTTP.server.raft: entering follower state: follower="Node at 127.0.0.1:16894 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:15.116Z [INFO]  TestACL_HTTP.server.serf.wan: serf: EventMemberJoin: Node-9dde74c5-24c4-8783-4564-0405cab98bed.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:15.116Z [INFO]  TestACL_HTTP.server.serf.lan: serf: EventMemberJoin: Node-9dde74c5-24c4-8783-4564-0405cab98bed 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:15.116Z [INFO]  TestACL_HTTP.server: Adding LAN server: server="Node-9dde74c5-24c4-8783-4564-0405cab98bed (Addr: tcp/127.0.0.1:16894) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:15.116Z [INFO]  TestACL_HTTP: Started DNS server: address=127.0.0.1:16889 network=udp
>     writer.go:29: 2020-02-23T02:47:15.116Z [INFO]  TestACL_HTTP.server: Handled event for server in area: event=member-join server=Node-9dde74c5-24c4-8783-4564-0405cab98bed.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.116Z [INFO]  TestACL_HTTP: Started DNS server: address=127.0.0.1:16889 network=tcp
>     writer.go:29: 2020-02-23T02:47:15.117Z [INFO]  TestACL_HTTP: Started HTTP server: address=127.0.0.1:16890 network=tcp
>     writer.go:29: 2020-02-23T02:47:15.117Z [INFO]  TestACL_HTTP: started state syncer
>     writer.go:29: 2020-02-23T02:47:15.183Z [WARN]  TestACL_HTTP.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:15.183Z [INFO]  TestACL_HTTP.server.raft: entering candidate state: node="Node at 127.0.0.1:16894 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:15.251Z [DEBUG] TestACL_HTTP.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:15.251Z [DEBUG] TestACL_HTTP.server.raft: vote granted: from=9dde74c5-24c4-8783-4564-0405cab98bed term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:15.251Z [INFO]  TestACL_HTTP.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:15.251Z [INFO]  TestACL_HTTP.server.raft: entering leader state: leader="Node at 127.0.0.1:16894 [Leader]"
>     writer.go:29: 2020-02-23T02:47:15.251Z [INFO]  TestACL_HTTP.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:15.252Z [INFO]  TestACL_HTTP.server: New leader elected: payload=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:15.267Z [INFO]  TestACL_HTTP.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:15.271Z [ERROR] TestACL_HTTP.anti_entropy: failed to sync remote state: error="ACL not found"
>     writer.go:29: 2020-02-23T02:47:15.341Z [INFO]  TestACL_HTTP.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:15.342Z [WARN]  TestACL_HTTP.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:15.342Z [INFO]  TestACL_HTTP.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:15.342Z [WARN]  TestACL_HTTP.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:15.536Z [INFO]  TestACL_HTTP.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:15.536Z [INFO]  TestACL_HTTP.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:15.615Z [INFO]  TestACL_HTTP: Synced node info
>     writer.go:29: 2020-02-23T02:47:15.615Z [DEBUG] TestACL_HTTP: Node info in sync
>     writer.go:29: 2020-02-23T02:47:15.615Z [INFO]  TestACL_HTTP.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:15.615Z [INFO]  TestACL_HTTP.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:15.615Z [INFO]  TestACL_HTTP.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:15.615Z [INFO]  TestACL_HTTP.server.serf.lan: serf: EventMemberUpdate: Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:15.615Z [INFO]  TestACL_HTTP.server.serf.wan: serf: EventMemberUpdate: Node-9dde74c5-24c4-8783-4564-0405cab98bed.dc1
>     writer.go:29: 2020-02-23T02:47:15.616Z [INFO]  TestACL_HTTP.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:15.616Z [DEBUG] TestACL_HTTP.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:15.616Z [INFO]  TestACL_HTTP.server.serf.lan: serf: EventMemberUpdate: Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:15.616Z [INFO]  TestACL_HTTP.server.serf.wan: serf: EventMemberUpdate: Node-9dde74c5-24c4-8783-4564-0405cab98bed.dc1
>     writer.go:29: 2020-02-23T02:47:15.616Z [INFO]  TestACL_HTTP.server: Handled event for server in area: event=member-update server=Node-9dde74c5-24c4-8783-4564-0405cab98bed.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.616Z [INFO]  TestACL_HTTP.server: Handled event for server in area: event=member-update server=Node-9dde74c5-24c4-8783-4564-0405cab98bed.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.622Z [DEBUG] TestACL_HTTP.acl: dropping node from result due to ACLs: node=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:15.622Z [DEBUG] TestACL_HTTP.acl: dropping node from result due to ACLs: node=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:15.951Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:16.251Z [INFO]  TestACL_HTTP.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:16.252Z [INFO]  TestACL_HTTP.leader: started routine: routine="CA root pruning"
>     --- PASS: TestACL_HTTP/Policy (0.63s)
>         --- PASS: TestACL_HTTP/Policy/Create (0.06s)
>         --- PASS: TestACL_HTTP/Policy/Minimal (0.12s)
>         --- PASS: TestACL_HTTP/Policy/Name_Chars (0.10s)
>         --- PASS: TestACL_HTTP/Policy/Update_Name_ID_Mismatch (0.00s)
>         --- PASS: TestACL_HTTP/Policy/Policy_CRUD_Missing_ID_in_URL (0.00s)
>         --- PASS: TestACL_HTTP/Policy/Update (0.16s)
>         --- PASS: TestACL_HTTP/Policy/ID_Supplied (0.00s)
>         --- PASS: TestACL_HTTP/Policy/Invalid_payload (0.00s)
>         --- PASS: TestACL_HTTP/Policy/Delete (0.19s)
>         --- PASS: TestACL_HTTP/Policy/List (0.00s)
>         --- PASS: TestACL_HTTP/Policy/Read (0.00s)
>     writer.go:29: 2020-02-23T02:47:16.252Z [DEBUG] TestACL_HTTP.server: Skipping self join check for node since the cluster is too small: node=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:16.252Z [INFO]  TestACL_HTTP.server: member joined, marking health alive: member=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:16.367Z [DEBUG] TestACL_HTTP.server: Skipping self join check for node since the cluster is too small: node=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     writer.go:29: 2020-02-23T02:47:16.367Z [DEBUG] TestACL_HTTP.server: Skipping self join check for node since the cluster is too small: node=Node-9dde74c5-24c4-8783-4564-0405cab98bed
>     --- PASS: TestACL_HTTP/Role (0.19s)
>         --- PASS: TestACL_HTTP/Role/Create (0.03s)
>         --- PASS: TestACL_HTTP/Role/Name_Chars (0.08s)
>         --- PASS: TestACL_HTTP/Role/Update_Name_ID_Mismatch (0.00s)
>         --- PASS: TestACL_HTTP/Role/Role_CRUD_Missing_ID_in_URL (0.00s)
>         --- PASS: TestACL_HTTP/Role/Update (0.03s)
>         --- PASS: TestACL_HTTP/Role/ID_Supplied (0.00s)
>         --- PASS: TestACL_HTTP/Role/Invalid_payload (0.00s)
>         --- PASS: TestACL_HTTP/Role/Delete (0.04s)
>         --- PASS: TestACL_HTTP/Role/List (0.00s)
>         --- PASS: TestACL_HTTP/Role/Read (0.00s)
>     --- PASS: TestACL_HTTP/Token (0.26s)
>         --- PASS: TestACL_HTTP/Token/Create (0.05s)
>         --- PASS: TestACL_HTTP/Token/Create_Local (0.03s)
>         --- PASS: TestACL_HTTP/Token/Read (0.00s)
>         --- PASS: TestACL_HTTP/Token/Self (0.00s)
>         --- PASS: TestACL_HTTP/Token/Clone (0.02s)
>         --- PASS: TestACL_HTTP/Token/Update (0.04s)
>         --- PASS: TestACL_HTTP/Token/CRUD_Missing_Token_Accessor_ID (0.00s)
>         --- PASS: TestACL_HTTP/Token/Update_Accessor_Mismatch (0.00s)
>         --- PASS: TestACL_HTTP/Token/Delete (0.07s)
>         --- PASS: TestACL_HTTP/Token/List (0.00s)
>         --- PASS: TestACL_HTTP/Token/List_by_Policy (0.00s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Accessor (0.02s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Secret (0.03s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Accessor_and_Secret (0.01s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Accessor_Dup (0.00s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Secret_as_Accessor_Dup (0.00s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Secret_Dup (0.00s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Accessor_as_Secret_Dup (0.00s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Reserved_Accessor (0.00s)
>         --- PASS: TestACL_HTTP/Token/Create_with_Reserved_Secret (0.00s)
>     writer.go:29: 2020-02-23T02:47:16.699Z [INFO]  TestACL_HTTP: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:16.699Z [INFO]  TestACL_HTTP.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:16.699Z [DEBUG] TestACL_HTTP.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.699Z [DEBUG] TestACL_HTTP.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.699Z [DEBUG] TestACL_HTTP.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:16.699Z [WARN]  TestACL_HTTP.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:16.699Z [DEBUG] TestACL_HTTP.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.699Z [DEBUG] TestACL_HTTP.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.699Z [DEBUG] TestACL_HTTP.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:16.703Z [WARN]  TestACL_HTTP.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:16.711Z [INFO]  TestACL_HTTP.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:16.711Z [INFO]  TestACL_HTTP: consul server down
>     writer.go:29: 2020-02-23T02:47:16.711Z [INFO]  TestACL_HTTP: shutdown complete
>     writer.go:29: 2020-02-23T02:47:16.711Z [INFO]  TestACL_HTTP: Stopping server: protocol=DNS address=127.0.0.1:16889 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.711Z [INFO]  TestACL_HTTP: Stopping server: protocol=DNS address=127.0.0.1:16889 network=udp
>     writer.go:29: 2020-02-23T02:47:16.712Z [INFO]  TestACL_HTTP: Stopping server: protocol=HTTP address=127.0.0.1:16890 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.712Z [INFO]  TestACL_HTTP: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:16.712Z [INFO]  TestACL_HTTP: Endpoints down
> === CONT  TestACL_Disabled_Response
> === RUN   TestACL_Bootstrap/bootstrap
> === RUN   TestACL_Bootstrap/not_again
> --- PASS: TestACL_Bootstrap (0.53s)
>     writer.go:29: 2020-02-23T02:47:16.511Z [WARN]  TestACL_Bootstrap: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:16.511Z [WARN]  TestACL_Bootstrap: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:16.511Z [DEBUG] TestACL_Bootstrap.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:16.516Z [DEBUG] TestACL_Bootstrap.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:16.681Z [INFO]  TestACL_Bootstrap.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:660c181d-f001-cc91-9f75-b8c5a0974613 Address:127.0.0.1:16900}]"
>     writer.go:29: 2020-02-23T02:47:16.682Z [INFO]  TestACL_Bootstrap.server.serf.wan: serf: EventMemberJoin: Node-660c181d-f001-cc91-9f75-b8c5a0974613.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.682Z [INFO]  TestACL_Bootstrap.server.serf.lan: serf: EventMemberJoin: Node-660c181d-f001-cc91-9f75-b8c5a0974613 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap: Started DNS server: address=127.0.0.1:16895 network=udp
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap.server.raft: entering follower state: follower="Node at 127.0.0.1:16900 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap.server: Adding LAN server: server="Node-660c181d-f001-cc91-9f75-b8c5a0974613 (Addr: tcp/127.0.0.1:16900) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap.server: Handled event for server in area: event=member-join server=Node-660c181d-f001-cc91-9f75-b8c5a0974613.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap: Started DNS server: address=127.0.0.1:16895 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap: Started HTTP server: address=127.0.0.1:16896 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.683Z [INFO]  TestACL_Bootstrap: started state syncer
>     writer.go:29: 2020-02-23T02:47:16.732Z [WARN]  TestACL_Bootstrap.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:16.732Z [INFO]  TestACL_Bootstrap.server.raft: entering candidate state: node="Node at 127.0.0.1:16900 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:16.768Z [DEBUG] TestACL_Bootstrap.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:16.768Z [DEBUG] TestACL_Bootstrap.server.raft: vote granted: from=660c181d-f001-cc91-9f75-b8c5a0974613 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:16.768Z [INFO]  TestACL_Bootstrap.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:16.768Z [INFO]  TestACL_Bootstrap.server.raft: entering leader state: leader="Node at 127.0.0.1:16900 [Leader]"
>     writer.go:29: 2020-02-23T02:47:16.768Z [INFO]  TestACL_Bootstrap.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:16.768Z [INFO]  TestACL_Bootstrap.server: New leader elected: payload=Node-660c181d-f001-cc91-9f75-b8c5a0974613
>     writer.go:29: 2020-02-23T02:47:16.770Z [ERROR] TestACL_Bootstrap.anti_entropy: failed to sync remote state: error="ACL not found"
>     writer.go:29: 2020-02-23T02:47:16.781Z [INFO]  TestACL_Bootstrap.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:16.788Z [INFO]  TestACL_Bootstrap.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:16.798Z [INFO]  TestACL_Bootstrap.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:16.798Z [INFO]  TestACL_Bootstrap.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.798Z [INFO]  TestACL_Bootstrap.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.798Z [INFO]  TestACL_Bootstrap.server.serf.lan: serf: EventMemberUpdate: Node-660c181d-f001-cc91-9f75-b8c5a0974613
>     writer.go:29: 2020-02-23T02:47:16.798Z [INFO]  TestACL_Bootstrap.server.serf.wan: serf: EventMemberUpdate: Node-660c181d-f001-cc91-9f75-b8c5a0974613.dc1
>     writer.go:29: 2020-02-23T02:47:16.799Z [INFO]  TestACL_Bootstrap.server: Handled event for server in area: event=member-update server=Node-660c181d-f001-cc91-9f75-b8c5a0974613.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:16.848Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:16.911Z [INFO]  TestACL_Bootstrap.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:16.912Z [INFO]  TestACL_Bootstrap.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:16.912Z [DEBUG] TestACL_Bootstrap.server: Skipping self join check for node since the cluster is too small: node=Node-660c181d-f001-cc91-9f75-b8c5a0974613
>     writer.go:29: 2020-02-23T02:47:16.912Z [INFO]  TestACL_Bootstrap.server: member joined, marking health alive: member=Node-660c181d-f001-cc91-9f75-b8c5a0974613
>     writer.go:29: 2020-02-23T02:47:16.955Z [DEBUG] TestACL_Bootstrap.server: Skipping self join check for node since the cluster is too small: node=Node-660c181d-f001-cc91-9f75-b8c5a0974613
>     writer.go:29: 2020-02-23T02:47:16.965Z [DEBUG] TestACL_Bootstrap.acl: dropping node from result due to ACLs: node=Node-660c181d-f001-cc91-9f75-b8c5a0974613
>     writer.go:29: 2020-02-23T02:47:16.965Z [WARN]  TestACL_Bootstrap.server.acl: failed to remove bootstrap file: error="remove /tmp/TestACL_Bootstrap-agent347905623/acl-bootstrap-reset: no such file or directory"
>     writer.go:29: 2020-02-23T02:47:16.995Z [INFO]  TestACL_Bootstrap.server.acl: ACL bootstrap completed
>     --- PASS: TestACL_Bootstrap/bootstrap (0.03s)
>     --- PASS: TestACL_Bootstrap/not_again (0.00s)
>     writer.go:29: 2020-02-23T02:47:16.995Z [INFO]  TestACL_Bootstrap: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:16.995Z [INFO]  TestACL_Bootstrap.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:16.995Z [DEBUG] TestACL_Bootstrap.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.995Z [DEBUG] TestACL_Bootstrap.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.995Z [DEBUG] TestACL_Bootstrap.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:16.995Z [WARN]  TestACL_Bootstrap.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:16.995Z [DEBUG] TestACL_Bootstrap.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:16.995Z [DEBUG] TestACL_Bootstrap.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.995Z [DEBUG] TestACL_Bootstrap.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:17.015Z [WARN]  TestACL_Bootstrap.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: consul server down
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: shutdown complete
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: Stopping server: protocol=DNS address=127.0.0.1:16895 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: Stopping server: protocol=DNS address=127.0.0.1:16895 network=udp
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: Stopping server: protocol=HTTP address=127.0.0.1:16896 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:17.035Z [INFO]  TestACL_Bootstrap: Endpoints down
> === CONT  TestACLReplicationStatus
> === RUN   TestACL_Disabled_Response/ACLBootstrap
> === RUN   TestACL_Disabled_Response/ACLReplicationStatus
> === RUN   TestACL_Disabled_Response/AgentToken
> === RUN   TestACL_Disabled_Response/ACLRulesTranslate
> === RUN   TestACL_Disabled_Response/ACLRulesTranslateLegacyToken
> === RUN   TestACL_Disabled_Response/ACLPolicyList
> === RUN   TestACL_Disabled_Response/ACLPolicyCRUD
> === RUN   TestACL_Disabled_Response/ACLPolicyCreate
> === RUN   TestACL_Disabled_Response/ACLTokenList
> === RUN   TestACL_Disabled_Response/ACLTokenCreate
> === RUN   TestACL_Disabled_Response/ACLTokenSelf
> === RUN   TestACL_Disabled_Response/ACLTokenCRUD
> === RUN   TestACL_Disabled_Response/ACLRoleList
> === RUN   TestACL_Disabled_Response/ACLRoleCreate
> === RUN   TestACL_Disabled_Response/ACLRoleCRUD
> === RUN   TestACL_Disabled_Response/ACLBindingRuleList
> === RUN   TestACL_Disabled_Response/ACLBindingRuleCreate
> === RUN   TestACL_Disabled_Response/ACLBindingRuleCRUD
> === RUN   TestACL_Disabled_Response/ACLAuthMethodList
> === RUN   TestACL_Disabled_Response/ACLAuthMethodCreate
> === RUN   TestACL_Disabled_Response/ACLAuthMethodCRUD
> === RUN   TestACL_Disabled_Response/ACLLogin
> === RUN   TestACL_Disabled_Response/ACLLogout
> === RUN   TestACL_Disabled_Response/ACLAuthorize
> --- PASS: TestACL_Disabled_Response (0.44s)
>     writer.go:29: 2020-02-23T02:47:16.719Z [WARN]  TestACL_Disabled_Response: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:16.719Z [DEBUG] TestACL_Disabled_Response.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:16.720Z [DEBUG] TestACL_Disabled_Response.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:16.761Z [INFO]  TestACL_Disabled_Response.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:91bf180e-0e48-d8d7-c28f-5cb928ad7a18 Address:127.0.0.1:16906}]"
>     writer.go:29: 2020-02-23T02:47:16.762Z [INFO]  TestACL_Disabled_Response.server.serf.wan: serf: EventMemberJoin: Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.762Z [INFO]  TestACL_Disabled_Response.server.serf.lan: serf: EventMemberJoin: Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response: Started DNS server: address=127.0.0.1:16901 network=udp
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response.server.raft: entering follower state: follower="Node at 127.0.0.1:16906 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response.server: Adding LAN server: server="Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18 (Addr: tcp/127.0.0.1:16906) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response.server: Handled event for server in area: event=member-join server=Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response: Started DNS server: address=127.0.0.1:16901 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response: Started HTTP server: address=127.0.0.1:16902 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.763Z [INFO]  TestACL_Disabled_Response: started state syncer
>     writer.go:29: 2020-02-23T02:47:16.830Z [WARN]  TestACL_Disabled_Response.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:16.830Z [INFO]  TestACL_Disabled_Response.server.raft: entering candidate state: node="Node at 127.0.0.1:16906 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:16.878Z [DEBUG] TestACL_Disabled_Response.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:16.878Z [DEBUG] TestACL_Disabled_Response.server.raft: vote granted: from=91bf180e-0e48-d8d7-c28f-5cb928ad7a18 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:16.878Z [INFO]  TestACL_Disabled_Response.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:16.878Z [INFO]  TestACL_Disabled_Response.server.raft: entering leader state: leader="Node at 127.0.0.1:16906 [Leader]"
>     writer.go:29: 2020-02-23T02:47:16.878Z [INFO]  TestACL_Disabled_Response.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:16.878Z [INFO]  TestACL_Disabled_Response.server: New leader elected: payload=Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18
>     writer.go:29: 2020-02-23T02:47:16.991Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:17.065Z [INFO]  TestACL_Disabled_Response.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:17.065Z [INFO]  TestACL_Disabled_Response.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:17.065Z [DEBUG] TestACL_Disabled_Response.server: Skipping self join check for node since the cluster is too small: node=Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18
>     writer.go:29: 2020-02-23T02:47:17.065Z [INFO]  TestACL_Disabled_Response.server: member joined, marking health alive: member=Node-91bf180e-0e48-d8d7-c28f-5cb928ad7a18
>     --- PASS: TestACL_Disabled_Response/ACLBootstrap (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLReplicationStatus (0.00s)
>     --- PASS: TestACL_Disabled_Response/AgentToken (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLRulesTranslate (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLRulesTranslateLegacyToken (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLPolicyList (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLPolicyCRUD (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLPolicyCreate (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLTokenList (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLTokenCreate (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLTokenSelf (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLTokenCRUD (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLRoleList (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLRoleCreate (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLRoleCRUD (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLBindingRuleList (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLBindingRuleCreate (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLBindingRuleCRUD (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLAuthMethodList (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLAuthMethodCreate (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLAuthMethodCRUD (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLLogin (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLLogout (0.00s)
>     --- PASS: TestACL_Disabled_Response/ACLAuthorize (0.00s)
>     writer.go:29: 2020-02-23T02:47:17.106Z [INFO]  TestACL_Disabled_Response: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:17.106Z [INFO]  TestACL_Disabled_Response.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:17.106Z [DEBUG] TestACL_Disabled_Response.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:17.106Z [WARN]  TestACL_Disabled_Response.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:17.106Z [ERROR] TestACL_Disabled_Response.anti_entropy: failed to sync remote state: error="No cluster leader"
>     writer.go:29: 2020-02-23T02:47:17.107Z [DEBUG] TestACL_Disabled_Response.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:17.125Z [WARN]  TestACL_Disabled_Response.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:17.151Z [INFO]  TestACL_Disabled_Response.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:17.151Z [INFO]  TestACL_Disabled_Response: consul server down
>     writer.go:29: 2020-02-23T02:47:17.152Z [INFO]  TestACL_Disabled_Response: shutdown complete
>     writer.go:29: 2020-02-23T02:47:17.152Z [INFO]  TestACL_Disabled_Response: Stopping server: protocol=DNS address=127.0.0.1:16901 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.152Z [INFO]  TestACL_Disabled_Response: Stopping server: protocol=DNS address=127.0.0.1:16901 network=udp
>     writer.go:29: 2020-02-23T02:47:17.152Z [INFO]  TestACL_Disabled_Response: Stopping server: protocol=HTTP address=127.0.0.1:16902 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.152Z [INFO]  TestACL_Disabled_Response: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:17.152Z [INFO]  TestACL_Disabled_Response: Endpoints down
> === CONT  TestACL_Legacy_Get
> === RUN   TestACL_Legacy_Get/wrong_id
> --- PASS: TestAgent_ForceLeavePrune (14.03s)
>     writer.go:29: 2020-02-23T02:47:03.149Z [WARN]  TestAgent_ForceLeavePrune-a1: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:03.149Z [DEBUG] TestAgent_ForceLeavePrune-a1.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:03.149Z [DEBUG] TestAgent_ForceLeavePrune-a1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:03.158Z [INFO]  TestAgent_ForceLeavePrune-a1.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:16de1ac8-0766-9599-9d59-476584536fc5 Address:127.0.0.1:16690}]"
>     writer.go:29: 2020-02-23T02:47:03.159Z [INFO]  TestAgent_ForceLeavePrune-a1.server.raft: entering follower state: follower="Node at 127.0.0.1:16690 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:03.159Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.wan: serf: EventMemberJoin: Node-16de1ac8-0766-9599-9d59-476584536fc5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.lan: serf: EventMemberJoin: Node-16de1ac8-0766-9599-9d59-476584536fc5 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Handled event for server in area: event=member-join server=Node-16de1ac8-0766-9599-9d59-476584536fc5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Adding LAN server: server="Node-16de1ac8-0766-9599-9d59-476584536fc5 (Addr: tcp/127.0.0.1:16690) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.160Z [INFO]  TestAgent_ForceLeavePrune-a1: Started DNS server: address=127.0.0.1:16685 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.160Z [INFO]  TestAgent_ForceLeavePrune-a1: Started DNS server: address=127.0.0.1:16685 network=udp
>     writer.go:29: 2020-02-23T02:47:03.161Z [INFO]  TestAgent_ForceLeavePrune-a1: Started HTTP server: address=127.0.0.1:16686 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.161Z [INFO]  TestAgent_ForceLeavePrune-a1: started state syncer
>     writer.go:29: 2020-02-23T02:47:03.206Z [WARN]  TestAgent_ForceLeavePrune-a1.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:03.206Z [INFO]  TestAgent_ForceLeavePrune-a1.server.raft: entering candidate state: node="Node at 127.0.0.1:16690 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:03.210Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:03.210Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.raft: vote granted: from=16de1ac8-0766-9599-9d59-476584536fc5 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:03.210Z [INFO]  TestAgent_ForceLeavePrune-a1.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:03.210Z [INFO]  TestAgent_ForceLeavePrune-a1.server.raft: entering leader state: leader="Node at 127.0.0.1:16690 [Leader]"
>     writer.go:29: 2020-02-23T02:47:03.210Z [INFO]  TestAgent_ForceLeavePrune-a1.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:03.210Z [INFO]  TestAgent_ForceLeavePrune-a1.server: New leader elected: payload=Node-16de1ac8-0766-9599-9d59-476584536fc5
>     writer.go:29: 2020-02-23T02:47:03.283Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.293Z [INFO]  TestAgent_ForceLeavePrune-a1.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.293Z [INFO]  TestAgent_ForceLeavePrune-a1.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.293Z [DEBUG] TestAgent_ForceLeavePrune-a1.server: Skipping self join check for node since the cluster is too small: node=Node-16de1ac8-0766-9599-9d59-476584536fc5
>     writer.go:29: 2020-02-23T02:47:03.293Z [INFO]  TestAgent_ForceLeavePrune-a1.server: member joined, marking health alive: member=Node-16de1ac8-0766-9599-9d59-476584536fc5
>     writer.go:29: 2020-02-23T02:47:03.325Z [DEBUG] TestAgent_ForceLeavePrune-a1: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:03.328Z [INFO]  TestAgent_ForceLeavePrune-a1: Synced node info
>     writer.go:29: 2020-02-23T02:47:03.331Z [WARN]  TestAgent_ForceLeavePrune-a2: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:03.331Z [DEBUG] TestAgent_ForceLeavePrune-a2.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:03.331Z [DEBUG] TestAgent_ForceLeavePrune-a2.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:03.341Z [INFO]  TestAgent_ForceLeavePrune-a2.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:78f9793b-0653-b36f-b947-c5cc1420459d Address:127.0.0.1:16696}]"
>     writer.go:29: 2020-02-23T02:47:03.341Z [INFO]  TestAgent_ForceLeavePrune-a2.server.raft: entering follower state: follower="Node at 127.0.0.1:16696 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:03.342Z [INFO]  TestAgent_ForceLeavePrune-a2.server.serf.wan: serf: EventMemberJoin: Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.342Z [INFO]  TestAgent_ForceLeavePrune-a2.server.serf.lan: serf: EventMemberJoin: Node-78f9793b-0653-b36f-b947-c5cc1420459d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.343Z [INFO]  TestAgent_ForceLeavePrune-a2.server: Handled event for server in area: event=member-join server=Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.343Z [INFO]  TestAgent_ForceLeavePrune-a2.server: Adding LAN server: server="Node-78f9793b-0653-b36f-b947-c5cc1420459d (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.343Z [INFO]  TestAgent_ForceLeavePrune-a2: Started DNS server: address=127.0.0.1:16691 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.343Z [INFO]  TestAgent_ForceLeavePrune-a2: Started DNS server: address=127.0.0.1:16691 network=udp
>     writer.go:29: 2020-02-23T02:47:03.343Z [INFO]  TestAgent_ForceLeavePrune-a2: Started HTTP server: address=127.0.0.1:16692 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.343Z [INFO]  TestAgent_ForceLeavePrune-a2: started state syncer
>     writer.go:29: 2020-02-23T02:47:03.386Z [WARN]  TestAgent_ForceLeavePrune-a2.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:03.386Z [INFO]  TestAgent_ForceLeavePrune-a2.server.raft: entering candidate state: node="Node at 127.0.0.1:16696 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:03.459Z [DEBUG] TestAgent_ForceLeavePrune-a2.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:03.459Z [DEBUG] TestAgent_ForceLeavePrune-a2.server.raft: vote granted: from=78f9793b-0653-b36f-b947-c5cc1420459d term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:03.459Z [INFO]  TestAgent_ForceLeavePrune-a2.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:03.459Z [INFO]  TestAgent_ForceLeavePrune-a2.server.raft: entering leader state: leader="Node at 127.0.0.1:16696 [Leader]"
>     writer.go:29: 2020-02-23T02:47:03.459Z [INFO]  TestAgent_ForceLeavePrune-a2.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:03.460Z [INFO]  TestAgent_ForceLeavePrune-a2.server: New leader elected: payload=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:03.485Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:03.495Z [INFO]  TestAgent_ForceLeavePrune-a2.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:03.495Z [INFO]  TestAgent_ForceLeavePrune-a2.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.495Z [DEBUG] TestAgent_ForceLeavePrune-a2.server: Skipping self join check for node since the cluster is too small: node=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:03.495Z [INFO]  TestAgent_ForceLeavePrune-a2.server: member joined, marking health alive: member=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:03.520Z [DEBUG] TestAgent_ForceLeavePrune-a2: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:03.524Z [INFO]  TestAgent_ForceLeavePrune-a2: Synced node info
>     writer.go:29: 2020-02-23T02:47:03.524Z [DEBUG] TestAgent_ForceLeavePrune-a2: Node info in sync
>     writer.go:29: 2020-02-23T02:47:03.738Z [INFO]  TestAgent_ForceLeavePrune-a1: (LAN) joining: lan_addresses=[127.0.0.1:16694]
>     writer.go:29: 2020-02-23T02:47:03.738Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Initiating push/pull sync with: 127.0.0.1:16694
>     writer.go:29: 2020-02-23T02:47:03.738Z [DEBUG] TestAgent_ForceLeavePrune-a2.server.memberlist.lan: memberlist: Stream connection from=127.0.0.1:34080
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a2.server.serf.lan: serf: EventMemberJoin: Node-16de1ac8-0766-9599-9d59-476584536fc5 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a2.server: Adding LAN server: server="Node-16de1ac8-0766-9599-9d59-476584536fc5 (Addr: tcp/127.0.0.1:16690) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a2.server: New leader elected: payload=Node-16de1ac8-0766-9599-9d59-476584536fc5
>     writer.go:29: 2020-02-23T02:47:03.739Z [ERROR] TestAgent_ForceLeavePrune-a2.server: Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.: node_to_add=Node-16de1ac8-0766-9599-9d59-476584536fc5 other=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a2.server: member joined, marking health alive: member=Node-16de1ac8-0766-9599-9d59-476584536fc5
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.lan: serf: EventMemberJoin: Node-78f9793b-0653-b36f-b947-c5cc1420459d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a1: (LAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:47:03.739Z [DEBUG] TestAgent_ForceLeavePrune-a1: systemd notify failed: error="No socket"
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a2: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:03.739Z [INFO]  TestAgent_ForceLeavePrune-a2.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:03.739Z [DEBUG] TestAgent_ForceLeavePrune-a2.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.739Z [WARN]  TestAgent_ForceLeavePrune-a2.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.740Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Adding LAN server: server="Node-78f9793b-0653-b36f-b947-c5cc1420459d (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:03.740Z [ERROR] TestAgent_ForceLeavePrune-a1.server: Two nodes are in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.: node_to_add=Node-78f9793b-0653-b36f-b947-c5cc1420459d other=Node-16de1ac8-0766-9599-9d59-476584536fc5
>     writer.go:29: 2020-02-23T02:47:03.740Z [INFO]  TestAgent_ForceLeavePrune-a1.server: member joined, marking health alive: member=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:03.740Z [DEBUG] TestAgent_ForceLeavePrune-a2.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:03.740Z [DEBUG] TestAgent_ForceLeavePrune-a2.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16689
>     writer.go:29: 2020-02-23T02:47:03.740Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:37712
>     writer.go:29: 2020-02-23T02:47:03.740Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16695
>     writer.go:29: 2020-02-23T02:47:03.740Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.wan: serf: EventMemberJoin: Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.740Z [DEBUG] TestAgent_ForceLeavePrune-a2.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:55496
>     writer.go:29: 2020-02-23T02:47:03.740Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Handled event for server in area: event=member-join server=Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.741Z [INFO]  TestAgent_ForceLeavePrune-a2.server.serf.wan: serf: EventMemberJoin: Node-16de1ac8-0766-9599-9d59-476584536fc5.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:03.741Z [DEBUG] TestAgent_ForceLeavePrune-a1.server: Successfully performed flood-join for server at address: server=Node-78f9793b-0653-b36f-b947-c5cc1420459d address=127.0.0.1:16695
>     writer.go:29: 2020-02-23T02:47:03.741Z [DEBUG] TestAgent_ForceLeavePrune-a2.server: Successfully performed flood-join for server at address: server=Node-16de1ac8-0766-9599-9d59-476584536fc5 address=127.0.0.1:16689
>     writer.go:29: 2020-02-23T02:47:03.741Z [INFO]  TestAgent_ForceLeavePrune-a2.server: Handled event for server in area: event=member-join server=Node-16de1ac8-0766-9599-9d59-476584536fc5.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:03.766Z [WARN]  TestAgent_ForceLeavePrune-a2.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: consul server down
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: shutdown complete
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: Stopping server: protocol=DNS address=127.0.0.1:16691 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: Stopping server: protocol=DNS address=127.0.0.1:16691 network=udp
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: Stopping server: protocol=HTTP address=127.0.0.1:16692 network=tcp
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:03.804Z [INFO]  TestAgent_ForceLeavePrune-a2: Endpoints down
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeavePrune-a1: Skipping remote check since it is managed automatically: check=serfHealth
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeavePrune-a1: Node info in sync
>     writer.go:29: 2020-02-23T02:47:03.881Z [DEBUG] TestAgent_ForceLeavePrune-a1: Node info in sync
>     writer.go:29: 2020-02-23T02:47:04.673Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Failed ping: Node-78f9793b-0653-b36f-b947-c5cc1420459d (timeout reached)
>     writer.go:29: 2020-02-23T02:47:05.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Suspect Node-78f9793b-0653-b36f-b947-c5cc1420459d has failed, no acks received
>     writer.go:29: 2020-02-23T02:47:05.227Z [DEBUG] TestAgent_ForceLeavePrune-a1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.227Z [DEBUG] TestAgent_ForceLeavePrune-a1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:05.227Z [WARN]  TestAgent_ForceLeavePrune-a1: error getting server health from server: server=Node-78f9793b-0653-b36f-b947-c5cc1420459d error="rpc error getting client: failed to get conn: dial tcp 127.0.0.1:0->127.0.0.1:16696: connect: connection refused"
>     writer.go:29: 2020-02-23T02:47:06.227Z [WARN]  TestAgent_ForceLeavePrune-a1: error getting server health from server: server=Node-78f9793b-0653-b36f-b947-c5cc1420459d error="context deadline exceeded"
>     writer.go:29: 2020-02-23T02:47:06.660Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Failed ping: Node-78f9793b-0653-b36f-b947-c5cc1420459d (timeout reached)
>     writer.go:29: 2020-02-23T02:47:07.227Z [DEBUG] TestAgent_ForceLeavePrune-a1.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:07.227Z [WARN]  TestAgent_ForceLeavePrune-a1: error getting server health from server: server=Node-78f9793b-0653-b36f-b947-c5cc1420459d error="rpc error getting client: failed to get conn: dial tcp 127.0.0.1:0->127.0.0.1:16696: connect: connection refused"
>     writer.go:29: 2020-02-23T02:47:08.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Suspect Node-78f9793b-0653-b36f-b947-c5cc1420459d has failed, no acks received
>     writer.go:29: 2020-02-23T02:47:08.227Z [WARN]  TestAgent_ForceLeavePrune-a1: error getting server health from server: server=Node-78f9793b-0653-b36f-b947-c5cc1420459d error="context deadline exceeded"
>     writer.go:29: 2020-02-23T02:47:09.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Marking Node-78f9793b-0653-b36f-b947-c5cc1420459d as failed, suspect timeout reached (0 peer confirmations)
>     writer.go:29: 2020-02-23T02:47:09.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.lan: serf: EventMemberFailed: Node-78f9793b-0653-b36f-b947-c5cc1420459d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:09.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Removing LAN server: server="Node-78f9793b-0653-b36f-b947-c5cc1420459d (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:09.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server: member failed, marking health critical: member=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:09.161Z [INFO]  TestAgent_ForceLeavePrune-a1: Force leaving node: node=Node-78f9793b-0653-b36f-b947-c5cc1420459d
>     writer.go:29: 2020-02-23T02:47:09.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.lan: serf: EventMemberLeave (forced): Node-78f9793b-0653-b36f-b947-c5cc1420459d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:09.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.lan: serf: EventMemberReap (forced): Node-78f9793b-0653-b36f-b947-c5cc1420459d 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:09.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Removing LAN server: server="Node-78f9793b-0653-b36f-b947-c5cc1420459d (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:09.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Removing LAN server: server="Node-78f9793b-0653-b36f-b947-c5cc1420459d (Addr: tcp/127.0.0.1:16696) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:09.660Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Failed ping: Node-78f9793b-0653-b36f-b947-c5cc1420459d (timeout reached)
>     writer.go:29: 2020-02-23T02:47:09.703Z [INFO]  TestAgent_ForceLeavePrune-a1.server: deregistering member: member=Node-78f9793b-0653-b36f-b947-c5cc1420459d reason=left
>     writer.go:29: 2020-02-23T02:47:11.160Z [DEBUG] TestAgent_ForceLeavePrune-a1.server.memberlist.wan: memberlist: Failed ping: Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 (timeout reached)
>     writer.go:29: 2020-02-23T02:47:12.160Z [INFO]  TestAgent_ForceLeavePrune-a1.server.memberlist.lan: memberlist: Suspect Node-78f9793b-0653-b36f-b947-c5cc1420459d has failed, no acks received
>     writer.go:29: 2020-02-23T02:47:13.159Z [INFO]  TestAgent_ForceLeavePrune-a1.server.memberlist.wan: memberlist: Suspect Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 has failed, no acks received
>     writer.go:29: 2020-02-23T02:47:17.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server.serf.wan: serf: EventMemberReap (forced): Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:17.161Z [INFO]  TestAgent_ForceLeavePrune-a1: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:17.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:17.161Z [DEBUG] TestAgent_ForceLeavePrune-a1.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:17.161Z [WARN]  TestAgent_ForceLeavePrune-a1.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:17.161Z [INFO]  TestAgent_ForceLeavePrune-a1.server: Handled event for server in area: event=member-reap server=Node-78f9793b-0653-b36f-b947-c5cc1420459d.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:17.161Z [DEBUG] TestAgent_ForceLeavePrune-a1.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:17.166Z [WARN]  TestAgent_ForceLeavePrune-a1.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: consul server down
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: shutdown complete
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: Stopping server: protocol=DNS address=127.0.0.1:16685 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: Stopping server: protocol=DNS address=127.0.0.1:16685 network=udp
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: Stopping server: protocol=HTTP address=127.0.0.1:16686 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:17.175Z [INFO]  TestAgent_ForceLeavePrune-a1: Endpoints down
> === CONT  TestACL_Legacy_Clone
> --- PASS: TestACLReplicationStatus (2.13s)
>     writer.go:29: 2020-02-23T02:47:17.049Z [WARN]  TestACLReplicationStatus: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:17.049Z [WARN]  TestACLReplicationStatus: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:17.049Z [DEBUG] TestACLReplicationStatus.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:17.050Z [DEBUG] TestACLReplicationStatus.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:17.161Z [INFO]  TestACLReplicationStatus.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:844eae04-dcdb-7517-5996-afe1e525f733 Address:127.0.0.1:16918}]"
>     writer.go:29: 2020-02-23T02:47:17.162Z [INFO]  TestACLReplicationStatus.server.serf.wan: serf: EventMemberJoin: Node-844eae04-dcdb-7517-5996-afe1e525f733.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:17.162Z [INFO]  TestACLReplicationStatus.server.serf.lan: serf: EventMemberJoin: Node-844eae04-dcdb-7517-5996-afe1e525f733 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus: Started DNS server: address=127.0.0.1:16913 network=udp
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus.server.raft: entering follower state: follower="Node at 127.0.0.1:16918 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus.server: Adding LAN server: server="Node-844eae04-dcdb-7517-5996-afe1e525f733 (Addr: tcp/127.0.0.1:16918) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus.server: Handled event for server in area: event=member-join server=Node-844eae04-dcdb-7517-5996-afe1e525f733.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus: Started DNS server: address=127.0.0.1:16913 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus: Started HTTP server: address=127.0.0.1:16914 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.163Z [INFO]  TestACLReplicationStatus: started state syncer
>     writer.go:29: 2020-02-23T02:47:17.229Z [WARN]  TestACLReplicationStatus.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:17.229Z [INFO]  TestACLReplicationStatus.server.raft: entering candidate state: node="Node at 127.0.0.1:16918 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:17.616Z [DEBUG] TestACLReplicationStatus.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:17.616Z [DEBUG] TestACLReplicationStatus.server.raft: vote granted: from=844eae04-dcdb-7517-5996-afe1e525f733 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:17.616Z [INFO]  TestACLReplicationStatus.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:17.616Z [INFO]  TestACLReplicationStatus.server.raft: entering leader state: leader="Node at 127.0.0.1:16918 [Leader]"
>     writer.go:29: 2020-02-23T02:47:17.616Z [INFO]  TestACLReplicationStatus.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:17.617Z [INFO]  TestACLReplicationStatus.server: New leader elected: payload=Node-844eae04-dcdb-7517-5996-afe1e525f733
>     writer.go:29: 2020-02-23T02:47:17.632Z [ERROR] TestACLReplicationStatus.anti_entropy: failed to sync remote state: error="ACL not found"
>     writer.go:29: 2020-02-23T02:47:17.721Z [INFO]  TestACLReplicationStatus.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:17.742Z [INFO]  TestACLReplicationStatus.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:17.742Z [WARN]  TestACLReplicationStatus.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:17.778Z [INFO]  TestACLReplicationStatus.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:17.841Z [INFO]  TestACLReplicationStatus.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:17.842Z [INFO]  TestACLReplicationStatus.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:17.842Z [INFO]  TestACLReplicationStatus.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:17.842Z [INFO]  TestACLReplicationStatus.server.serf.lan: serf: EventMemberUpdate: Node-844eae04-dcdb-7517-5996-afe1e525f733
>     writer.go:29: 2020-02-23T02:47:17.842Z [INFO]  TestACLReplicationStatus.server.serf.wan: serf: EventMemberUpdate: Node-844eae04-dcdb-7517-5996-afe1e525f733.dc1
>     writer.go:29: 2020-02-23T02:47:17.842Z [INFO]  TestACLReplicationStatus.server: Handled event for server in area: event=member-update server=Node-844eae04-dcdb-7517-5996-afe1e525f733.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:18.030Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:19.001Z [INFO]  TestACLReplicationStatus.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:19.001Z [INFO]  TestACLReplicationStatus.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:19.002Z [DEBUG] TestACLReplicationStatus.server: Skipping self join check for node since the cluster is too small: node=Node-844eae04-dcdb-7517-5996-afe1e525f733
>     writer.go:29: 2020-02-23T02:47:19.002Z [INFO]  TestACLReplicationStatus.server: member joined, marking health alive: member=Node-844eae04-dcdb-7517-5996-afe1e525f733
>     writer.go:29: 2020-02-23T02:47:19.055Z [DEBUG] TestACLReplicationStatus.server: Skipping self join check for node since the cluster is too small: node=Node-844eae04-dcdb-7517-5996-afe1e525f733
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.acl: dropping node from result due to ACLs: node=Node-844eae04-dcdb-7517-5996-afe1e525f733
>     writer.go:29: 2020-02-23T02:47:19.075Z [INFO]  TestACLReplicationStatus: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:19.075Z [INFO]  TestACLReplicationStatus.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:19.075Z [WARN]  TestACLReplicationStatus.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:19.075Z [DEBUG] TestACLReplicationStatus.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:19.115Z [WARN]  TestACLReplicationStatus.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: consul server down
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: shutdown complete
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: Stopping server: protocol=DNS address=127.0.0.1:16913 network=tcp
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: Stopping server: protocol=DNS address=127.0.0.1:16913 network=udp
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: Stopping server: protocol=HTTP address=127.0.0.1:16914 network=tcp
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:19.162Z [INFO]  TestACLReplicationStatus: Endpoints down
> === CONT  TestACL_Legacy_Destroy
> --- PASS: TestACL_Legacy_Clone (2.36s)
>     writer.go:29: 2020-02-23T02:47:17.183Z [WARN]  TestACL_Legacy_Clone: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:17.183Z [WARN]  TestACL_Legacy_Clone: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:17.183Z [DEBUG] TestACL_Legacy_Clone.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:17.183Z [DEBUG] TestACL_Legacy_Clone.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:17.396Z [INFO]  TestACL_Legacy_Clone.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:5010476d-1a7d-2774-efca-78efe7cebf63 Address:127.0.0.1:16930}]"
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone.server.serf.wan: serf: EventMemberJoin: Node-5010476d-1a7d-2774-efca-78efe7cebf63.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone.server.serf.lan: serf: EventMemberJoin: Node-5010476d-1a7d-2774-efca-78efe7cebf63 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone: Started DNS server: address=127.0.0.1:16925 network=udp
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone.server.raft: entering follower state: follower="Node at 127.0.0.1:16930 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone.server: Adding LAN server: server="Node-5010476d-1a7d-2774-efca-78efe7cebf63 (Addr: tcp/127.0.0.1:16930) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone.server: Handled event for server in area: event=member-join server=Node-5010476d-1a7d-2774-efca-78efe7cebf63.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:17.397Z [INFO]  TestACL_Legacy_Clone: Started DNS server: address=127.0.0.1:16925 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.398Z [INFO]  TestACL_Legacy_Clone: Started HTTP server: address=127.0.0.1:16926 network=tcp
>     writer.go:29: 2020-02-23T02:47:17.398Z [INFO]  TestACL_Legacy_Clone: started state syncer
>     writer.go:29: 2020-02-23T02:47:17.467Z [WARN]  TestACL_Legacy_Clone.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:17.467Z [INFO]  TestACL_Legacy_Clone.server.raft: entering candidate state: node="Node at 127.0.0.1:16930 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:17.725Z [DEBUG] TestACL_Legacy_Clone.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:17.725Z [DEBUG] TestACL_Legacy_Clone.server.raft: vote granted: from=5010476d-1a7d-2774-efca-78efe7cebf63 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:17.725Z [INFO]  TestACL_Legacy_Clone.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:17.725Z [INFO]  TestACL_Legacy_Clone.server.raft: entering leader state: leader="Node at 127.0.0.1:16930 [Leader]"
>     writer.go:29: 2020-02-23T02:47:17.725Z [INFO]  TestACL_Legacy_Clone.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:17.725Z [INFO]  TestACL_Legacy_Clone.server: New leader elected: payload=Node-5010476d-1a7d-2774-efca-78efe7cebf63
>     writer.go:29: 2020-02-23T02:47:17.748Z [INFO]  TestACL_Legacy_Clone.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:17.798Z [INFO]  TestACL_Legacy_Clone.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:17.798Z [WARN]  TestACL_Legacy_Clone.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:17.798Z [INFO]  TestACL_Legacy_Clone.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:17.799Z [WARN]  TestACL_Legacy_Clone.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:17.892Z [INFO]  TestACL_Legacy_Clone.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:17.892Z [INFO]  TestACL_Legacy_Clone.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:17.955Z [INFO]  TestACL_Legacy_Clone.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:17.955Z [INFO]  TestACL_Legacy_Clone.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:17.955Z [INFO]  TestACL_Legacy_Clone.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:17.955Z [INFO]  TestACL_Legacy_Clone.server.serf.lan: serf: EventMemberUpdate: Node-5010476d-1a7d-2774-efca-78efe7cebf63
>     writer.go:29: 2020-02-23T02:47:17.955Z [INFO]  TestACL_Legacy_Clone.server.serf.wan: serf: EventMemberUpdate: Node-5010476d-1a7d-2774-efca-78efe7cebf63.dc1
>     writer.go:29: 2020-02-23T02:47:17.955Z [INFO]  TestACL_Legacy_Clone.server: Handled event for server in area: event=member-update server=Node-5010476d-1a7d-2774-efca-78efe7cebf63.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:18.137Z [INFO]  TestACL_Legacy_Clone.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:18.137Z [DEBUG] TestACL_Legacy_Clone.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:18.137Z [INFO]  TestACL_Legacy_Clone.server.serf.lan: serf: EventMemberUpdate: Node-5010476d-1a7d-2774-efca-78efe7cebf63
>     writer.go:29: 2020-02-23T02:47:18.137Z [INFO]  TestACL_Legacy_Clone.server.serf.wan: serf: EventMemberUpdate: Node-5010476d-1a7d-2774-efca-78efe7cebf63.dc1
>     writer.go:29: 2020-02-23T02:47:18.137Z [INFO]  TestACL_Legacy_Clone.server: Handled event for server in area: event=member-update server=Node-5010476d-1a7d-2774-efca-78efe7cebf63.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:18.268Z [INFO]  TestACL_Legacy_Clone: Synced node info
>     writer.go:29: 2020-02-23T02:47:18.272Z [DEBUG] TestACL_Legacy_Clone.acl: dropping node from result due to ACLs: node=Node-5010476d-1a7d-2774-efca-78efe7cebf63
>     writer.go:29: 2020-02-23T02:47:19.108Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:19.111Z [INFO]  TestACL_Legacy_Clone: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:19.111Z [INFO]  TestACL_Legacy_Clone.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:19.111Z [DEBUG] TestACL_Legacy_Clone.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:19.111Z [DEBUG] TestACL_Legacy_Clone.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:19.111Z [WARN]  TestACL_Legacy_Clone.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:19.111Z [DEBUG] TestACL_Legacy_Clone.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:19.111Z [DEBUG] TestACL_Legacy_Clone.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:19.162Z [WARN]  TestACL_Legacy_Clone.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:19.185Z [INFO]  TestACL_Legacy_Clone.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:19.531Z [ERROR] TestACL_Legacy_Clone.server: failed to establish leadership: error="error generating CA root certificate: leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:47:19.531Z [INFO]  TestACL_Legacy_Clone: consul server down
>     writer.go:29: 2020-02-23T02:47:19.532Z [INFO]  TestACL_Legacy_Clone: shutdown complete
>     writer.go:29: 2020-02-23T02:47:19.532Z [INFO]  TestACL_Legacy_Clone: Stopping server: protocol=DNS address=127.0.0.1:16925 network=tcp
>     writer.go:29: 2020-02-23T02:47:19.532Z [INFO]  TestACL_Legacy_Clone: Stopping server: protocol=DNS address=127.0.0.1:16925 network=udp
>     writer.go:29: 2020-02-23T02:47:19.532Z [INFO]  TestACL_Legacy_Clone: Stopping server: protocol=HTTP address=127.0.0.1:16926 network=tcp
>     writer.go:29: 2020-02-23T02:47:19.532Z [INFO]  TestACL_Legacy_Clone: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:19.532Z [INFO]  TestACL_Legacy_Clone: Endpoints down
> === CONT  TestACL_Legacy_UpdateUpsert
> === RUN   TestACL_Legacy_Get/right_id
> === RUN   TestACL_Authorize/master-token
> === RUN   TestACL_Authorize/master-token/dc1
> === RUN   TestACL_Authorize/master-token/dc2
> === RUN   TestACL_Authorize/custom-token
> === RUN   TestACL_Authorize/custom-token/dc1
> === RUN   TestACL_Authorize/custom-token/dc2
> === RUN   TestACL_Authorize/too-many-requests
> === RUN   TestACL_Authorize/decode-failure
> === RUN   TestACL_Authorize/acl-not-found
> === RUN   TestACL_Authorize/local-token-in-secondary-dc
> === RUN   TestACL_Authorize/local-token-wrong-dc
> --- PASS: TestACL_Authorize (6.64s)
>     writer.go:29: 2020-02-23T02:47:14.738Z [WARN]  TestACL_Authorize: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:14.738Z [DEBUG] TestACL_Authorize.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:14.738Z [DEBUG] TestACL_Authorize.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestACL_Authorize.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:136c3324-9cc4-28c2-fcd3-81f9a984282f Address:127.0.0.1:16888}]"
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestACL_Authorize.server.raft: entering follower state: follower="Node at 127.0.0.1:16888 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:14.866Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberJoin: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize.server.serf.lan: serf: EventMemberJoin: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize: Started DNS server: address=127.0.0.1:16883 network=udp
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize.server: Adding LAN server: server="Node-136c3324-9cc4-28c2-fcd3-81f9a984282f (Addr: tcp/127.0.0.1:16888) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-join server=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize: Started DNS server: address=127.0.0.1:16883 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize: Started HTTP server: address=127.0.0.1:16884 network=tcp
>     writer.go:29: 2020-02-23T02:47:14.875Z [INFO]  TestACL_Authorize: started state syncer
>     writer.go:29: 2020-02-23T02:47:14.918Z [WARN]  TestACL_Authorize.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:14.918Z [INFO]  TestACL_Authorize.server.raft: entering candidate state: node="Node at 127.0.0.1:16888 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:15.018Z [DEBUG] TestACL_Authorize.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:15.018Z [DEBUG] TestACL_Authorize.server.raft: vote granted: from=136c3324-9cc4-28c2-fcd3-81f9a984282f term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:15.018Z [INFO]  TestACL_Authorize.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:15.018Z [INFO]  TestACL_Authorize.server.raft: entering leader state: leader="Node at 127.0.0.1:16888 [Leader]"
>     writer.go:29: 2020-02-23T02:47:15.018Z [INFO]  TestACL_Authorize.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:15.018Z [INFO]  TestACL_Authorize.server: New leader elected: payload=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.025Z [INFO]  TestACL_Authorize.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:15.056Z [ERROR] TestACL_Authorize.anti_entropy: failed to sync remote state: error="ACL not found"
>     writer.go:29: 2020-02-23T02:47:15.075Z [INFO]  TestACL_Authorize.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:15.075Z [INFO]  TestACL_Authorize.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:15.112Z [INFO]  TestACL_Authorize.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:15.145Z [INFO]  TestACL_Authorize.server: Bootstrapped ACL master token from configuration
>     writer.go:29: 2020-02-23T02:47:15.162Z [INFO]  TestACL_Authorize.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:15.162Z [INFO]  TestACL_Authorize.leader: started routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:15.162Z [INFO]  TestACL_Authorize.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:15.162Z [DEBUG] TestACL_Authorize.server: transitioning out of legacy ACL mode
>     writer.go:29: 2020-02-23T02:47:15.162Z [INFO]  TestACL_Authorize.server.serf.lan: serf: EventMemberUpdate: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.162Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberUpdate: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1
>     writer.go:29: 2020-02-23T02:47:15.162Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-update server=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.178Z [INFO]  TestACL_Authorize.server: Created ACL anonymous token from configuration
>     writer.go:29: 2020-02-23T02:47:15.178Z [INFO]  TestACL_Authorize.server.serf.lan: serf: EventMemberUpdate: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.178Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberUpdate: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1
>     writer.go:29: 2020-02-23T02:47:15.179Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-update server=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:15.305Z [INFO]  TestACL_Authorize: Synced node info
>     writer.go:29: 2020-02-23T02:47:15.305Z [DEBUG] TestACL_Authorize: Node info in sync
>     writer.go:29: 2020-02-23T02:47:15.305Z [DEBUG] connect.ca.consul: consul CA provider configured: id=07:80:c8:de:f6:41:86:29:8f:9c:b8:17:d6:48:c2:d5:c5:5c:7f:0c:03:f7:cf:97:5a:a7:c1:68:aa:23:ae:81 is_primary=true
>     writer.go:29: 2020-02-23T02:47:15.535Z [INFO]  TestACL_Authorize.server.connect: initialized primary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:15.535Z [INFO]  TestACL_Authorize.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:15.535Z [DEBUG] TestACL_Authorize.server: Skipping self join check for node since the cluster is too small: node=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.535Z [INFO]  TestACL_Authorize.server: member joined, marking health alive: member=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.598Z [DEBUG] TestACL_Authorize.server: Skipping self join check for node since the cluster is too small: node=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.598Z [DEBUG] TestACL_Authorize.server: Skipping self join check for node since the cluster is too small: node=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f
>     writer.go:29: 2020-02-23T02:47:15.807Z [WARN]  TestACL_Authorize: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:15.807Z [DEBUG] TestACL_Authorize.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:15.808Z [DEBUG] TestACL_Authorize.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:16.165Z [INFO]  TestACL_Authorize.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:a90f4442-9061-5665-8cea-71fa450d7766 Address:127.0.0.1:16882}]"
>     writer.go:29: 2020-02-23T02:47:16.165Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberJoin: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.166Z [INFO]  TestACL_Authorize.server.serf.lan: serf: EventMemberJoin: Node-a90f4442-9061-5665-8cea-71fa450d7766 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.166Z [INFO]  TestACL_Authorize: Started DNS server: address=127.0.0.1:16877 network=udp
>     writer.go:29: 2020-02-23T02:47:16.166Z [INFO]  TestACL_Authorize.server.raft: entering follower state: follower="Node at 127.0.0.1:16882 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:16.166Z [INFO]  TestACL_Authorize.server: Adding LAN server: server="Node-a90f4442-9061-5665-8cea-71fa450d7766 (Addr: tcp/127.0.0.1:16882) (DC: dc2)"
>     writer.go:29: 2020-02-23T02:47:16.166Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-join server=Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2 area=wan
>     writer.go:29: 2020-02-23T02:47:16.166Z [INFO]  TestACL_Authorize: Started DNS server: address=127.0.0.1:16877 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.167Z [INFO]  TestACL_Authorize: Started HTTP server: address=127.0.0.1:16878 network=tcp
>     writer.go:29: 2020-02-23T02:47:16.167Z [INFO]  TestACL_Authorize: started state syncer
>     writer.go:29: 2020-02-23T02:47:16.235Z [WARN]  TestACL_Authorize.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:16.235Z [INFO]  TestACL_Authorize.server.raft: entering candidate state: node="Node at 127.0.0.1:16882 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:16.338Z [DEBUG] TestACL_Authorize.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:16.338Z [DEBUG] TestACL_Authorize.server.raft: vote granted: from=a90f4442-9061-5665-8cea-71fa450d7766 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:16.338Z [INFO]  TestACL_Authorize.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:16.338Z [INFO]  TestACL_Authorize.server.raft: entering leader state: leader="Node at 127.0.0.1:16882 [Leader]"
>     writer.go:29: 2020-02-23T02:47:16.338Z [INFO]  TestACL_Authorize.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:16.338Z [INFO]  TestACL_Authorize.server: New leader elected: payload=Node-a90f4442-9061-5665-8cea-71fa450d7766
>     writer.go:29: 2020-02-23T02:47:16.353Z [WARN]  TestACL_Authorize.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc1 method=ACL.GetPolicy
>     writer.go:29: 2020-02-23T02:47:16.354Z [INFO]  TestACL_Authorize: (WAN) joining: wan_addresses=[127.0.0.1:16887]
>     writer.go:29: 2020-02-23T02:47:16.354Z [DEBUG] TestACL_Authorize.server.memberlist.wan: memberlist: Stream connection from=127.0.0.1:59870
>     writer.go:29: 2020-02-23T02:47:16.354Z [DEBUG] TestACL_Authorize.server.memberlist.wan: memberlist: Initiating push/pull sync with: 127.0.0.1:16887
>     writer.go:29: 2020-02-23T02:47:16.354Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberJoin: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.354Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-join server=Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2 area=wan
>     writer.go:29: 2020-02-23T02:47:16.355Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberJoin: Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:16.355Z [INFO]  TestACL_Authorize: (WAN) joined: number_of_nodes=1
>     writer.go:29: 2020-02-23T02:47:16.355Z [WARN]  TestACL_Authorize.server.rpc: RPC request for DC is currently failing as no path was found: datacenter=dc1 method=ACL.GetPolicy
>     writer.go:29: 2020-02-23T02:47:16.355Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-join server=Node-136c3324-9cc4-28c2-fcd3-81f9a984282f.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.leader: started routine: routine="ACL policy replication"
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.leader: started routine: routine="ACL role replication"
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.server.replication.acl.policy: started ACL Policy replication
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.leader: started routine: routine="ACL token replication"
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.leader: started routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.server.serf.lan: serf: EventMemberUpdate: Node-a90f4442-9061-5665-8cea-71fa450d7766
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberUpdate: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-update server=Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2 area=wan
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.server.replication.acl.token: started ACL Token replication
>     writer.go:29: 2020-02-23T02:47:16.408Z [INFO]  TestACL_Authorize.server.replication.acl.role: started ACL Role replication
>     writer.go:29: 2020-02-23T02:47:16.409Z [DEBUG] TestACL_Authorize.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.token: finished fetching acls: amount=3
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.token: acl replication: local=0 remote=3
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.token: acl replication: deletions=0 updates=3
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.role: finished fetching acls: amount=0
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.role: acl replication: local=0 remote=0
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.role: acl replication: deletions=0 updates=0
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.role: ACL replication completed through remote index: index=1
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: finished fetching acls: amount=2
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: acl replication: local=0 remote=2
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: acl replication: deletions=0 updates=2
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.token: acl replication - downloaded updates: amount=3
>     writer.go:29: 2020-02-23T02:47:16.410Z [DEBUG] TestACL_Authorize.server.replication.acl.token: acl replication - performing updates
>     writer.go:29: 2020-02-23T02:47:16.411Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: acl replication - downloaded updates: amount=2
>     writer.go:29: 2020-02-23T02:47:16.411Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: acl replication - performing updates
>     writer.go:29: 2020-02-23T02:47:16.479Z [DEBUG] TestACL_Authorize.server.replication.acl.token: acl replication - upserted batch: number_upserted=3 batch_size=442
>     writer.go:29: 2020-02-23T02:47:16.479Z [DEBUG] TestACL_Authorize.server.replication.acl.token: acl replication - finished updates
>     writer.go:29: 2020-02-23T02:47:16.479Z [DEBUG] TestACL_Authorize.server.replication.acl.token: ACL replication completed through remote index: index=18
>     writer.go:29: 2020-02-23T02:47:16.538Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: acl replication - upserted batch: number_upserted=2 batch_size=675
>     writer.go:29: 2020-02-23T02:47:16.538Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: acl replication - finished updates
>     writer.go:29: 2020-02-23T02:47:16.538Z [DEBUG] TestACL_Authorize.server.replication.acl.policy: ACL replication completed through remote index: index=17
>     writer.go:29: 2020-02-23T02:47:16.588Z [INFO]  TestACL_Authorize: Synced node info
>     writer.go:29: 2020-02-23T02:47:16.588Z [DEBUG] TestACL_Authorize: Node info in sync
>     writer.go:29: 2020-02-23T02:47:16.666Z [INFO]  TestACL_Authorize.server.serf.wan: serf: EventMemberUpdate: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:16.666Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:16.666Z [INFO]  TestACL_Authorize.server: Handled event for server in area: event=member-update server=Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2 area=wan
>     writer.go:29: 2020-02-23T02:47:17.166Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:17.199Z [DEBUG] TestACL_Authorize.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:17.367Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:17.666Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:17.867Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:18.166Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:18.367Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:18.866Z [DEBUG] TestACL_Authorize.server.serf.wan: serf: messageJoinType: Node-a90f4442-9061-5665-8cea-71fa450d7766.dc2
>     writer.go:29: 2020-02-23T02:47:18.938Z [DEBUG] connect.ca.consul: consul CA provider configured: id=ad:4a:c6:ab:ef:63:c9:60:1a:51:7f:19:62:e3:e9:d9:0e:76:55:10:6e:74:24:69:28:a1:6c:b8:b9:8f:fd:89 is_primary=false
>     writer.go:29: 2020-02-23T02:47:19.192Z [INFO]  TestACL_Authorize.server.connect: received new intermediate certificate from primary datacenter
>     writer.go:29: 2020-02-23T02:47:19.261Z [DEBUG] TestACL_Authorize: Node info in sync
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.server.connect: updated root certificates from primary datacenter
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.server.connect: initialized secondary datacenter CA with provider: provider=consul
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.leader: started routine: routine="config entry replication"
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.leader: started routine: routine="secondary CA roots watch"
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.leader: started routine: routine="intention replication"
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.leader: started routine: routine="secondary cert renew watch"
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.leader: started routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:19.528Z [DEBUG] TestACL_Authorize.server: Skipping self join check for node since the cluster is too small: node=Node-a90f4442-9061-5665-8cea-71fa450d7766
>     writer.go:29: 2020-02-23T02:47:19.528Z [INFO]  TestACL_Authorize.server: member joined, marking health alive: member=Node-a90f4442-9061-5665-8cea-71fa450d7766
>     writer.go:29: 2020-02-23T02:47:19.529Z [DEBUG] TestACL_Authorize.server.connect: starting Connect CA root replication from primary datacenter: primary=dc1
>     writer.go:29: 2020-02-23T02:47:19.529Z [DEBUG] TestACL_Authorize.server.connect: starting Connect intention replication from primary datacenter: primary=dc1
>     writer.go:29: 2020-02-23T02:47:19.530Z [DEBUG] TestACL_Authorize.server.replication.config_entry: finished fetching config entries: amount=0
>     writer.go:29: 2020-02-23T02:47:19.530Z [DEBUG] TestACL_Authorize.server.replication.config_entry: Config Entry replication: local=0 remote=0
>     writer.go:29: 2020-02-23T02:47:19.530Z [DEBUG] TestACL_Authorize.server.replication.config_entry: Config Entry replication: deletions=0 updates=0
>     writer.go:29: 2020-02-23T02:47:19.530Z [DEBUG] TestACL_Authorize.server.replication.config_entry: replication completed through remote index: index=1
>     writer.go:29: 2020-02-23T02:47:19.931Z [DEBUG] TestACL_Authorize.server: Skipping self join check for node since the cluster is too small: node=Node-a90f4442-9061-5665-8cea-71fa450d7766
>     writer.go:29: 2020-02-23T02:47:20.213Z [DEBUG] TestACL_Authorize.tlsutil: OutgoingRPCWrapper: version=1
>     --- PASS: TestACL_Authorize/master-token (0.00s)
>         --- PASS: TestACL_Authorize/master-token/dc1 (0.00s)
>         --- PASS: TestACL_Authorize/master-token/dc2 (0.00s)
>     --- PASS: TestACL_Authorize/custom-token (0.00s)
>         --- PASS: TestACL_Authorize/custom-token/dc1 (0.00s)
>         --- PASS: TestACL_Authorize/custom-token/dc2 (0.00s)
>     --- PASS: TestACL_Authorize/too-many-requests (0.00s)
>     --- PASS: TestACL_Authorize/decode-failure (0.00s)
>     --- PASS: TestACL_Authorize/acl-not-found (0.00s)
>     --- PASS: TestACL_Authorize/local-token-in-secondary-dc (0.00s)
>     --- PASS: TestACL_Authorize/local-token-wrong-dc (0.00s)
>     writer.go:29: 2020-02-23T02:47:20.216Z [INFO]  TestACL_Authorize: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:20.216Z [INFO]  TestACL_Authorize.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="ACL role replication"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="ACL token replication"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="secondary CA roots watch"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="ACL policy replication"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="config entry replication"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="intention replication"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="secondary cert renew watch"
>     writer.go:29: 2020-02-23T02:47:20.216Z [WARN]  TestACL_Authorize.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:20.216Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="secondary cert renew watch"
>     writer.go:29: 2020-02-23T02:47:20.645Z [WARN]  TestACL_Authorize.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:21.131Z [INFO]  TestACL_Authorize.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:21.131Z [INFO]  TestACL_Authorize.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: consul server down
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: shutdown complete
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: Stopping server: protocol=DNS address=127.0.0.1:16877 network=tcp
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.rpc: RPC failed to server in DC: server=127.0.0.1:16888 datacenter=dc1 method=Intention.List error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.connect: error replicating intentions: routine="intention replication" error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.rpc: RPC failed to server in DC: server=127.0.0.1:16888 datacenter=dc1 method=ConfigEntry.ListAll error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize.server.replication.config_entry: stopped replication
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="config entry replication"
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.rpc: RPC failed to server in DC: server=127.0.0.1:16888 datacenter=dc1 method=ACL.TokenList error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [WARN]  TestACL_Authorize.server.replication.acl.token: ACL replication error (will retry if still leader): error="failed to retrieve remote ACL tokens: rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="ACL token replication"
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.rpc: RPC failed to server in DC: server=127.0.0.1:16888 datacenter=dc1 method=ConnectCA.Roots error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.connect: CA root replication failed, will retry: routine="secondary CA roots watch" error="Error retrieving the primary datacenter's roots: rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.rpc: RPC failed to server in DC: server=127.0.0.1:16888 datacenter=dc1 method=ACL.PolicyList error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [WARN]  TestACL_Authorize.server.replication.acl.policy: ACL replication error (will retry if still leader): error="failed to retrieve remote ACL policies: rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="ACL policy replication"
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: Stopping server: protocol=DNS address=127.0.0.1:16877 network=udp
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: Stopping server: protocol=HTTP address=127.0.0.1:16878 network=tcp
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:21.132Z [ERROR] TestACL_Authorize.server.rpc: RPC failed to server in DC: server=127.0.0.1:16888 datacenter=dc1 method=ACL.RoleList error="rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [WARN]  TestACL_Authorize.server.replication.acl.role: ACL replication error (will retry if still leader): error="failed to retrieve remote ACL roles: rpc error making call: EOF"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="ACL role replication"
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: Endpoints down
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:21.132Z [INFO]  TestACL_Authorize.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopping routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:21.132Z [WARN]  TestACL_Authorize.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="CA root pruning"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="legacy ACL token upgrade"
>     writer.go:29: 2020-02-23T02:47:21.132Z [DEBUG] TestACL_Authorize.leader: stopped routine: routine="acl token reaping"
>     writer.go:29: 2020-02-23T02:47:21.211Z [WARN]  TestACL_Authorize.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:21.371Z [INFO]  TestACL_Authorize.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:21.371Z [INFO]  TestACL_Authorize.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: consul server down
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: shutdown complete
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: Stopping server: protocol=DNS address=127.0.0.1:16883 network=tcp
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: Stopping server: protocol=DNS address=127.0.0.1:16883 network=udp
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: Stopping server: protocol=HTTP address=127.0.0.1:16884 network=tcp
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:21.372Z [INFO]  TestACL_Authorize: Endpoints down
> === CONT  TestACL_Legacy_Update
> === CONT  TestIntentionsMatch_byInvalid
> --- FAIL: TestACL_Legacy_Destroy (4.65s)
>     writer.go:29: 2020-02-23T02:47:19.169Z [WARN]  TestACL_Legacy_Destroy: The 'acl_datacenter' field is deprecated. Use the 'primary_datacenter' field instead.
>     writer.go:29: 2020-02-23T02:47:19.169Z [WARN]  TestACL_Legacy_Destroy: bootstrap = true: do not enable unless necessary
>     writer.go:29: 2020-02-23T02:47:19.169Z [DEBUG] TestACL_Legacy_Destroy.tlsutil: Update: version=1
>     writer.go:29: 2020-02-23T02:47:19.169Z [DEBUG] TestACL_Legacy_Destroy.tlsutil: OutgoingRPCWrapper: version=1
>     writer.go:29: 2020-02-23T02:47:21.012Z [INFO]  TestACL_Legacy_Destroy.server.raft: initial configuration: index=1 servers="[{Suffrage:Voter ID:3e2b3678-b9d7-fede-0374-caab8d123062 Address:127.0.0.1:16912}]"
>     writer.go:29: 2020-02-23T02:47:21.012Z [INFO]  TestACL_Legacy_Destroy.server.raft: entering follower state: follower="Node at 127.0.0.1:16912 [Follower]" leader=
>     writer.go:29: 2020-02-23T02:47:21.012Z [INFO]  TestACL_Legacy_Destroy.server.serf.wan: serf: EventMemberJoin: Node-3e2b3678-b9d7-fede-0374-caab8d123062.dc1 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy.server.serf.lan: serf: EventMemberJoin: Node-3e2b3678-b9d7-fede-0374-caab8d123062 127.0.0.1
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy.server: Handled event for server in area: event=member-join server=Node-3e2b3678-b9d7-fede-0374-caab8d123062.dc1 area=wan
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy.server: Adding LAN server: server="Node-3e2b3678-b9d7-fede-0374-caab8d123062 (Addr: tcp/127.0.0.1:16912) (DC: dc1)"
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy: Started DNS server: address=127.0.0.1:16907 network=udp
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy: Started DNS server: address=127.0.0.1:16907 network=tcp
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy: Started HTTP server: address=127.0.0.1:16908 network=tcp
>     writer.go:29: 2020-02-23T02:47:21.013Z [INFO]  TestACL_Legacy_Destroy: started state syncer
>     writer.go:29: 2020-02-23T02:47:21.075Z [WARN]  TestACL_Legacy_Destroy.server.raft: heartbeat timeout reached, starting election: last-leader=
>     writer.go:29: 2020-02-23T02:47:21.075Z [INFO]  TestACL_Legacy_Destroy.server.raft: entering candidate state: node="Node at 127.0.0.1:16912 [Candidate]" term=2
>     writer.go:29: 2020-02-23T02:47:22.427Z [DEBUG] TestACL_Legacy_Destroy.server.raft: votes: needed=1
>     writer.go:29: 2020-02-23T02:47:22.427Z [DEBUG] TestACL_Legacy_Destroy.server.raft: vote granted: from=3e2b3678-b9d7-fede-0374-caab8d123062 term=2 tally=1
>     writer.go:29: 2020-02-23T02:47:22.427Z [INFO]  TestACL_Legacy_Destroy.server.raft: election won: tally=1
>     writer.go:29: 2020-02-23T02:47:22.427Z [INFO]  TestACL_Legacy_Destroy.server.raft: entering leader state: leader="Node at 127.0.0.1:16912 [Leader]"
>     writer.go:29: 2020-02-23T02:47:22.427Z [INFO]  TestACL_Legacy_Destroy.server: cluster leadership acquired
>     writer.go:29: 2020-02-23T02:47:22.427Z [INFO]  TestACL_Legacy_Destroy.server: New leader elected: payload=Node-3e2b3678-b9d7-fede-0374-caab8d123062
>     writer.go:29: 2020-02-23T02:47:22.545Z [ERROR] TestACL_Legacy_Destroy.anti_entropy: failed to sync remote state: error="ACL not found"
>     writer.go:29: 2020-02-23T02:47:22.563Z [INFO]  TestACL_Legacy_Destroy.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:23.029Z [INFO]  TestACL_Legacy_Destroy: Requesting shutdown
>     writer.go:29: 2020-02-23T02:47:23.029Z [INFO]  TestACL_Legacy_Destroy.server: shutting down server
>     writer.go:29: 2020-02-23T02:47:23.029Z [WARN]  TestACL_Legacy_Destroy.server.serf.lan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:23.155Z [INFO]  TestACL_Legacy_Destroy.server: Created ACL 'global-management' policy
>     writer.go:29: 2020-02-23T02:47:23.155Z [WARN]  TestACL_Legacy_Destroy.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:23.155Z [INFO]  TestACL_Legacy_Destroy.server: initializing acls
>     writer.go:29: 2020-02-23T02:47:23.155Z [WARN]  TestACL_Legacy_Destroy.server: Configuring a non-UUID master token is deprecated
>     writer.go:29: 2020-02-23T02:47:23.285Z [WARN]  TestACL_Legacy_Destroy.server.serf.wan: serf: Shutdown without a Leave
>     writer.go:29: 2020-02-23T02:47:23.485Z [INFO]  TestACL_Legacy_Destroy.server.router.manager: shutting down
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: consul server down
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: shutdown complete
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: Stopping server: protocol=DNS address=127.0.0.1:16907 network=tcp
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: Stopping server: protocol=DNS address=127.0.0.1:16907 network=udp
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: Stopping server: protocol=HTTP address=127.0.0.1:16908 network=tcp
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: Waiting for endpoints to shut down
>     writer.go:29: 2020-02-23T02:47:23.763Z [INFO]  TestACL_Legacy_Destroy: Endpoints down
>     writer.go:29: 2020-02-23T02:47:23.763Z [ERROR] TestACL_Legacy_Destroy.server: error transitioning to using new ACLs: error="failed to bootstrap master token: leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:47:23.763Z [ERROR] TestACL_Legacy_Destroy.server: failed to establish leadership: error="failed to bootstrap master token: leadership lost while committing log"
>     writer.go:29: 2020-02-23T02:47:23.763Z [ERROR] TestACL_Legacy_Destroy.server: failed to transfer leadership attempt, will retry: attempt=0 retry_limit=3 error="raft is already shutdown"
>     retry.go:121: testagent.go:116: unavailable. last error: Catalog.ListNodes failed: ACL not found
>         testagent.go:116: TestAgent already started
 [...]
> dh_auto_test: error: cd _build && go test -vet=off -v -p 4 -short -failfast -timeout 8m github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/agent github.com/hashicorp/consul/agent/ae github.com/hashicorp/consul/agent/agentpb github.com/hashicorp/consul/agent/config github.com/hashicorp/consul/agent/debug github.com/hashicorp/consul/agent/exec github.com/hashicorp/consul/agent/local github.com/hashicorp/consul/agent/metadata github.com/hashicorp/consul/agent/mock github.com/hashicorp/consul/agent/pool github.com/hashicorp/consul/agent/proxycfg github.com/hashicorp/consul/agent/router github.com/hashicorp/consul/agent/structs github.com/hashicorp/consul/agent/systemd github.com/hashicorp/consul/agent/token github.com/hashicorp/consul/agent/xds github.com/hashicorp/consul/command github.com/hashicorp/consul/command/acl github.com/hashicorp/consul/command/acl/agenttokens github.com/hashicorp/consul/command/acl/authmethod github.com/hashicorp/consul/command/acl/authmethod/create github.com/hashicorp/consul/command/acl/authmethod/delete github.com/hashicorp/consul/command/acl/authmethod/list github.com/hashicorp/consul/command/acl/authmethod/read github.com/hashicorp/consul/command/acl/authmethod/update github.com/hashicorp/consul/command/acl/bindingrule github.com/hashicorp/consul/command/acl/bindingrule/create github.com/hashicorp/consul/command/acl/bindingrule/delete github.com/hashicorp/consul/command/acl/bindingrule/list github.com/hashicorp/consul/command/acl/bindingrule/read github.com/hashicorp/consul/command/acl/bindingrule/update github.com/hashicorp/consul/command/acl/bootstrap github.com/hashicorp/consul/command/acl/policy github.com/hashicorp/consul/command/acl/policy/create github.com/hashicorp/consul/command/acl/policy/delete github.com/hashicorp/consul/command/acl/policy/list github.com/hashicorp/consul/command/acl/policy/read github.com/hashicorp/consul/command/acl/policy/update 
> github.com/hashicorp/consul/command/acl/role github.com/hashicorp/consul/command/acl/role/create github.com/hashicorp/consul/command/acl/role/delete github.com/hashicorp/consul/command/acl/role/list github.com/hashicorp/consul/command/acl/role/read github.com/hashicorp/consul/command/acl/role/update github.com/hashicorp/consul/command/acl/rules github.com/hashicorp/consul/command/acl/token github.com/hashicorp/consul/command/acl/token/clone github.com/hashicorp/consul/command/acl/token/create github.com/hashicorp/consul/command/acl/token/delete github.com/hashicorp/consul/command/acl/token/list github.com/hashicorp/consul/command/acl/token/read github.com/hashicorp/consul/command/acl/token/update github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/command/catalog github.com/hashicorp/consul/command/catalog/list/dc github.com/hashicorp/consul/command/catalog/list/nodes github.com/hashicorp/consul/command/catalog/list/services github.com/hashicorp/consul/command/config github.com/hashicorp/consul/command/config/delete github.com/hashicorp/consul/command/config/list github.com/hashicorp/consul/command/config/read github.com/hashicorp/consul/command/config/write github.com/hashicorp/consul/command/connect github.com/hashicorp/consul/command/connect/ca github.com/hashicorp/consul/command/connect/ca/get github.com/hashicorp/consul/command/connect/ca/set github.com/hashicorp/consul/command/connect/envoy github.com/hashicorp/consul/command/connect/envoy/pipe-bootstrap github.com/hashicorp/consul/command/connect/proxy github.com/hashicorp/consul/command/debug github.com/hashicorp/consul/command/event github.com/hashicorp/consul/command/exec github.com/hashicorp/consul/command/flags github.com/hashicorp/consul/command/forceleave github.com/hashicorp/consul/command/helpers github.com/hashicorp/consul/command/info github.com/hashicorp/consul/command/intention github.com/hashicorp/consul/command/intention/check 
> github.com/hashicorp/consul/command/intention/create github.com/hashicorp/consul/command/intention/delete github.com/hashicorp/consul/command/intention/finder github.com/hashicorp/consul/command/intention/get github.com/hashicorp/consul/command/intention/match github.com/hashicorp/consul/command/join github.com/hashicorp/consul/command/keygen github.com/hashicorp/consul/command/keyring github.com/hashicorp/consul/command/kv github.com/hashicorp/consul/command/kv/del github.com/hashicorp/consul/command/kv/exp github.com/hashicorp/consul/command/kv/get github.com/hashicorp/consul/command/kv/imp github.com/hashicorp/consul/command/kv/impexp github.com/hashicorp/consul/command/kv/put github.com/hashicorp/consul/command/leave github.com/hashicorp/consul/command/lock github.com/hashicorp/consul/command/login github.com/hashicorp/consul/command/logout github.com/hashicorp/consul/command/maint github.com/hashicorp/consul/command/members github.com/hashicorp/consul/command/monitor github.com/hashicorp/consul/command/operator github.com/hashicorp/consul/command/operator/autopilot github.com/hashicorp/consul/command/operator/autopilot/get github.com/hashicorp/consul/command/operator/autopilot/set github.com/hashicorp/consul/command/operator/raft github.com/hashicorp/consul/command/operator/raft/listpeers github.com/hashicorp/consul/command/operator/raft/removepeer github.com/hashicorp/consul/command/reload github.com/hashicorp/consul/command/rtt github.com/hashicorp/consul/command/services github.com/hashicorp/consul/command/services/deregister github.com/hashicorp/consul/command/services/register github.com/hashicorp/consul/command/snapshot github.com/hashicorp/consul/command/snapshot/inspect github.com/hashicorp/consul/command/snapshot/restore github.com/hashicorp/consul/command/snapshot/save github.com/hashicorp/consul/command/validate github.com/hashicorp/consul/command/version github.com/hashicorp/consul/command/watch github.com/hashicorp/consul/connect 
> github.com/hashicorp/consul/connect/certgen github.com/hashicorp/consul/connect/proxy github.com/hashicorp/consul/ipaddr github.com/hashicorp/consul/lib github.com/hashicorp/consul/lib/file github.com/hashicorp/consul/lib/semaphore github.com/hashicorp/consul/logging github.com/hashicorp/consul/logging/monitor github.com/hashicorp/consul/sdk/freeport github.com/hashicorp/consul/sdk/testutil github.com/hashicorp/consul/sdk/testutil/retry github.com/hashicorp/consul/sentinel github.com/hashicorp/consul/service_os github.com/hashicorp/consul/snapshot github.com/hashicorp/consul/testrpc github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/types github.com/hashicorp/consul/version returned exit code 1
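For anyone triaging this: the actual failure above is TestACL_Legacy_Destroy, which aborts with "failed to bootstrap master token: leadership lost while committing log" and then "TestAgent already started", so it looks timing-sensitive under parallel load rather than a code error. A sketch for re-running just that test serially with the same flags the build used (the github.com/hashicorp/consul/agent package path is an assumption inferred from the test names, not taken from the log):

```shell
# Hypothetical reproduction sketch: run only the failing test, with
# parallelism disabled (-p 1) to rule out load-induced leader-election
# timeouts. Assumes the dh_auto_test _build tree from the log.
cd _build
go test -vet=off -v -p 1 -run 'TestACL_Legacy_Destroy' \
    -timeout 8m github.com/hashicorp/consul/agent
```

If the test passes in isolation, that would point at a flaky, timing-dependent test rather than a genuine regression in the package.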

The full build log is available from:
   http://qa-logs.debian.net/2020/02/22/consul_1.7.0+dfsg1-1_unstable.log

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

About the archive rebuild: The rebuild was done on EC2 VM instances from
Amazon Web Services, using a clean, minimal and up-to-date chroot. Every
failed build was retried once to eliminate random failures.


