Building

Started building at 2022/09/01 22:27:15
Using pegged server, 1948 build
Calculating base
Updating mirror
Basing run on 7.2.0-1948 3df08568d
Updating tree for run 01.09.2022-22.27
query is at c3f3821, changes since last good build: 
 c3f3821ea MB-53533 Ensure initialisation is complete ahead of PrepareTopologyChange processing.
 e74608abe MB-53541 Independent exchange notifier for merge actions.
 6a1b677ce MB-53565 Do not consider primary index for index scan inside correlated subquery
 89f2c4277 MB-53565 Proper formalization for ExpressionTerm and SubqueryTerm under ANSI join
 3c0a17984 MB-53565 Do not use join predicates as index span if under hash join
 f636e8c63 MB-53506 Additional fixes for unit tests
 b5c561c25 MB-53506 Fix some unit test issues
 6de332853 MB-53230. Change load_factor to moving avg
 8fcba5ab8 MB-53410 Revise fix.
 5ce5fd088 MB-53528 add SkipMetering to index connection
 8047faf2e MB-53526 register unbillable admin units with regulator
 cae373174 MB-53410 Reduce redundant value creation in UNNEST.
 be0672272 MB-52176:If use_cbo setting is true then run update statistics after CREATE INDEX for non deferred indexes and after BUILD INDEX for deferred indexes
 d0e4f965a updating go mod for building in module aware mode
 f3f2f3580 MB-53514 Revise fix.
 08344c66a MB-53514 Report KV RUs incurred for sequential scans
 120933d52 Revert "MB-53514 Report KV RUs incurred for sequential scans"
 f70f23c4f MB-53514 Report KV RUs incurred for sequential scans
 63fbf9734 MB-53406 Reduce atomics in lockless pool.
 c68ea6550 MB-53506 Prevent multiple mutations on a single key in UPSERT.
 407a2027f MB-53477 Alter threshold for spilling group-by without a quota.
 b7552a36e MB-53472:Add command line option -query_context or -qc to specify the query_context when connecting to the couchbase cluster to launch cbq shell
 1613c282d MB-53506 Improve sequential scan integration with IUD operations.
 c754bf3c9 MB-53176:Perform memory accounting when cached values are freed in the ExpressionScan operator
 e178f11e6 MB-33748 Go fmt
 174adf156 MB-33748 Making time in catalogs ISO-8601 format
 73e10feeb MB-53230. Use minimum value
 1c61edc90 MB-51928 Additional fixes for join filter
 839c44620 MB-44219 credentials not always available to planner
 1c9c1c6a1 MB-53439 MB-50997 scope UDF definitions stored in _system scope, open system:functions
 456304363 MB-53446 Honor USE NL hint for kv range scan
 d1fc29ec6 MB-53444 Avoid invalid numbers when calculating group cost
 f2d314563 MB-44219: In serverless mode, check if user has permissions on bucket when executing a scoped function
 b72782619 MB-53434. return false only on error
 519566933 go mod tidy (update to bleve@v2.3.4)
 c327709fe MB-52098: go mod tidy
 ab8a702f2 MB-53427 cbq vim-mode: enhance commands
 a85b778e2 MB-53420 consider empty non nil parameter list for inline UDF caching
 568270f72 MB-53416. Set Conditional for NVL/NVL2/Decode/IfMissing/IfNull
 6502af299 MB-53391 Fix value tracking for RAW projections
 d1e41aa49 MB-29604 Fix test cases.
 0b269607c MB-44305 Spill sort and group by to disk.
 e8b2e1f49 MB-53394 Don't escape HTML in errors and warnings.
 8d2e43a17 MB-29604 Return warning on division by zero.
 6a6d16cb5 MB-53372 properly formalize select in UDF with variables as non correlated, avoid caching for UDF select
 0faaadf0c MB-53377 Proper set up of join keys when converting from ANSI join to lookup join
 f47e3d29b MB-53371 Use 64-bit value for 'Uses'
 86c6a347a MB-53353 Adjust unbounded folllowing window range
 039b1d27c GOCBC-1325 Add Transaction logger
 c4f13bfb9 MB-52952 - [FTS] Improving the precision of stats used for WUs metering
 4d2617f4a MB-44219 In serverless mode if the user is not an admin user and tries to access a collection, scope or bucket that does not exist or they do not have permissions on:  If the user has permissions on the bucket in the keyspace they are trying to access, display the specific error message. If not, display the generic access denied message.
 758bc9411 MB-52176 Perform automatic UPDATE STATISTICS for CREATE PRIMARY INDEX
 38fa22e85 MB-53254 Ignore transaction binary documents in sequential scans
 5dd58d359 MB-53291 Administrator is throttled
 5fcf49bd5 MB-53298 Do not return error for MISSING value in join processing
 8b1553f13 MB-53248 sub doc api to account for rus
 34267f151 MB-52971 Handing CheckResultReject
 0d0d028ec MB-53260 Fix last_scan_time for sequential scans.
 70e3a059c -= passing TlsCAFile to regulator init (unused/deprecated)
 c32b4fb19 "MB-53230 load factor of query node"
 28049ab09 MB-51961 Fix typo
 47cd8a06a Build failure Revert "MB-53230 load factor of query node"
 46c894124 Revert "MB-44305 Spill sort and group by to disk."
 9f11a3e8b MB-51961 Allow primary index to be used on inner of nested-loop join
 95839d370 MB-44305 Spill sort and group by to disk.
 4d1078dd1 MB-53230 load factor of query node
 422c73fe7 MB-53235 leave item quota management to producer for serialized operators
gometa is at 05cb6b2, changes since last good build: none
ns_server is at 90035b0, changes since last good build: 
 90035b0e7 MB-52195 Tag "system" collections to not be metered
 dd4f8b6de MB-52142: Fix default for bucket throttle setting
 f5a520077 MB-50537: Don't allow strict n2n encryption if external unencrypted dist
 823c23c4d MB-53478: Fix saving anonymous functions to disk
 badb66c8f MB-52142: Add throttle limits to bucket config
 f09932bb4 MB-52044 Fix eaccess crash
 f2b6fed41 MB-52142: Add throttle limits to internal settings
 2efa104ee MB-52350 Allow setting per-bucket storage limits
 7e4c5811d Prevent memcached bucket creation on serverless
 28db38600 MB-52350 Fix default values for storage limits
 8fff21898 MB-51516: Don't clamp in_docs estimate to current checkpoint items
 c2a03f4bf MB-52226: Introduce pause/resume APIs that are stubbed out
 9cffcd45e MB-23768: Don't allow user backup in mixed clusters
 f3e9ba2f0 MB-23768: Fix validator:json_array()
 9e90c3142 MB-23768: Add menelaus_users:store_users and use it for restore
 1f3a00032 MB-23768: Add replicated_dets:change_multiple/2
 0c909be9c MB-23768: [rbac] Make sure we compress docs only once when ...
 a804ff225 MB-23768: Add PUT /settings/rbac/backup
 85a2b99a7 MB-23768: Move security_roles_access and ldap_access checks...
 807549ea8 MB-23768: Fix validator:apply_multi_params
 c85963d04 MB-23768: Add GET /settings/rbac/backup
 253346b7b MB-23768: Call menelaus_roles:validate_roles when validating...
 7b4f6c40e MB-23768: Remove unnecessary has_permission(Permission, Req) check
 8301f1565 MB-23768: Remove permission param in verify_ldap_access
 b08f974b7 MB-53326 Push CCCP payload on all kv nodes
 19898a852 MB-52350 Fix unused variable
 666af3431 MB-52350 Add storage limits to bucket config
 8ae0cbb85 MB-52350 Add storage limits to internal settings
 6b01ad610 MB-53423 Adjust bucket maximums for _system scope
 44aa2ee1f MB-53288: New query node-quota parameter
 207277058 MB-53352 Report the running config profile
 65faa5fe7 MB-51738 Use this_node() in ns_memcached
 3ab091f2e MB-51738 Define this_node() to handle distribution crash
 f9693814a MB-53192: Add upgrade for memory alerts
 763b16746 MB-53193: Reenable autofailover popup alerts
 97fce2439 MB-47905: Pass client cert path to services
 e9572c7c9 Update regulator frequently_changed_key prefix to /regulator/report
 55725498f MB-53323: consider keep nodes when placing buckets in rebalance
 492ae395e Add isServerless to /pools result
couchstore is at 803ec5f, changes since last good build: 
 803ec5f Refactor: mcbp::datatype moved to cb::mcbp::datatype
forestdb is at acba458, changes since last good build: none
kv_engine is at 1d85f5a, changes since last good build: 
 1d85f5a55 MB-53127: Document write should clear read usage
 d69988fa9 MB-52311: [1/n] Pause / Resume Bucket: opcodes
 225d4a7ea Refactor bucket delete to add extra extra unit tests
 c14da3f90 MB-53510: Refactor bucket creation
 8a3da42c2 MB-53543: Disable BackfillSmallBuffer test
 cb7a5b432 MB-53304: Enforce holding of stateLock in VBucket::queueItem [2/3]
 e74ba64e4 MB-53304: Enforce holding of stateLock in VBucket::queueItem [1/3]
 0ceebade0 Remove ServerCallbackIface
 19d61765d MB-52553: Don't special-case persistence cursor in CM::addStats
 5836d9a70 MB-50984: Remove max_checkpoints hard limit on the single vbucket
 47572e321 MB-50984: Default checkpoint_destruction_tasks=2
 00201d310 MB-53523: Only check snap start vs last snap end if active VB
 c90a937a4 Reformat test_reader_thread_starvation_warmup
 dec1c6c2c Merge "Merge branch 'neo' into 'master'"
 af74c95fe Refactor: CheckpointManager::registerCursorBySeqno()
 a933cf568 MB-53448: DCP_ADD_STREAM_FLAG_TO_LATEST should use highSeqno of collections(s) in filter
 08438bfb8 MB-53259: Update DCP Consumer byffer-size at dynamic Bucket Quota change
 e61d09645 [Refactor] deselect bucket before trying to delete
 afac71aab MB-53055: Fix Checkpoint::isEmptyByExpel() semantic
 ae8baf2dc Remove unused code from kvstore_test
 6f5ba689c [Refactor] Move bufferevent related code to subclass
 2dd1745c6 MB-53498: Delay bucket type update
 d50b99685 Merge branch 'neo' into 'master'
 79de292f2 Merge branch 'neo' into 'master'
 34bc1c7d2 Merge "Merge branch 'neo' into 'master'"
 074db327f Enable KVStoreTest GetBySeqno for non-couchstore
 16bd96ae6 MB-53284: Use magma memory optimized writes in BucketQuotaChangeTest
 854eced08 Merge branch 'neo' into 'master'
 50f5747b7 Merge "Merge branch 'neo' into 'master'"
 c231af910 Cleanup: remove 'polling' durability timeout mode
 f5930b3ea Tidy: Checkpoint::queueDirty use structured binding in for loop
 a65ca2ba1 Merge branch 'neo' into 'master'
 16f186be2 Only regenerate serverless/configuration.json if exe changed
 194900077 MB-53052: Remove KVBucket::itemFreqDecayerIsSnoozed()
 4b4ad639d Refactor: Create factory method for Connection objects
 f3ac46848 MB-35297: Fix RangeScan sampling stats NotFound path
 7d3f297f7 MB-46738: Rename dcp_conn_buffer_ratio into dcp_consumer_buffer_ratio
 9bc866891 [Refactor] Remove the history field of sloppy gauge
 a7b78c756 MB-53055: Add highestExpelledSeqno to Checkpoint ostream
 a58bd636c MB-53055: Add highest_expelled_seqno to Checkpoint stats
 7e4587d1e Remove duplicate method in DurabilityEPBucketTest
 12136509b Add labels to Montonic<> members of Checkpoint
 795dd8dc0 MB-53055: Fix exception message in CM::registerCursorBySeqno
 200aa87ae Add "filter" capabilities to delete bucket
 6fcfed646 SetClusterConfig should create config-only bucket
 6990718c8 MB-52953: Remove refs to old replication-throttle params and stats
 0fdcf8882 MB-52953: Remove unused EPStats::replicationThrottleThreshold
 bc4592d8b MB-52953: Use mutation_mem_threshold in ReplicationThrottleEP::hasSomeMemory
 b1ed0feb2 MB-52953: Turn mutation_mem_threshold into mutation_mem_ratio
 45dd2db60 MB-53429: Hold vbState lock during pageOut
 ba18b10ca MB-53438: Acquire the vbState lock during disk backfill
 348287953 MB-53141: Return all if sampling range-scan requests samples > keys
 9da38ff86 MB-35297: Improve logging for RangeScan create/cancel
 79aa3dd72 MB-53100: Add extra seqno log information after we register a cursor
 415b3ec74 MB-53198: Do not abort warmup for shard if scan cancelled
 dc09bb535 Cleanup: Move mcbp::datatype to cb::mcbp::datatype
 a77fca118 MB-35297: Meter RangeScan create
 36d090abe MB-35297: Throttle RangeScan create/continue
 40321cf27 SetClusterConfig should handle all bucket states
 ac0c0486d Merge commit 'couchbase/neo~7' into trunk
 c615d15f2 Merge "Merge commit 'couchbase/neo~10' into trunk"
 a96e4a5e9 MB-52806: Disconnect DCP connections when they loose privilege
 6b7d68b4e MB-52158: Check for privilege in RangeScan continue/cancel
 b887f1f17 Merge commit 'couchbase/neo~10' into trunk
 2cfe963a7 Modernize config parsing [2/2]
 4a6018627 MB-53359: Add uniqe error code for config-bucket
 e0e5d5c98 MB-35297: Add EventDrivenTimeoutTask
 8bfdba483 Cleanup: move mcbp::subdoc under cb::mcbp::subdoc
 cf97e6792 Cleanup: Move mcbp::cas under cb::mcbp::cas
 3834eb115 MB-43127: Log succcess status from dumpCallback
 bcb730456 MB-52172 Refactor source file generation cmake target
 d847f8a55 MB-35297: Meter RangeScan key/values
 a7a610b48 Refactor: Rename CreateBucketCommandContext
 af47290a6 Refactor out wait code to separate method
 3c30a1142 Include all bucket states in "bucket_details "
 5d272f547 MB-53379: Allow Collection enabled clients to select COB
 0042495b9 MB-52975: Fold backfill create and scan into one invocation of run
 53f915d1d MB-35297: runtime must not be zero when backfill completes
 a811f317b MB-53359: Don't try to fetch bucket metrics from config-only bucket
 881774c5e MB-53354: Extend CheckpointMemoryTrackingTest suite for non-SSO case
 7d7389df7 Modernize parse_config [1/2]
 72e650860 Set the correct hostname for dcp metering test
 8325ff14b Remove support for DT_CONFIGFILE
 92c8f4fa8 Remove config_parse from server-api
 f85f41bad MB-35297: RangeScan document 'flags' should match GetMeta byte order
 3eccd2aa6 MB-53157: RangeScanCreate uuid should be a string
 67d4759c0 MB-52953: Add ReplicationThrottleEP::engine member
 c310b2f4a Don't use the term whitelist
 407905037 MB-53197: Add support for ClusterConfigOnly bucket
 f61b2e1c6 MB-53294: Introduce storage_bytes metering metric
 be1577087 MB-52953: Remove unused UseActiveVBMemThreshold
 ecbd40992 MB-35297: Add missing recvResponse / sendCommand from RangeScanTest/CreateInvalid
 e3bbe2ace MB-52953: Use only mutation_mem_threshold in VB::hasMemoryForStoredValue
 533286852 MB-53294: Refactor engine Prometheus metrics
 03056b2d2 MB-53294: Rename Cardinality -> MetricGroup
 8937d6e5a MB-52953: Default replication_throttle_threshold=93
 6579346af MB-52956: Update lastReadSeqno at the end of an OSO backfill
 3af167ac7 MB-52953: Move VBucket::mutationMemThreshold to KVBucket
 8c5af9915 MB-52854: Fix and re-enable the DcpConsumerBufferAckTest suite
 100a5b2af MB-52957: Avoid scan when collection high seqno < start
 cd6df9b81 Make wasFirst in ActiveStream snapshot functions const
 7bc7ee427 Sanity check that snap start > previous snap end
 8f324c470 MB-53184: Extend range-scan computed exclusive-end upto the input
 3f6fb6ba2 MB-46738: Remove Vbid arg from the buffer-ack DCP api
 cdc3c2f29 MB-52842: Fix intermittent failure in 'disk>RAM delete paged-out'
 1588cb007 Merge "Merge branch 'neo' into 'master'"
 552d9e2c7 MB-46738: Remove unused dcp_conn_buffer_size_max
 e44ee005e MB-46738: Remove unused dcp_conn_buffer_size
 769d20940 MB-52264: Add desiredMaxSize stat
 6809d7eae MB-46738: Ensure Consumer buffer size always ratio of bucket quota
 b05ebef25 Merge branch 'neo' into 'master'
 ab1ab27f8 Merge "Merge commit 'ea65052e' into 'couchbase/master'"
 a6e70fdae Merge commit 'ea65052e' into 'couchbase/master'
 503ae084b MB-46738: Make DcpFlowControlManager::engine const
 e5766a51e MB-46738: Make dcp_conn_buffer_ratio dynamic
 979159649 MB-53205: Hold VBucket stateLock while calling fetchValidValue
 89602bce3 Humpty-Dumpty: Failover exploration tool
 bb17d9439 MB-53197: [Refactor] create BucketManager::setClusterConfig
 a81e37998 Upgrade go version to 1.19 for tls_test
 256c78709 Merge "MB-52383: Merge branch 'cheshire-cat' into neo" into neo
 09bbfce5c Merge "MB-47851: Merge branch 'cheshire-cat' into neo" into neo
 e99ce1c4a Merge "MB-47267: Merge branch 'cheshire-cat' into neo" into neo
 112e09c36 Merge "MB-51373: Merge branch 'cheshire-cat' into neo" into neo
 281df3be1 Merge "Merge branch 'cheshire-cat' into neo" into neo
 46014c72f MB-52383: Merge branch 'cheshire-cat' into neo
 ecc2f6bb7 Change the logic for Unmetered privilege
 c73eaf5f5 MB-53100: Add streamName arg to MockActiveStream ctor
 ba7850f07 MB-47851: Merge branch 'cheshire-cat' into neo
 5edb02327 MB-47267: Merge branch 'cheshire-cat' into neo
 8db209a68 MB-51373: Merge branch 'cheshire-cat' into neo
 eb865cbb0 Merge branch 'cheshire-cat' into neo
 f656b5152 Merge "Merge branch 'cheshire-cat' into neo" into neo
 453eb9a98 MB-53282: Reset open_time in early return in close_and_rotate_file
 2a83a2a63 MB-52383: Merge branch 'mad-hatter' into cheshire-cat
 9c684fb52 Merge branch 'mad-hatter' into cheshire-cat
 852883091 Merge branch 'mad-hatter' into cheshire-cat
 0173173cb Revert "MB-52813: Don't call Seek for every call of ::scan"
 f1c3ddc67 Merge branch 'cheshire-cat' into neo
 349c2640c Set GOVERSION to 1.18 to remove warning from cmake
 abfb02f80 MB-46738: FCManager API takes DcpConsumer&
 4ab7dbaa3 MB-52264: Wait for memory to reduce before setting new quota
 d5d7b65d0 [serverless] Split Get metering test to individual tests
 fda7ec6b8 Remove old comment in PagingVisitor
 5d9bdbb44 MB-52633: Swap PagingVisitor freq counter histogram to flat array
 f494fa983 MB-51373: Merge branch 'mad-hatter' into cheshire-cat
 6d32e009a MB-52669: Specify GOVERSION without patch revision
 df808528f Merge "Merge branch 'neo'"
 eeb5cbad7 Merge branch 'neo'
 18a4cd691 MB-52793: Merge branch 'mad-hatter' into cheshire-cat
 b4c2fe22b Merge branch 'mad-hatter' into cheshire-cat
 c80c6f58c MB-51373: Inspect and correct Item objects created by KVStore
 ea65052eb MB-53046: [BP] Timeout SeqnoPersistenceRequests when no data is flushed
 5f6d5dc65 MB-47267 / MB-52383: Make backfill during warmup a PauseResume task
 4e51c38a8 MB-47851: Cancel any requests blocked on warmup if warmup stopped.
 2c6e95c8e MB-47267: Make ObjectRegistry getAllocSize atomic
 3d73de526 MB-52902: Populate kvstore rev if no vbstate found
 ad47f53b7 MB-51373: Inspect and correct Item objects created by KVStore
 8855aebe5 MB-52793: Ensure StoredValue::del updates datatype
 35086bc80 Merge remote-tracking branch 'couchbase/alice' into mad-hatter
 0df2087be MB-43055: [BP] Ensure ItemPager available is not left set to false
 6dfd920a8 MB-43453: mcctl: Use passwd from env or stdin
 b7d5bd362 MB-40531: [BP] Prefer paging from replicas if possible
Switching indexing to unstable
indexing is at 19ea81d, changes since last good build: none
Switching plasma to unstable
plasma is at cfa6534, changes since last good build: 
fatal: Invalid revision range 0141641db3ee3de853547c46ed58c647fc7c43a1..HEAD

Switching nitro to unstable
nitro is at 966c610, changes since last good build: none
Switching gometa to master
gometa is at 05cb6b2, changes since last good build: none
Switching testrunner to master
Submodule 'gauntlet' (https://github.com/pavithra-mahamani/gauntlet) registered for path 'gauntlet'
Submodule 'java_sdk_client' (https://github.com/couchbaselabs/java_sdk_client) registered for path 'java_sdk_client'
Submodule 'lib/capellaAPI' (https://github.com/couchbaselabs/CapellaRESTAPIs) registered for path 'lib/capellaAPI'
Submodule path 'gauntlet': checked out '4e2424851a59c6f4b4edfdb7e36fa6a0874d6300'
Submodule path 'java_sdk_client': checked out '961d8eb79ec29bad962b87425eca59fc43c6fe07'
Submodule path 'lib/capellaAPI': checked out '879091aa331e3d72f913b8192f563715d9e8597a'
testrunner is at f2361d1, changes since last good build: none
Pulling in uncommitted change 176468 at refs/changes/68/176468/14
Total 52 (delta 40), reused 47 (delta 40)
[unstable 8a34f579] MB-51825: Add NumVBuckets in TsVbuuid struct and fix tsvbuuid pool
 Author: Sai Krishna Teja Kommaraju 
 Date: Tue Jun 21 22:27:17 2022 +0530
 1 file changed, 44 insertions(+), 9 deletions(-)
Pulling in uncommitted change 176523 at refs/changes/23/176523/3
Total 51 (delta 40), reused 48 (delta 40)
[unstable 5a9e0c08] MB-51825: Fix memedb_slice_test adding numVBuckets arg
 Author: Sai Krishna Teja Kommaraju 
 Date: Wed Jun 22 16:31:49 2022 +0530
 1 file changed, 1 insertion(+), 1 deletion(-)
Pulling in uncommitted change 178917 at refs/changes/17/178917/8
[unstable 602eeb4d] MB-51825: Pass numVBuckets to Storage from indexer
 Author: Sai Krishna Teja Kommaraju 
 Date: Wed Aug 17 15:08:14 2022 +0530
 5 files changed, 56 insertions(+), 26 deletions(-)
Pulling in uncommitted change 178948 at refs/changes/48/178948/10
[unstable 24b7f55d] MB-51825: Indexer - Fetch numVBuckets from cinfo
 Author: Sai Krishna Teja Kommaraju 
 Date: Wed Aug 17 20:53:00 2022 +0530
 2 files changed, 57 insertions(+), 21 deletions(-)
Pulling in uncommitted change 178949 at refs/changes/49/178949/10
[unstable e07e3109] MB-51825: Scan_Coordinator - Fetch numVbuckets from cinfo
 Author: Sai Krishna Teja Kommaraju 
 Date: Wed Aug 17 22:26:59 2022 +0530
 2 files changed, 14 insertions(+), 11 deletions(-)
Pulling in uncommitted change 178950 at refs/changes/50/178950/11
[unstable 707538dc] MB-51825: Storage_Manager - Fetch numVbuckets from cinfo
 Author: Sai Krishna Teja Kommaraju 
 Date: Wed Aug 17 23:33:14 2022 +0530
 6 files changed, 102 insertions(+), 17 deletions(-)
Pulling in uncommitted change 178951 at refs/changes/51/178951/11
[unstable 8aa984d6] MB-51825: Mutation_Manager - Fetch numVbuckets from cinfo
 Author: Sai Krishna Teja Kommaraju 
 Date: Thu Aug 18 00:06:03 2022 +0530
 4 files changed, 37 insertions(+), 22 deletions(-)
Pulling in uncommitted change 178958 at refs/changes/58/178958/11
[unstable fe86e86c] MB-51825: TimeKeeper - Fetch numVbuckets from cinfo
 Author: Sai Krishna Teja Kommaraju 
 Date: Thu Aug 18 04:46:09 2022 +0530
 3 files changed, 59 insertions(+), 44 deletions(-)
Pulling in uncommitted change 179474 at refs/changes/74/179474/1
Total 74 (delta 60), reused 70 (delta 60)
[unstable e2ba42a9] MB100 : Add function to satisfy datastore.Context interface
 Author: Sai Krishna Teja Kommaraju 
 Date: Thu Sep 1 22:25:22 2022 +0530
 4 files changed, 16 insertions(+)
Pulling in uncommitted change 178623 at refs/changes/23/178623/14
Total 6 (delta 1), reused 2 (delta 1)
[unstable 97e1fbe] MB-52020: Add Shard Locking APIs
 Author: saptarshi.sen 
 Date: Fri Aug 5 11:27:27 2022 -0700
 4 files changed, 755 insertions(+), 29 deletions(-)
Pulling in uncommitted change 179000 at refs/changes/00/179000/9
Total 17 (delta 10), reused 11 (delta 10)
[unstable 81c1885] MB-52020: Add Shard CopyStats
 Author: saptarshi.sen 
 Date: Wed Aug 10 15:03:32 2022 -0700
 9 files changed, 531 insertions(+), 114 deletions(-)
Pulling in uncommitted change 179007 at refs/changes/07/179007/7
Total 24 (delta 17), reused 19 (delta 17)
[unstable 7677c5b] MB-52020: Add TransferShard & RestoreShard API
 Author: saptarshi.sen 
 Date: Fri Aug 19 00:30:05 2022 -0700
 5 files changed, 1884 insertions(+), 80 deletions(-)
Pulling in uncommitted change 179316 at refs/changes/16/179316/4
Total 32 (delta 23), reused 27 (delta 23)
[unstable bff8c6f] MB-52020: Handle Instance Paths in Shard Metadata Snapshot
 Author: saptarshi.sen 
 Date: Sat Aug 27 22:23:31 2022 -0700
 6 files changed, 772 insertions(+), 278 deletions(-)
Pulling in uncommitted change 179442 at refs/changes/42/179442/2
Total 38 (delta 29), reused 34 (delta 29)
[unstable 3c474a1] MB-52020: Handle zero segmentSize in persistent config snapshot
 Author: saptarshi.sen 
 Date: Wed Aug 31 11:19:16 2022 -0700
 4 files changed, 13 insertions(+), 16 deletions(-)
Pulling in uncommitted change 179443 at refs/changes/43/179443/2
Total 44 (delta 36), reused 42 (delta 36)
[unstable 1c330ba] MB-52020: Add checksum to persistent config snapshot
 Author: saptarshi.sen 
 Date: Wed Aug 31 07:50:00 2022 -0700
 4 files changed, 114 insertions(+), 15 deletions(-)
Building community edition
Building cmakefiles and deps [CE]
Building main product [CE]
Build CE finished
BUILD_ENTERPRISE empty. Building enterprise edition
Building Enterprise Edition
Building cmakefiles and deps [EE]
Building main product [EE]
Build EE finished

Testing

Started testing at 2022/09/01 23:16:58
Testing mode: sanity,unit,functional,integration
Using storage type: plasma
Setting ulimit to 200000

Simple Test

Sep 01 23:21:57 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Sep 01 23:25:49 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Sep 01 23:26:33 do_warmup_100k (memcapable.WarmUpMemcachedTest) ... ok
Sep 01 23:27:57 test_view_ops (view.createdeleteview.CreateDeleteViewTests) ... ok
Sep 01 23:28:46 b" 'stop_on_failure': 'True'}"
Sep 01 23:28:46 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops,nodes_in=3,replicas=1,items=50000,get-logs-cluster-run=True,doc_ops=create;update;delete'
Sep 01 23:28:46 b"{'nodes_in': '3', 'replicas': '1', 'items': '50000', 'get-logs-cluster-run': 'True', 'doc_ops': 'create;update;delete', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_1'}"
Sep 01 23:28:46 b'-->result: '
Sep 01 23:28:46 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 0'
Sep 01 23:28:46 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops,nodes_in=3,bucket_type=ephemeral,replicas=1,items=50000,get-logs-cluster-run=True,doc_ops=create;update;delete'
Sep 01 23:28:46 b"{'nodes_in': '3', 'bucket_type': 'ephemeral', 'replicas': '1', 'items': '50000', 'get-logs-cluster-run': 'True', 'doc_ops': 'create;update;delete', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 2, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_2'}"
Sep 01 23:28:46 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:28:46 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t memcapable.WarmUpMemcachedTest.do_warmup_100k,get-logs-cluster-run=True'
Sep 01 23:28:46 b"{'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 3, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_3'}"
Sep 01 23:28:46 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:28:46 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 01 23:28:46 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.createdeleteview.CreateDeleteViewTests.test_view_ops,ddoc_ops=create,test_with_view=True,num_ddocs=1,num_views_per_ddoc=10,items=1000,skip_cleanup=False,get-logs-cluster-run=True'
Sep 01 23:28:46 b"{'ddoc_ops': 'create', 'test_with_view': 'True', 'num_ddocs': '1', 'num_views_per_ddoc': '10', 'items': '1000', 'skip_cleanup': 'False', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 4, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_4'}"
Sep 01 23:28:46 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:28:46 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 01 23:28:46 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 01 23:38:30 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in,num_nodes_to_add=1,skip_rebalance=true,docs-per-day=1,timeout=1200,get-logs-cluster-run=True'
Sep 01 23:38:30 test_employee_dataset_startkey_endkey_queries_rebalance_in (view.viewquerytests.ViewQueryTests) ... ok
Sep 01 23:39:16 test_simple_dataset_stale_queries_data_modification (view.viewquerytests.ViewQueryTests) ... ok
Sep 01 23:42:59 load_with_ops (xdcr.uniXDCR.unidirectional) ... ok
Sep 01 23:46:52 load_with_failover (xdcr.uniXDCR.unidirectional) ... ok
Sep 01 23:49:32 suite_tearDown (xdcr.uniXDCR.unidirectional) ... ok
Sep 01 23:49:32 b"{'num_nodes_to_add': '1', 'skip_rebalance': 'true', 'docs-per-day': '1', 'timeout': '1200', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 5, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_5'}"
Sep 01 23:49:32 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:49:32 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 1 , fail 0'
Sep 01 23:49:32 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification,num-docs=1000,skip_rebalance=true,timeout=1200,get-logs-cluster-run=True'
Sep 01 23:49:32 b"{'num-docs': '1000', 'skip_rebalance': 'true', 'timeout': '1200', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 6, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_6'}"
Sep 01 23:49:32 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:49:32 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Sep 01 23:49:32 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_ops,replicas=1,items=10000,value_size=128,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,get-logs-cluster-run=True'
Sep 01 23:49:32 b"{'replicas': '1', 'items': '10000', 'value_size': '128', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 7, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_7'}"
Sep 01 23:49:32 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:49:32 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Sep 01 23:49:32 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 1 , fail 0'
Sep 01 23:49:32 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_failover,replicas=1,items=10000,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,failover=source,get-logs-cluster-run=True'
Sep 01 23:49:32 b"{'replicas': '1', 'items': '10000', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'failover': 'source', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 8, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-01_23-17-20/test_8'}"
Sep 01 23:49:32 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 01 23:49:32 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 01 23:49:32 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Sep 01 23:49:32 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 2 , fail 0'
Sep 01 23:49:32 b'Run after suite setup for xdcr.uniXDCR.unidirectional.load_with_failover'
Sep 01 23:49:33 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Sep 01 23:49:33 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Sep 01 23:49:33 b"('memcapable.WarmUpMemcachedTest.do_warmup_100k', ' pass')"
Sep 01 23:49:33 b"('view.createdeleteview.CreateDeleteViewTests.test_view_ops', ' pass')"
Sep 01 23:49:33 b"('view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in', ' pass')"
Sep 01 23:49:33 b"('view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification', ' pass')"
Sep 01 23:49:33 b"('xdcr.uniXDCR.unidirectional.load_with_ops', ' pass')"
Sep 01 23:49:33 b"('xdcr.uniXDCR.unidirectional.load_with_failover', ' pass')"

Unit tests

=== RUN   TestMerger
--- PASS: TestMerger (0.02s)
=== RUN   TestInsert
--- PASS: TestInsert (0.00s)
=== RUN   TestInsertPerf
16000 items took 15.471638ms -> 1.0341503595159091e+06 items/s conflicts 1
--- PASS: TestInsertPerf (0.02s)
=== RUN   TestGetPerf
16000 items took 7.57188ms -> 2.1130815596654993e+06 items/s
--- PASS: TestGetPerf (0.01s)
=== RUN   TestGetRangeSplitItems
{
"node_count":             1000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3450,
"memory_used":            45520,
"node_allocs":            1000,
"node_frees":             0,
"level_node_distribution":{
"level0": 747,
"level1": 181,
"level2": 56,
"level3": 13,
"level4": 2,
"level5": 1,
"level6": 0,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Split range keys [105 161 346 379 434 523 713]
No of items in each range [105 56 185 33 55 89 190 287]
--- PASS: TestGetRangeSplitItems (0.00s)
=== RUN   TestBuilder
{
"node_count":             50000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3368,
"memory_used":            2269408,
"node_allocs":            50000,
"node_frees":             0,
"level_node_distribution":{
"level0": 37380,
"level1": 9466,
"level2": 2370,
"level3": 578,
"level4": 152,
"level5": 40,
"level6": 9,
"level7": 4,
"level8": 1,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Took 7.12436ms to build 50000 items, 7.018174e+06 items/sec
Took 6.660095ms to iterate 50000 items
--- PASS: TestBuilder (0.01s)
=== RUN   TestNodeDCAS
--- PASS: TestNodeDCAS (0.00s)
PASS
ok  	github.com/couchbase/nitro/skiplist	0.074s
=== RUN   TestZstdSimple
--- PASS: TestZstdSimple (0.00s)
=== RUN   TestZstdCompressBound
--- PASS: TestZstdCompressBound (3.04s)
=== RUN   TestZstdErrors
--- PASS: TestZstdErrors (0.00s)
=== RUN   TestZstdCompressLevels
--- PASS: TestZstdCompressLevels (0.75s)
=== RUN   TestZstdEmptySrc
--- PASS: TestZstdEmptySrc (0.00s)
=== RUN   TestZstdLargeSrc
--- PASS: TestZstdLargeSrc (0.00s)
PASS
ok  	github.com/couchbase/plasma/zstd	3.794s
=== RUN   TestAutoTunerWriteUsageStats
--- PASS: TestAutoTunerWriteUsageStats (10.09s)
=== RUN   TestAutoTunerReadUsageStats
--- PASS: TestAutoTunerReadUsageStats (7.54s)
=== RUN   TestAutoTunerCleanerUsageStats
--- PASS: TestAutoTunerCleanerUsageStats (8.57s)
=== RUN   TestAutoTunerDiskStats
--- PASS: TestAutoTunerDiskStats (2.50s)
=== RUN   TestAutoTunerTargetFragRatio
--- PASS: TestAutoTunerTargetFragRatio (0.00s)
=== RUN   TestAutoTunerExcessUsedSpace
--- PASS: TestAutoTunerExcessUsedSpace (0.00s)
=== RUN   TestAutoTunerUsedSpaceRatio
--- PASS: TestAutoTunerUsedSpaceRatio (0.00s)
=== RUN   TestAutoTunerAdjustFragRatio
--- PASS: TestAutoTunerAdjustFragRatio (0.00s)
=== RUN   TestAutoTuneFlushBufferAdjustMemQuotaSingleShard
--- PASS: TestAutoTuneFlushBufferAdjustMemQuotaSingleShard (18.45s)
=== RUN   TestAutoTuneFlushBufferAdjustMemQuotaManyShards
--- PASS: TestAutoTuneFlushBufferAdjustMemQuotaManyShards (11.23s)
=== RUN   TestAutoTuneFlushBufferRebalanceIdleShards
--- PASS: TestAutoTuneFlushBufferRebalanceIdleShards (9.49s)
=== RUN   TestAutoTuneFlushBufferGetUsedMemory
--- PASS: TestAutoTuneFlushBufferGetUsedMemory (17.69s)
=== RUN   TestBloom
--- PASS: TestBloom (4.67s)
=== RUN   TestBloomDisableEnable
--- PASS: TestBloomDisableEnable (3.65s)
=== RUN   TestBloomDisable
--- PASS: TestBloomDisable (0.03s)
=== RUN   TestBloomFreeDuringLookup
--- PASS: TestBloomFreeDuringLookup (0.03s)
=== RUN   TestBloomRecoveryFreeDuringLookup
--- PASS: TestBloomRecoveryFreeDuringLookup (0.07s)
=== RUN   TestBloomRecoverySwapInLookup
--- PASS: TestBloomRecoverySwapInLookup (0.11s)
=== RUN   TestBloomRecoverySwapOutLookup
--- PASS: TestBloomRecoverySwapOutLookup (0.07s)
=== RUN   TestBloomRecoveryInserts
--- PASS: TestBloomRecoveryInserts (0.11s)
=== RUN   TestBloomRecovery
--- PASS: TestBloomRecovery (0.11s)
=== RUN   TestBloomStats
--- PASS: TestBloomStats (3.54s)
=== RUN   TestBloomStatsRecovery
--- PASS: TestBloomStatsRecovery (0.84s)
=== RUN   TestBloomFilterSimple
--- PASS: TestBloomFilterSimple (0.00s)
=== RUN   TestBloomFilterConcurrent
--- PASS: TestBloomFilterConcurrent (21.94s)
=== RUN   TestBitArrayConcurrent
--- PASS: TestBitArrayConcurrent (0.93s)
=== RUN   TestBloomCapacity
--- PASS: TestBloomCapacity (0.00s)
=== RUN   TestBloomNumHashFuncs
--- PASS: TestBloomNumHashFuncs (0.00s)
=== RUN   TestBloomTestAndAdd
--- PASS: TestBloomTestAndAdd (0.22s)
=== RUN   TestBloomReset
--- PASS: TestBloomReset (0.00s)
=== RUN   TestLFSCopier
--- PASS: TestLFSCopier (0.00s)
=== RUN   TestLFSCopierNumBytes
--- PASS: TestLFSCopierNumBytes (0.01s)
=== RUN   TestSBCopyConcurrent
--- PASS: TestSBCopyConcurrent (0.24s)
=== RUN   TestSBCopyCorrupt
--- PASS: TestSBCopyCorrupt (0.01s)
=== RUN   TestLSSCopyHeadTailSingleSegment
--- PASS: TestLSSCopyHeadTailSingleSegment (0.02s)
=== RUN   TestLSSCopyFullSegments
--- PASS: TestLSSCopyFullSegments (0.58s)
=== RUN   TestLSSCopyPartialSegments
--- PASS: TestLSSCopyPartialSegments (0.07s)
=== RUN   TestLSSCopyHolePunching
--- PASS: TestLSSCopyHolePunching (0.60s)
=== RUN   TestLSSCopyConcurrent
--- PASS: TestLSSCopyConcurrent (0.75s)
=== RUN   TestShardCopySimple
--- PASS: TestShardCopySimple (0.28s)
=== RUN   TestShardCopyMetadataCorrupted
--- PASS: TestShardCopyMetadataCorrupted (0.04s)
=== RUN   TestShardCopyLSSMetadataCorrupted
--- PASS: TestShardCopyLSSMetadataCorrupted (0.06s)
=== RUN   TestShardCopyBeforeRecovery
--- PASS: TestShardCopyBeforeRecovery (0.00s)
=== RUN   TestShardCopySkipLog
--- PASS: TestShardCopySkipLog (0.63s)
=== RUN   TestShardCopyAddInstance
--- PASS: TestShardCopyAddInstance (1.58s)
=== RUN   TestShardCopyRestoreShard
--- PASS: TestShardCopyRestoreShard (0.60s)
=== RUN   TestShardCopyRestoreManyShards
--- PASS: TestShardCopyRestoreManyShards (5.75s)
=== RUN   TestShardCopyRestoreConcurrentLogCleaning
--- PASS: TestShardCopyRestoreConcurrentLogCleaning (21.72s)
=== RUN   TestShardCopyRestorePartialRollback
--- PASS: TestShardCopyRestorePartialRollback (12.25s)
=== RUN   TestInvalidMVCCRollback
--- PASS: TestInvalidMVCCRollback (0.22s)
=== RUN   TestShardCopyRestoreConcurrentPurges
--- PASS: TestShardCopyRestoreConcurrentPurges (13.13s)
=== RUN   TestShardCopyDuplicateIndex
--- PASS: TestShardCopyDuplicateIndex (0.11s)
=== RUN   TestTenantCopy
--- PASS: TestTenantCopy (3.65s)
=== RUN   TestLockShardAddInstance
--- PASS: TestLockShardAddInstance (0.14s)
=== RUN   TestLockShardAddInstanceMapping
--- PASS: TestLockShardAddInstanceMapping (0.22s)
=== RUN   TestLockShardCloseInstance
--- PASS: TestLockShardCloseInstance (0.23s)
=== RUN   TestLockShardEmptyShard
--- PASS: TestLockShardEmptyShard (0.12s)
=== RUN   TestDestroyShardID
--- PASS: TestDestroyShardID (0.52s)
=== RUN   TestDestroyShardIDConcurrent
--- PASS: TestDestroyShardIDConcurrent (0.12s)
=== RUN   TestDestroyShardIDNumTenants
--- PASS: TestDestroyShardIDNumTenants (0.41s)
=== RUN   TestDestroyShardIDTenantAddRemove
--- PASS: TestDestroyShardIDTenantAddRemove (0.22s)
=== RUN   TestTransferShardAPI
--- PASS: TestTransferShardAPI (1.91s)
=== RUN   TestTransferShardAPICreateIndexes
--- PASS: TestTransferShardAPICreateIndexes (11.41s)
=== RUN   TestTransferShardAPIWithDropIndexes
--- PASS: TestTransferShardAPIWithDropIndexes (5.74s)
=== RUN   TestTransferShardAPIWithCancel
--- PASS: TestTransferShardAPIWithCancel (6.02s)
=== RUN   TestTransferShardAPIWithCleanup
--- PASS: TestTransferShardAPIWithCleanup (47.44s)
=== RUN   TestRestoreShardAPI
--- PASS: TestRestoreShardAPI (1.73s)
=== RUN   TestRestoreShardNumShards
--- PASS: TestRestoreShardNumShards (0.81s)
=== RUN   TestRestoreShardInvalidLocation
--- PASS: TestRestoreShardInvalidLocation (0.08s)
=== RUN   TestShardDoCleanupAPI
--- PASS: TestShardDoCleanupAPI (0.16s)
=== RUN   TestDiag
--- PASS: TestDiag (0.48s)
=== RUN   TestDumpLog
--- PASS: TestDumpLog (0.06s)
=== RUN   TestExtrasN1
=== RUN   TestExtrasN2
=== RUN   TestExtrasN3
=== RUN   TestGMRecovery
--- PASS: TestGMRecovery (8.34s)
=== RUN   TestIteratorSimple
--- PASS: TestIteratorSimple (4.80s)
=== RUN   TestIteratorSeek
--- PASS: TestIteratorSeek (5.68s)
=== RUN   TestPlasmaIteratorSeekFirst
--- PASS: TestPlasmaIteratorSeekFirst (0.52s)
=== RUN   TestPlasmaIteratorSwapin
--- PASS: TestPlasmaIteratorSwapin (4.90s)
=== RUN   TestIteratorSetEnd
--- PASS: TestIteratorSetEnd (0.74s)
=== RUN   TestIterHiItm
--- PASS: TestIterHiItm (1.90s)
=== RUN   TestIterDeleteSplitMerge
--- PASS: TestIterDeleteSplitMerge (0.04s)
=== RUN   TestKeySamplingSingle
--- PASS: TestKeySamplingSingle (0.09s)
=== RUN   TestKeySamplingAll
--- PASS: TestKeySamplingAll (0.11s)
=== RUN   TestKeySamplingEmpty
--- PASS: TestKeySamplingEmpty (0.03s)
=== RUN   TestKeySamplingExceed
--- PASS: TestKeySamplingExceed (0.10s)
=== RUN   TestLogOperation
--- PASS: TestLogOperation (59.50s)
=== RUN   TestLogLargeSize
--- PASS: TestLogLargeSize (0.17s)
=== RUN   TestLogTrim
--- PASS: TestLogTrim (59.91s)
=== RUN   TestLogSuperblockCorruption
--- PASS: TestLogSuperblockCorruption (60.43s)
=== RUN   TestLogTrimHolePunch
--- PASS: TestLogTrimHolePunch (50.74s)
=== RUN   TestLogMissingAndTruncatedSegments
--- PASS: TestLogMissingAndTruncatedSegments (0.07s)
=== RUN   TestLogReadBeyondMaxFileIndex
--- PASS: TestLogReadBeyondMaxFileIndex (2.78s)
=== RUN   TestLogReadEOFWithMMap
--- PASS: TestLogReadEOFWithMMap (0.00s)
=== RUN   TestShardLSSCleaning
--- PASS: TestShardLSSCleaning (0.22s)
=== RUN   TestShardLSSCleaningDeleteInstance
--- PASS: TestShardLSSCleaningDeleteInstance (0.16s)
=== RUN   TestShardLSSCleaningCorruptInstance
--- PASS: TestShardLSSCleaningCorruptInstance (0.18s)
=== RUN   TestPlasmaLSSCleaner
--- PASS: TestPlasmaLSSCleaner (218.47s)
=== RUN   TestLSSBasic
--- PASS: TestLSSBasic (0.10s)
=== RUN   TestLSSConcurrent
--- PASS: TestLSSConcurrent (0.97s)
=== RUN   TestLSSCleaner
--- PASS: TestLSSCleaner (11.86s)
=== RUN   TestLSSSuperBlock
--- PASS: TestLSSSuperBlock (1.07s)
=== RUN   TestLSSLargeSinglePayload
--- PASS: TestLSSLargeSinglePayload (0.80s)
=== RUN   TestLSSUnstableEnvironment
--- PASS: TestLSSUnstableEnvironment (10.22s)
=== RUN   TestLSSSmallFlushBuffer
--- PASS: TestLSSSmallFlushBuffer (0.01s)
=== RUN   TestLSSTrimFlushBufferGC
--- PASS: TestLSSTrimFlushBufferGC (1.51s)
=== RUN   TestLSSTrimFlushBufferNoIO
--- PASS: TestLSSTrimFlushBufferNoIO (30.01s)
=== RUN   TestLSSTrimFlushBufferWithIO
--- PASS: TestLSSTrimFlushBufferWithIO (33.43s)
=== RUN   TestLSSExtendFlushBufferWithIO
--- PASS: TestLSSExtendFlushBufferWithIO (30.02s)
=== RUN   TestLSSCtxTrimFlushBuffer
--- PASS: TestLSSCtxTrimFlushBuffer (3.81s)
=== RUN   TestLSSNegativeGetFlushBufferMemory
--- PASS: TestLSSNegativeGetFlushBufferMemory (0.02s)
=== RUN   TestMem
Plasma: Adaptive memory quota tuning (decrementing): RSS:761569280, freePercent:89.84774262163191, currentQuota=1099511627776, newQuota=1073741824, netGrowth=0, percent=99
Plasma: Adaptive memory quota tuning (incrementing): RSS:761266176, freePercent: 89.84872246700702, currentQuota=0, newQuota=10995116277
--- PASS: TestMem (15.02s)
=== RUN   TestCpu
--- PASS: TestCpu (14.49s)
=== RUN   TestTopTen20
--- PASS: TestTopTen20 (0.66s)
=== RUN   TestTopTen5
--- PASS: TestTopTen5 (0.14s)
=== RUN   TestMVCCSimple
--- PASS: TestMVCCSimple (0.19s)
=== RUN   TestMVCCLookup
--- PASS: TestMVCCLookup (0.12s)
=== RUN   TestMVCCIteratorRefresh
--- PASS: TestMVCCIteratorRefresh (4.82s)
=== RUN   TestMVCCIteratorRefreshEveryRow
--- PASS: TestMVCCIteratorRefreshEveryRow (0.77s)
=== RUN   TestMVCCGarbageCollection
--- PASS: TestMVCCGarbageCollection (0.09s)
=== RUN   TestMVCCRecoveryPoint
--- PASS: TestMVCCRecoveryPoint (1.82s)
=== RUN   TestMVCCRollbackMergeSibling
--- PASS: TestMVCCRollbackMergeSibling (0.09s)
=== RUN   TestMVCCRollbackCompact
--- PASS: TestMVCCRollbackCompact (0.06s)
=== RUN   TestMVCCRollbackSplit
--- PASS: TestMVCCRollbackSplit (0.05s)
=== RUN   TestMVCCRollbackItemsNotInSnapshot
--- PASS: TestMVCCRollbackItemsNotInSnapshot (0.15s)
=== RUN   TestMVCCRecoveryPointRollbackedSnapshot
--- PASS: TestMVCCRecoveryPointRollbackedSnapshot (0.87s)
=== RUN   TestMVCCRollbackBetweenRecoveryPoint
--- PASS: TestMVCCRollbackBetweenRecoveryPoint (0.88s)
=== RUN   TestMVCCRecoveryPointCrash
--- PASS: TestMVCCRecoveryPointCrash (0.08s)
=== RUN   TestMVCCIntervalGC
--- PASS: TestMVCCIntervalGC (0.21s)
=== RUN   TestMVCCItemsCount
--- PASS: TestMVCCItemsCount (0.32s)
=== RUN   TestLargeItems
--- PASS: TestLargeItems (106.90s)
=== RUN   TestTooLargeKey
--- PASS: TestTooLargeKey (3.28s)
=== RUN   TestMVCCItemUpdateSize
--- PASS: TestMVCCItemUpdateSize (0.23s)
=== RUN   TestEvictionStats
--- PASS: TestEvictionStats (0.40s)
=== RUN   TestReaderCacheStats
--- PASS: TestReaderCacheStats (1.14s)
=== RUN   TestInvalidSnapshot
--- PASS: TestInvalidSnapshot (0.87s)
=== RUN   TestEmptyKeyInsert
--- PASS: TestEmptyKeyInsert (0.03s)
=== RUN   TestMVCCRecoveryPointError
--- PASS: TestMVCCRecoveryPointError (0.05s)
=== RUN   TestMVCCReaderPurgeSequential
--- PASS: TestMVCCReaderPurgeSequential (0.21s)
=== RUN   TestMVCCReaderNoPurge
--- PASS: TestMVCCReaderNoPurge (0.19s)
=== RUN   TestMVCCReaderPurgeAfterUpdate
--- PASS: TestMVCCReaderPurgeAfterUpdate (0.21s)
=== RUN   TestMVCCReaderPurgeAfterRollback
--- PASS: TestMVCCReaderPurgeAfterRollback (0.22s)
=== RUN   TestMVCCReaderPurgeSimple
--- PASS: TestMVCCReaderPurgeSimple (0.06s)
=== RUN   TestMVCCReaderPurgeRandom
--- PASS: TestMVCCReaderPurgeRandom (0.20s)
=== RUN   TestMVCCReaderPurgePageFlag
--- PASS: TestMVCCReaderPurgePageFlag (0.10s)
=== RUN   TestMVCCPurgeRatioWithRollback
--- PASS: TestMVCCPurgeRatioWithRollback (15.81s)
=== RUN   TestComputeItemsCountMVCCWithRollbackI
--- PASS: TestComputeItemsCountMVCCWithRollbackI (0.11s)
=== RUN   TestComputeItemsCountMVCCWithRollbackII
--- PASS: TestComputeItemsCountMVCCWithRollbackII (0.05s)
=== RUN   TestComputeItemsCountMVCCWithRollbackIII
--- PASS: TestComputeItemsCountMVCCWithRollbackIII (0.08s)
=== RUN   TestComputeItemsCountMVCCWithRollbackIV
--- PASS: TestComputeItemsCountMVCCWithRollbackIV (0.07s)
=== RUN   TestMVCCPurgedRecordsWithCompactFullMarshalAndCascadedEmptyPagesMerge
--- PASS: TestMVCCPurgedRecordsWithCompactFullMarshalAndCascadedEmptyPagesMerge (1.72s)
=== RUN   TestMaxDeltaChainLenWithCascadedEmptyPagesMerge
--- PASS: TestMaxDeltaChainLenWithCascadedEmptyPagesMerge (1.42s)
=== RUN   TestAutoHoleCleaner
--- PASS: TestAutoHoleCleaner (34.49s)
=== RUN   TestAutoHoleCleaner5Indexes
--- PASS: TestAutoHoleCleaner5Indexes (197.48s)
=== RUN   TestIteratorReportedHoleRegionBoundary
--- PASS: TestIteratorReportedHoleRegionBoundary (0.13s)
=== RUN   TestFullRangeHoleScans
--- PASS: TestFullRangeHoleScans (0.33s)
=== RUN   TestOverlappingRangeHoleScans
--- PASS: TestOverlappingRangeHoleScans (0.33s)
=== RUN   TestMVCCIteratorSMRRefreshOnHoleScan
--- PASS: TestMVCCIteratorSMRRefreshOnHoleScan (7.66s)
=== RUN   TestAutoHoleCleanerWithRecovery
--- PASS: TestAutoHoleCleanerWithRecovery (2.93s)
=== RUN   TestPageMergeCorrectness2
--- PASS: TestPageMergeCorrectness2 (0.00s)
=== RUN   TestPageMergeCorrectness
--- PASS: TestPageMergeCorrectness (0.00s)
=== RUN   TestPageMarshalFull
--- PASS: TestPageMarshalFull (0.01s)
=== RUN   TestPageMergeMarshal
--- PASS: TestPageMergeMarshal (0.00s)
=== RUN   TestPageOperations
--- PASS: TestPageOperations (0.03s)
=== RUN   TestPageIterator
--- PASS: TestPageIterator (0.00s)
=== RUN   TestPageMarshal
--- PASS: TestPageMarshal (0.02s)
=== RUN   TestPageMergeCorrectness3
--- PASS: TestPageMergeCorrectness3 (0.00s)
=== RUN   TestPageHasDataRecords
--- PASS: TestPageHasDataRecords (0.00s)
=== RUN   TestPlasmaPageVisitor
--- PASS: TestPlasmaPageVisitor (4.55s)
=== RUN   TestPageRingVisitor
--- PASS: TestPageRingVisitor (4.36s)
=== RUN   TestPauseVisitorOnLowMemory
--- PASS: TestPauseVisitorOnLowMemory (1.10s)
=== RUN   TestCheckpointRecovery
--- PASS: TestCheckpointRecovery (7.82s)
=== RUN   TestPageCorruption
--- PASS: TestPageCorruption (0.83s)
=== RUN   TestCheckPointRecoveryFollowCleaning
--- PASS: TestCheckPointRecoveryFollowCleaning (0.09s)
=== RUN   TestFragmentationWithZeroItems
--- PASS: TestFragmentationWithZeroItems (1.13s)
=== RUN   TestEvictOnPersist
--- PASS: TestEvictOnPersist (0.16s)
=== RUN   TestPlasmaSimple
--- PASS: TestPlasmaSimple (13.28s)
=== RUN   TestPlasmaCompression
--- PASS: TestPlasmaCompression (0.04s)
=== RUN   TestPlasmaCompressionWrong
--- PASS: TestPlasmaCompressionWrong (0.03s)
=== RUN   TestPlasmaInMemCompression
--- PASS: TestPlasmaInMemCompression (0.03s)
=== RUN   TestPlasmaInMemCompressionZstd
--- PASS: TestPlasmaInMemCompressionZstd (0.04s)
=== RUN   TestPlasmaInMemCompressionWrong
--- PASS: TestPlasmaInMemCompressionWrong (0.02s)
=== RUN   TestSpoiledConfig
--- PASS: TestSpoiledConfig (0.03s)
=== RUN   TestPlasmaErrorFile
--- PASS: TestPlasmaErrorFile (0.02s)
=== RUN   TestPlasmaPersistor
--- PASS: TestPlasmaPersistor (9.71s)
=== RUN   TestPlasmaEvictionLSSDataSize
--- PASS: TestPlasmaEvictionLSSDataSize (0.03s)
=== RUN   TestPlasmaEviction
--- PASS: TestPlasmaEviction (29.54s)
=== RUN   TestConcurrDelOps
--- PASS: TestConcurrDelOps (71.30s)
=== RUN   TestPlasmaDataSize
--- PASS: TestPlasmaDataSize (0.06s)
=== RUN   TestLargeBasePage
--- PASS: TestLargeBasePage (60.99s)
=== RUN   TestLargeValue
--- PASS: TestLargeValue (103.67s)
=== RUN   TestPlasmaTooLargeKey
--- PASS: TestPlasmaTooLargeKey (3.25s)
=== RUN   TestEvictAfterMerge
--- PASS: TestEvictAfterMerge (0.12s)
=== RUN   TestEvictDirty
--- PASS: TestEvictDirty (0.15s)
=== RUN   TestEvictUnderQuota
--- PASS: TestEvictUnderQuota (60.11s)
=== RUN   TestEvictSetting
--- PASS: TestEvictSetting (1.17s)
=== RUN   TestBasePageAfterCompaction
--- PASS: TestBasePageAfterCompaction (0.13s)
=== RUN   TestSwapout
--- PASS: TestSwapout (0.04s)
=== RUN   TestSwapoutSplitBasePage
--- PASS: TestSwapoutSplitBasePage (0.03s)
=== RUN   TestCompactFullMarshal
--- PASS: TestCompactFullMarshal (0.05s)
=== RUN   TestPageStats
--- PASS: TestPageStats (2.12s)
=== RUN   TestPageStatsTinyIndex
--- PASS: TestPageStatsTinyIndex (0.14s)
=== RUN   TestPageStatsTinyIndexOnRecovery
--- PASS: TestPageStatsTinyIndexOnRecovery (0.11s)
=== RUN   TestPageStatsTinyIndexOnSplitAndMerge
--- PASS: TestPageStatsTinyIndexOnSplitAndMerge (0.06s)
=== RUN   TestPageCompress
--- PASS: TestPageCompress (0.04s)
=== RUN   TestPageCompressSwapin
--- PASS: TestPageCompressSwapin (0.06s)
=== RUN   TestPageCompressStats
--- PASS: TestPageCompressStats (0.68s)
=== RUN   TestPageDecompressStats
--- PASS: TestPageDecompressStats (0.04s)
=== RUN   TestSharedDedicatedDataSize
--- PASS: TestSharedDedicatedDataSize (3.59s)
=== RUN   TestLastRpSns
--- PASS: TestLastRpSns (0.05s)
=== RUN   TestPageCompressState
--- PASS: TestPageCompressState (0.03s)
=== RUN   TestPageCompressDuringBurst
--- PASS: TestPageCompressDuringBurst (0.06s)
=== RUN   TestPageDontDecompressDuringScan
--- PASS: TestPageDontDecompressDuringScan (0.12s)
=== RUN   TestPageDecompressAndCompressSwapin
--- PASS: TestPageDecompressAndCompressSwapin (2.06s)
=== RUN   TestPageCompressibleStat
--- PASS: TestPageCompressibleStat (0.49s)
=== RUN   TestPageCompressibleStatRecovery
--- PASS: TestPageCompressibleStatRecovery (0.18s)
=== RUN   TestPageCompressBeforeEvictPercent
--- PASS: TestPageCompressBeforeEvictPercent (0.72s)
=== RUN   TestPageCompressDecompressAfterDisable
--- PASS: TestPageCompressDecompressAfterDisable (0.72s)
=== RUN   TestWrittenDataSz
--- PASS: TestWrittenDataSz (3.44s)
=== RUN   TestWrittenDataSzAfterRecoveryCleaning
--- PASS: TestWrittenDataSzAfterRecoveryCleaning (3.81s)
=== RUN   TestWrittenHdrSz
--- PASS: TestWrittenHdrSz (3.06s)
=== RUN   TestPersistConfigUpgrade
--- PASS: TestPersistConfigUpgrade (0.00s)
=== RUN   TestLSSSegmentSize
--- PASS: TestLSSSegmentSize (0.23s)
=== RUN   TestPlasmaFlushBufferSzCfg
--- PASS: TestPlasmaFlushBufferSzCfg (0.11s)
=== RUN   TestCompactionCountwithCompactFullMarshal
--- PASS: TestCompactionCountwithCompactFullMarshal (0.10s)
=== RUN   TestCompactionCountwithCompactFullMarshalSMO
--- PASS: TestCompactionCountwithCompactFullMarshalSMO (0.03s)
=== RUN   TestPageHasDataRecordsOnCompactFullMarshal
--- PASS: TestPageHasDataRecordsOnCompactFullMarshal (0.07s)
=== RUN   TestPauseReaderOnLowMemory
--- PASS: TestPauseReaderOnLowMemory (1.05s)
=== RUN   TestRecoveryCleanerFragRatio
--- PASS: TestRecoveryCleanerFragRatio (214.58s)
=== RUN   TestRecoveryCleanerRelocation
--- PASS: TestRecoveryCleanerRelocation (214.57s)
=== RUN   TestRecoveryCleanerDataSize
--- PASS: TestRecoveryCleanerDataSize (219.54s)
=== RUN   TestRecoveryCleanerDeleteInstance
--- PASS: TestRecoveryCleanerDeleteInstance (434.56s)
=== RUN   TestRecoveryCleanerRecoveryPoint
--- PASS: TestRecoveryCleanerRecoveryPoint (27.02s)
=== RUN   TestRecoveryCleanerCorruptInstance
--- PASS: TestRecoveryCleanerCorruptInstance (0.16s)
=== RUN   TestRecoveryCleanerAhead
--- PASS: TestRecoveryCleanerAhead (4.22s)
=== RUN   TestRecoveryCleanerAheadAfterRecovery
--- PASS: TestRecoveryCleanerAheadAfterRecovery (2.19s)
=== RUN   TestCleaningUncommittedData
--- PASS: TestCleaningUncommittedData (0.04s)
=== RUN   TestPlasmaRecoverySimple
--- PASS: TestPlasmaRecoverySimple (0.05s)
=== RUN   TestPlasmaRecovery
--- PASS: TestPlasmaRecovery (27.92s)
=== RUN   TestShardRecoveryShared
--- PASS: TestShardRecoveryShared (10.58s)
=== RUN   TestShardRecoveryRecoveryLogAhead
--- PASS: TestShardRecoveryRecoveryLogAhead (32.63s)
=== RUN   TestShardRecoveryDataLogAhead
--- PASS: TestShardRecoveryDataLogAhead (21.85s)
=== RUN   TestShardRecoveryDestroyBlksInDataLog
--- PASS: TestShardRecoveryDestroyBlksInDataLog (9.73s)
=== RUN   TestShardRecoveryDestroyBlksInRecoveryLog
--- PASS: TestShardRecoveryDestroyBlksInRecoveryLog (10.23s)
=== RUN   TestShardRecoveryDestroyBlksInBothLog
--- PASS: TestShardRecoveryDestroyBlksInBothLog (9.58s)
=== RUN   TestShardRecoveryRecoveryLogCorruption
--- PASS: TestShardRecoveryRecoveryLogCorruption (9.23s)
=== RUN   TestShardRecoveryDataLogCorruption
--- PASS: TestShardRecoveryDataLogCorruption (10.29s)
=== RUN   TestShardRecoverySharedNoRP
--- PASS: TestShardRecoverySharedNoRP (10.31s)
=== RUN   TestShardRecoveryNotEnoughMem
--- PASS: TestShardRecoveryNotEnoughMem (33.51s)
=== RUN   TestShardRecoveryCleanup
--- PASS: TestShardRecoveryCleanup (0.45s)
=== RUN   TestShardRecoveryRebuildSharedLog
--- PASS: TestShardRecoveryRebuildSharedLog (1.22s)
=== RUN   TestShardRecoveryUpgradeWithCheckpoint
--- PASS: TestShardRecoveryUpgradeWithCheckpoint (0.44s)
=== RUN   TestShardRecoveryUpgradeWithLogReplay
--- PASS: TestShardRecoveryUpgradeWithLogReplay (0.43s)
=== RUN   TestShardRecoveryRebuildAfterError
--- PASS: TestShardRecoveryRebuildAfterError (1.30s)
=== RUN   TestShardRecoveryRebuildAfterConcurrentDelete
--- PASS: TestShardRecoveryRebuildAfterConcurrentDelete (1.73s)
=== RUN   TestShardRecoveryAfterDeleteInstance
--- PASS: TestShardRecoveryAfterDeleteInstance (0.10s)
=== RUN   TestShardRecoveryDestroyShard
--- PASS: TestShardRecoveryDestroyShard (0.22s)
=== RUN   TestHeaderRepair
--- PASS: TestHeaderRepair (0.06s)
=== RUN   TestCheckpointWithWriter
--- PASS: TestCheckpointWithWriter (3.50s)
=== RUN   TestPlasmaRecoveryWithRepairFullReplay
--- PASS: TestPlasmaRecoveryWithRepairFullReplay (22.53s)
=== RUN   TestPlasmaRecoveryWithInsertRepairCheckpoint
--- PASS: TestPlasmaRecoveryWithInsertRepairCheckpoint (33.80s)
=== RUN   TestPlasmaRecoveryWithDeleteRepairCheckpoint
--- PASS: TestPlasmaRecoveryWithDeleteRepairCheckpoint (12.55s)
=== RUN   TestShardRecoverySharedFullReplayOnError
--- PASS: TestShardRecoverySharedFullReplayOnError (12.21s)
=== RUN   TestShardRecoveryDedicatedFullReplayOnError
--- PASS: TestShardRecoveryDedicatedFullReplayOnError (12.03s)
=== RUN   TestShardRecoverySharedFullReplayOnErrorWithRepair
--- PASS: TestShardRecoverySharedFullReplayOnErrorWithRepair (14.45s)
=== RUN   TestGlobalWorkContextForRecovery
--- PASS: TestGlobalWorkContextForRecovery (0.33s)
=== RUN   TestSkipLogSimple
--- PASS: TestSkipLogSimple (0.00s)
=== RUN   TestSkipLogLoadStore
--- PASS: TestSkipLogLoadStore (0.00s)
=== RUN   TestShardMetadata
--- PASS: TestShardMetadata (0.04s)
=== RUN   TestPlasmaId
--- PASS: TestPlasmaId (0.02s)
=== RUN   TestShardPersistence
--- PASS: TestShardPersistence (0.20s)
=== RUN   TestShardDestroy
--- PASS: TestShardDestroy (0.05s)
=== RUN   TestShardClose
--- PASS: TestShardClose (5.04s)
=== RUN   TestShardMgrRecovery
--- PASS: TestShardMgrRecovery (0.09s)
=== RUN   TestShardDeadData
--- PASS: TestShardDeadData (0.19s)
=== RUN   TestShardConfigUpdate
--- PASS: TestShardConfigUpdate (0.07s)
=== RUN   TestShardSelection
--- PASS: TestShardSelection (0.10s)
=== RUN   TestShardWriteAmp
--- PASS: TestShardWriteAmp (10.12s)
=== RUN   TestShardStats
--- PASS: TestShardStats (0.19s)
=== RUN   TestShardMultipleWriters
--- PASS: TestShardMultipleWriters (0.15s)
=== RUN   TestShardDestroyMultiple
--- PASS: TestShardDestroyMultiple (0.13s)
=== RUN   TestShardBackupCorrupted
--- PASS: TestShardBackupCorrupted (0.10s)
=== RUN   TestShardBackupCorruptedShare
--- PASS: TestShardBackupCorruptedShare (0.06s)
=== RUN   TestShardCorruption
--- PASS: TestShardCorruption (0.06s)
=== RUN   TestShardCorruptionAddInstance
--- PASS: TestShardCorruptionAddInstance (0.13s)
=== RUN   TestShardCreateError
--- PASS: TestShardCreateError (0.22s)
=== RUN   TestShardNumInsts
--- PASS: TestShardNumInsts (1.32s)
=== RUN   TestShardInstanceGroup
--- PASS: TestShardInstanceGroup (0.10s)
=== RUN   TestShardLeak
--- PASS: TestShardLeak (1.74s)
=== RUN   TestShardMemLeak
--- PASS: TestShardMemLeak (0.74s)
=== RUN   TestShardFind
--- PASS: TestShardFind (0.21s)
=== RUN   TestShardFileOpenDescCount
--- PASS: TestShardFileOpenDescCount (58.69s)
=== RUN   TestSMRSimple
--- PASS: TestSMRSimple (1.10s)
=== RUN   TestSMRConcurrent
--- PASS: TestSMRConcurrent (47.81s)
=== RUN   TestSMRComplex
--- PASS: TestSMRComplex (109.17s)
=== RUN   TestDGMWithCASConflicts
--- PASS: TestDGMWithCASConflicts (32.58s)
=== RUN   TestMaxSMRPendingMem
--- PASS: TestMaxSMRPendingMem (0.02s)
=== RUN   TestStatsLogger
--- PASS: TestStatsLogger (20.29s)
=== RUN   TestStatsSamplePercentile
--- PASS: TestStatsSamplePercentile (0.02s)
=== RUN   TestPlasmaSwapper
--- PASS: TestPlasmaSwapper (21.40s)
=== RUN   TestPlasmaAutoSwapper
--- PASS: TestPlasmaAutoSwapper (85.22s)
=== RUN   TestSwapperAddInstance
--- PASS: TestSwapperAddInstance (4.18s)
=== RUN   TestSwapperRemoveInstance
--- PASS: TestSwapperRemoveInstance (4.28s)
=== RUN   TestSwapperJoinContext
--- PASS: TestSwapperJoinContext (4.74s)
=== RUN   TestSwapperSplitContext
--- PASS: TestSwapperSplitContext (4.68s)
=== RUN   TestSwapperGlobalClock
--- PASS: TestSwapperGlobalClock (29.69s)
=== RUN   TestSwapperConflict
--- PASS: TestSwapperConflict (2.82s)
=== RUN   TestSwapperRemoveInstanceWait
--- PASS: TestSwapperRemoveInstanceWait (3.41s)
=== RUN   TestSwapperStats
--- PASS: TestSwapperStats (0.98s)
=== RUN   TestSwapperSweepInterval
--- PASS: TestSwapperSweepInterval (0.44s)
=== RUN   TestSweepCompress
--- PASS: TestSweepCompress (0.05s)
=== RUN   TestTenantShardAssignment
--- PASS: TestTenantShardAssignment (2.95s)
=== RUN   TestTenantShardAssignmentServerless
--- PASS: TestTenantShardAssignmentServerless (11.77s)
=== RUN   TestTenantShardAssignmentDedicated
--- PASS: TestTenantShardAssignmentDedicated (1.51s)
=== RUN   TestTenantShardAssignmentDedicatedMainBackIndexes
--- PASS: TestTenantShardAssignmentDedicatedMainBackIndexes (0.09s)
=== RUN   TestTenantShardRecovery
--- PASS: TestTenantShardRecovery (2.86s)
=== RUN   TestTenantMemUsed
--- PASS: TestTenantMemUsed (2.71s)
=== RUN   TestTenantSwitchController
--- PASS: TestTenantSwitchController (0.09s)
=== RUN   TestTenantAssignMandatoryQuota
--- PASS: TestTenantAssignMandatoryQuota (0.41s)
=== RUN   TestTenantMutationQuota
--- PASS: TestTenantMutationQuota (0.04s)
=== RUN   TestTenantInitialBuildQuota
--- PASS: TestTenantInitialBuildQuota (0.06s)
=== RUN   TestTenantInitialBuildNonDGM
--- PASS: TestTenantInitialBuildNonDGM (1.93s)
=== RUN   TestTenantInitialBuildDGM
--- PASS: TestTenantInitialBuildDGM (1.92s)
=== RUN   TestTenantInitialBuildZeroResident
--- PASS: TestTenantInitialBuildZeroResident (1.89s)
=== RUN   TestTenantIncrementalBuildDGM
--- PASS: TestTenantIncrementalBuildDGM (2.98s)
=== RUN   TestTenantInitialBuildTwoTenants
--- PASS: TestTenantInitialBuildTwoTenants (2.92s)
=== RUN   TestTenantInitialBuildTwoControllers
--- PASS: TestTenantInitialBuildTwoControllers (2.95s)
=== RUN   TestTenantIncrementalBuildTwoIndexes
--- PASS: TestTenantIncrementalBuildTwoIndexes (0.35s)
=== RUN   TestTenantIncrementalBuildConcurrent
--- PASS: TestTenantIncrementalBuildConcurrent (2.74s)
=== RUN   TestTenantDecrementGlobalQuota
--- PASS: TestTenantDecrementGlobalQuota (2.21s)
=== RUN   TestTenantInitialBuildNotEnoughQuota
--- PASS: TestTenantInitialBuildNotEnoughQuota (2.99s)
=== RUN   TestTenantRecoveryResidentRatioHeaderReplay
--- PASS: TestTenantRecoveryResidentRatioHeaderReplay (0.13s)
=== RUN   TestTenantRecoveryResidentRatioDataReplay
--- PASS: TestTenantRecoveryResidentRatioDataReplay (0.22s)
=== RUN   TestTenantRecoveryController
--- PASS: TestTenantRecoveryController (1.60s)
=== RUN   TestTenantRecoveryQuotaWithLastCheckpoint
--- PASS: TestTenantRecoveryQuotaWithLastCheckpoint (0.76s)
=== RUN   TestTenantRecoveryQuotaZeroResidentWithLastCheckpoint
--- PASS: TestTenantRecoveryQuotaZeroResidentWithLastCheckpoint (3.17s)
=== RUN   TestTenantRecoveryQuotaWithFormula
--- PASS: TestTenantRecoveryQuotaWithFormula (3.00s)
=== RUN   TestTenantRecoveryQuotaWithDataReplay
--- PASS: TestTenantRecoveryQuotaWithDataReplay (6.69s)
=== RUN   TestTenantRecoveryEvictionNoCheckpoint
--- PASS: TestTenantRecoveryEvictionNoCheckpoint (14.74s)
=== RUN   TestTenantRecoveryEvictionHeaderReplay
--- PASS: TestTenantRecoveryEvictionHeaderReplay (8.82s)
=== RUN   TestTenantRecoveryEvictionDataReplaySequential
--- PASS: TestTenantRecoveryEvictionDataReplaySequential (8.60s)
=== RUN   TestTenantRecoveryEvictionDataReplayInterleaved
--- PASS: TestTenantRecoveryEvictionDataReplayInterleaved (9.81s)
=== RUN   TestTenantRecoveryEvictionDataReplayNoCheckpoint
--- PASS: TestTenantRecoveryEvictionDataReplayNoCheckpoint (9.64s)
=== RUN   TestTenantRecoveryEvictionDataReplaySingle
--- PASS: TestTenantRecoveryEvictionDataReplaySingle (4.30s)
=== RUN   TestTenantRecoveryLastCheckpoint
--- PASS: TestTenantRecoveryLastCheckpoint (5.36s)
=== RUN   TestTenantRecoveryRequestQuota
--- PASS: TestTenantRecoveryRequestQuota (2.53s)
=== RUN   TestTenantAssignDiscretionaryQuota
--- PASS: TestTenantAssignDiscretionaryQuota (0.41s)
=== RUN   TestSCtx
--- PASS: TestSCtx (16.68s)
=== RUN   TestWCtxGeneric
--- PASS: TestWCtxGeneric (45.72s)
=== RUN   TestWCtxWriter
--- PASS: TestWCtxWriter (46.24s)
=== RUN   TestSCtxTrimWithReader
--- PASS: TestSCtxTrimWithReader (0.03s)
=== RUN   TestSCtxTrimWithWriter
--- PASS: TestSCtxTrimWithWriter (0.03s)
=== RUN   TestSCtxTrimEmpty
--- PASS: TestSCtxTrimEmpty (0.02s)
=== RUN   TestWCtxTrimWithReader
--- PASS: TestWCtxTrimWithReader (0.03s)
=== RUN   TestWCtxTrimWithWriter
--- PASS: TestWCtxTrimWithWriter (0.03s)
--- PASS: TestExtrasN1 (0.00s)
--- PASS: TestExtrasN2 (0.00s)
--- PASS: TestExtrasN3 (0.00s)
PASS
ok  	github.com/couchbase/plasma	3787.595s
=== RUN   TestInteger
--- PASS: TestInteger (0.00s)
=== RUN   TestSmallDecimal
--- PASS: TestSmallDecimal (0.00s)
=== RUN   TestLargeDecimal
--- PASS: TestLargeDecimal (0.00s)
=== RUN   TestFloat
--- PASS: TestFloat (0.00s)
=== RUN   TestSuffixCoding
--- PASS: TestSuffixCoding (0.00s)
=== RUN   TestCodecLength
--- PASS: TestCodecLength (0.00s)
=== RUN   TestSpecialString
--- PASS: TestSpecialString (0.00s)
=== RUN   TestCodecNoLength
--- PASS: TestCodecNoLength (0.00s)
=== RUN   TestCodecJSON
--- PASS: TestCodecJSON (0.00s)
=== RUN   TestReference
--- PASS: TestReference (0.00s)
=== RUN   TestN1QLEncode
--- PASS: TestN1QLEncode (0.00s)
=== RUN   TestArrayExplodeJoin
--- PASS: TestArrayExplodeJoin (0.00s)
=== RUN   TestN1QLDecode
--- PASS: TestN1QLDecode (0.00s)
=== RUN   TestN1QLDecode2
--- PASS: TestN1QLDecode2 (0.00s)
=== RUN   TestArrayExplodeJoin2
--- PASS: TestArrayExplodeJoin2 (0.00s)
=== RUN   TestMB28956
--- PASS: TestMB28956 (0.00s)
=== RUN   TestFixEncodedInt
--- PASS: TestFixEncodedInt (0.00s)
=== RUN   TestN1QLDecodeLargeInt64
--- PASS: TestN1QLDecodeLargeInt64 (0.00s)
=== RUN   TestMixedModeFixEncodedInt
TESTING [4111686018427387900, -8223372036854775808, 822337203685477618] 
PASS 
TESTING [0] 
PASS 
TESTING [0.0] 
PASS 
TESTING [0.0000] 
PASS 
TESTING [0.0000000] 
PASS 
TESTING [-0] 
PASS 
TESTING [-0.0] 
PASS 
TESTING [-0.0000] 
PASS 
TESTING [-0.0000000] 
PASS 
TESTING [1] 
PASS 
TESTING [20] 
PASS 
TESTING [3456] 
PASS 
TESTING [7645000] 
PASS 
TESTING [9223372036854775807] 
PASS 
TESTING [9223372036854775806] 
PASS 
TESTING [9223372036854775808] 
PASS 
TESTING [92233720368547758071234000] 
PASS 
TESTING [92233720368547758071234987437653] 
PASS 
TESTING [12300000000000000000000000000000056] 
PASS 
TESTING [12300000000000000000000000000000000] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [12300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [210690] 
PASS 
TESTING [90000] 
PASS 
TESTING [123000000] 
PASS 
TESTING [3.60e2] 
PASS 
TESTING [36e2] 
PASS 
TESTING [1.9999999999e10] 
PASS 
TESTING [1.99999e10] 
PASS 
TESTING [1.99999e5] 
PASS 
TESTING [0.00000000000012e15] 
PASS 
TESTING [7.64507352e8] 
PASS 
TESTING [9.2233720368547758071234987437653e31] 
PASS 
TESTING [2650e-1] 
PASS 
TESTING [26500e-1] 
PASS 
TESTING [-1] 
PASS 
TESTING [-20] 
PASS 
TESTING [-3456] 
PASS 
TESTING [-7645000] 
PASS 
TESTING [-9223372036854775808] 
PASS 
TESTING [-9223372036854775807] 
PASS 
TESTING [-9223372036854775806] 
PASS 
TESTING [-9223372036854775809] 
PASS 
TESTING [-92233720368547758071234000] 
PASS 
TESTING [-92233720368547758071234987437653] 
PASS 
TESTING [-12300000000000000000000000000000056] 
PASS 
TESTING [-12300000000000000000000000000000000] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [-210690] 
PASS 
TESTING [-90000] 
PASS 
TESTING [-123000000] 
PASS 
TESTING [-3.60e2] 
PASS 
TESTING [-36e2] 
PASS 
TESTING [-1.9999999999e10] 
PASS 
TESTING [-1.99999e10] 
PASS 
TESTING [-1.99999e5] 
PASS 
TESTING [-0.00000000000012e15] 
PASS 
TESTING [-2650e-1] 
PASS 
TESTING [-26500e-1] 
PASS 
TESTING [0.03] 
PASS 
TESTING [198.60] 
PASS 
TESTING [2000045.178] 
PASS 
TESTING [1.7976931348623157e+308] 
PASS 
TESTING [0.000000000000000000890] 
PASS 
TESTING [257953786.9864236576] 
PASS 
TESTING [257953786.9864236576e8] 
PASS 
TESTING [36.912e3] 
PASS 
TESTING [2761.67e0] 
PASS 
TESTING [2761.67e00] 
PASS 
TESTING [2761.67e000] 
PASS 
TESTING [7676546.67e-3] 
PASS 
TESTING [-0.03] 
PASS 
TESTING [-198.60] 
PASS 
TESTING [-2000045.178] 
PASS 
TESTING [-1.7976931348623157e+308] 
PASS 
TESTING [-0.000000000000000000890] 
PASS 
TESTING [-257953786.9864236576] 
PASS 
TESTING [-257953786.9864236576e8] 
PASS 
TESTING [-36.912e3] 
PASS 
TESTING [-2761.67e0] 
PASS 
TESTING [-2761.67e00] 
PASS 
TESTING [-2761.67e000] 
PASS 
TESTING [-7676546.67e-3] 
PASS 
--- PASS: TestMixedModeFixEncodedInt (0.01s)
=== RUN   TestCodecDesc
--- PASS: TestCodecDesc (0.00s)
=== RUN   TestCodecDescPropLen
--- PASS: TestCodecDescPropLen (0.00s)
=== RUN   TestCodecDescSplChar
--- PASS: TestCodecDescSplChar (0.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/collatejson	0.035s
Initializing write barrier = 8000
=== RUN   TestForestDBIterator
2022-09-02T00:53:02.651+05:30 [INFO][FDB] Forestdb blockcache size 134217728 initialized in 8299 us

2022-09-02T00:53:02.652+05:30 [INFO][FDB] Forestdb opened database file test
2022-09-02T00:53:02.656+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIterator (0.02s)
=== RUN   TestForestDBIteratorSeek
2022-09-02T00:53:02.657+05:30 [INFO][FDB] Forestdb opened database file test
2022-09-02T00:53:02.661+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIteratorSeek (0.00s)
=== RUN   TestPrimaryIndexEntry
--- PASS: TestPrimaryIndexEntry (0.00s)
=== RUN   TestSecondaryIndexEntry
--- PASS: TestSecondaryIndexEntry (0.00s)
=== RUN   TestPrimaryIndexEntryMatch
--- PASS: TestPrimaryIndexEntryMatch (0.00s)
=== RUN   TestSecondaryIndexEntryMatch
--- PASS: TestSecondaryIndexEntryMatch (0.00s)
=== RUN   TestLongDocIdEntry
--- PASS: TestLongDocIdEntry (0.00s)
=== RUN   TestMemDBInsertionPerf
Maximum number of file descriptors = 200000
Set IO Concurrency: 7200
Initial build: 10000000 items took 1m57.611391703s -> 85025.77730950294 items/s
Incr build: 10000000 items took 45.124311463s -> 221610.02962227073 items/s
Main Index: {
"node_count":             12972429,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       1,
"next_pointers_per_node": 1.3333,
"memory_used":            1222191799,
"node_allocs":            12972429,
"node_frees":             0,
"level_node_distribution":{
"level0": 9729450,
"level1": 2432539,
"level2": 607737,
"level3": 152044,
"level4": 37959,
"level5": 9554,
"level6": 2356,
"level7": 577,
"level8": 168,
"level9": 29,
"level10": 13,
"level11": 2,
"level12": 1,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Back Index 0 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 1 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 2 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 3 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 4 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 5 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 6 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 7 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 8 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 9 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 10 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 11 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 12 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 13 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 14 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 15 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
--- PASS: TestMemDBInsertionPerf (162.74s)
=== RUN   TestBasicsA
--- PASS: TestBasicsA (0.00s)
=== RUN   TestSizeA
--- PASS: TestSizeA (0.00s)
=== RUN   TestSizeWithFreelistA
--- PASS: TestSizeWithFreelistA (0.00s)
=== RUN   TestDequeueUptoSeqnoA
--- PASS: TestDequeueUptoSeqnoA (0.10s)
=== RUN   TestDequeueA
--- PASS: TestDequeueA (1.21s)
=== RUN   TestMultipleVbucketsA
--- PASS: TestMultipleVbucketsA (0.00s)
=== RUN   TestDequeueUptoFreelistA
--- PASS: TestDequeueUptoFreelistA (0.00s)
=== RUN   TestDequeueUptoFreelistMultVbA
--- PASS: TestDequeueUptoFreelistMultVbA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA
--- PASS: TestConcurrentEnqueueDequeueA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA1
--- PASS: TestConcurrentEnqueueDequeueA1 (10.01s)
=== RUN   TestEnqueueAppCh
--- PASS: TestEnqueueAppCh (2.00s)
=== RUN   TestDequeueN
--- PASS: TestDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN
--- PASS: TestConcurrentEnqueueDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN1
--- PASS: TestConcurrentEnqueueDequeueN1 (10.01s)
PASS
ok  	github.com/couchbase/indexing/secondary/indexer	186.749s
=== RUN   TestConnPoolBasicSanity
2022-09-02T00:56:12.537+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 3 overflow 6 low WM 3 relConn batch size 1 ...
2022-09-02T00:56:12.745+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T00:56:13.539+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnPoolBasicSanity (5.00s)
=== RUN   TestConnRelease
2022-09-02T00:56:17.541+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Waiting for connections to get released
Waiting for more connections to get released
Waiting for even more connections to get released
2022-09-02T00:56:57.300+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T00:56:57.559+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnRelease (43.76s)
=== RUN   TestLongevity
2022-09-02T00:57:01.302+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Releasing 1 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 4 conns.
Getting 1 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 3 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 2 conns.
Getting 1 conns.
Releasing 0 conns.
Getting 1 conns.
Releasing 3 conns.
Getting 3 conns.
Releasing 2 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
2022-09-02T00:57:39.703+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T00:57:40.321+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLongevity (42.40s)
=== RUN   TestSustainedHighConns
2022-09-02T00:57:43.704+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Allocating 16 Connections
cp.curActConns = 0
Returning 3 Connections
cp.curActConns = 11
Returning 2 Connections
cp.curActConns = 11
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 14
Returning 1 Connections
Allocating 12 Connections
cp.curActConns = 23
Returning 1 Connections
cp.curActConns = 24
Allocating 10 Connections
Returning 1 Connections
cp.curActConns = 33
Returning 3 Connections
Allocating 15 Connections
cp.curActConns = 34
Returning 4 Connections
cp.curActConns = 41
Returning 3 Connections
Allocating 8 Connections
cp.curActConns = 45
Returning 2 Connections
cp.curActConns = 44
Allocating 3 Connections
Returning 4 Connections
cp.curActConns = 43
Allocating 9 Connections
Returning 3 Connections
cp.curActConns = 49
Returning 2 Connections
Allocating 21 Connections
cp.curActConns = 57
Returning 4 Connections
cp.curActConns = 64
Returning 4 Connections
Allocating 0 Connections
cp.curActConns = 60
Returning 0 Connections
Allocating 13 Connections
cp.curActConns = 69
Returning 3 Connections
cp.curActConns = 70
Allocating 3 Connections
Returning 0 Connections
cp.curActConns = 73
Returning 1 Connections
Allocating 10 Connections
cp.curActConns = 82
Returning 0 Connections
Allocating 6 Connections
cp.curActConns = 82
Returning 1 Connections
cp.curActConns = 87
Returning 3 Connections
Allocating 11 Connections
cp.curActConns = 94
Returning 2 Connections
cp.curActConns = 93
Allocating 8 Connections
Returning 1 Connections
cp.curActConns = 100
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 98
Returning 2 Connections
Allocating 18 Connections
cp.curActConns = 106
Returning 2 Connections
cp.curActConns = 112
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 110
Returning 3 Connections
Allocating 21 Connections
cp.curActConns = 117
Returning 0 Connections
cp.curActConns = 128
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 128
Returning 2 Connections
Allocating 8 Connections
cp.curActConns = 129
Returning 1 Connections
cp.curActConns = 133
Returning 4 Connections
Allocating 8 Connections
cp.curActConns = 137
Returning 3 Connections
Allocating 16 Connections
cp.curActConns = 141
Returning 2 Connections
cp.curActConns = 148
Returning 3 Connections
Allocating 11 Connections
cp.curActConns = 152
Returning 1 Connections
cp.curActConns = 155
Returning 2 Connections
Allocating 15 Connections
cp.curActConns = 163
Returning 3 Connections
cp.curActConns = 165
Returning 2 Connections
Allocating 18 Connections
cp.curActConns = 174
Returning 0 Connections
cp.curActConns = 181
Returning 3 Connections
Allocating 0 Connections
cp.curActConns = 178
Returning 1 Connections
Allocating 15 Connections
cp.curActConns = 186
Returning 2 Connections
cp.curActConns = 190
Returning 2 Connections
Allocating 10 Connections
cp.curActConns = 197
Returning 0 Connections
cp.curActConns = 198
Allocating 10 Connections
Returning 0 Connections
cp.curActConns = 208
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 209
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 207
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 209
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 209
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 208
Returning 4 Connections
Allocating 1 Connections
cp.curActConns = 205
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 204
Returning 4 Connections
Allocating 1 Connections
cp.curActConns = 201
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 204
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 207
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 208
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 208
Returning 0 Connections
Allocating 2 Connections
cp.curActConns = 209
Returning 4 Connections
cp.curActConns = 206
Allocating 3 Connections
Returning 3 Connections
cp.curActConns = 206
Allocating 4 Connections
Returning 3 Connections
cp.curActConns = 207
Allocating 4 Connections
Returning 2 Connections
cp.curActConns = 209
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 209
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 212
Returning 0 Connections
Allocating 1 Connections
cp.curActConns = 213
Returning 1 Connections
Allocating 1 Connections
cp.curActConns = 213
Returning 0 Connections
Allocating 1 Connections
cp.curActConns = 214
Returning 4 Connections
Allocating 4 Connections
cp.curActConns = 214
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 217
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 216
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 217
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 216
Returning 3 Connections
Allocating 4 Connections
cp.curActConns = 217
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 218
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 219
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 219
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 218
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 218
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 216
Returning 2 Connections
Allocating 0 Connections
cp.curActConns = 214
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 217
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 220
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 221
Returning 0 Connections
cp.curActConns = 224
Returning 3 Connections
Allocating 0 Connections
cp.curActConns = 221
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 219
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 219
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 221
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 224
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 223
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 222
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 225
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 224
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 223
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 227
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 227
Returning from startDeallocatorRoutine
Returning from startAllocatorRoutine
2022-09-02T00:58:38.765+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T00:58:39.741+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestSustainedHighConns (59.06s)
=== RUN   TestLowWM
2022-09-02T00:58:42.766+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 20 overflow 5 low WM 10 relConn batch size 2 ...
2022-09-02T00:59:42.782+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2022-09-02T01:00:42.798+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2022-09-02T01:00:48.277+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T01:00:48.800+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLowWM (129.51s)
=== RUN   TestTotalConns
2022-09-02T01:00:52.279+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 120 overflow 5 low WM 10 relConn batch size 10 ...
2022-09-02T01:01:06.450+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T01:01:07.285+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestTotalConns (18.17s)
=== RUN   TestUpdateTickRate
2022-09-02T01:01:10.452+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 40 overflow 5 low WM 2 relConn batch size 2 ...
2022-09-02T01:01:31.300+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T01:01:31.464+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestUpdateTickRate (24.85s)
PASS
ok  	github.com/couchbase/indexing/secondary/queryport/client	322.820s
Starting server: attempt 1

Functional tests

2022/09/02 01:03:46 In TestMain()
2022/09/02 01:03:46 otp node fetch error: json: cannot unmarshal string into Go value of type couchbase.Pool
2022/09/02 01:03:46 Initialising services with role: kv,n1ql on node: 127.0.0.1:9000
2022/09/02 01:03:47 Initialising web UI on node: 127.0.0.1:9000
2022/09/02 01:03:47 InitWebCreds, response is: {"newBaseUri":"http://127.0.0.1:9000/"}
2022/09/02 01:03:48 Setting data quota of 1500M and Index quota of 1500M
2022/09/02 01:03:49 Adding node: https://127.0.0.1:19001 with role: kv,index to the cluster
2022/09/02 01:03:56 AddNode: Successfully added node: 127.0.0.1:9001 (role kv,index), response: {"otpNode":"n_1@127.0.0.1"}
2022/09/02 01:04:01 Rebalance progress: 0
2022/09/02 01:04:06 Rebalance progress: 0
2022/09/02 01:04:11 Rebalance progress: 0
2022/09/02 01:04:16 Rebalance progress: 0
2022/09/02 01:04:21 Rebalance progress: 0
2022/09/02 01:04:26 Rebalance progress: 0
2022/09/02 01:04:31 Rebalance progress: 0
2022/09/02 01:04:36 Rebalance progress: 0
2022/09/02 01:04:41 Rebalance progress: 0
2022/09/02 01:04:46 Rebalance progress: 0
2022/09/02 01:04:51 Rebalance progress: 0
2022/09/02 01:04:56 Rebalance progress: 0
2022/09/02 01:05:01 Rebalance progress: 0
2022/09/02 01:05:06 Rebalance failed. See logs for detailed reason. You can try again.
2022/09/02 01:05:06 Error while initialising cluster: AddNodeAndRebalance: Error during rebalance, err: Rebalance failed
panic: Error while initialising cluster: AddNodeAndRebalance: Error during rebalance, err: Rebalance failed


goroutine 1 [running]:
panic({0xe8aee0, 0xc000077da0})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/panic.go:941 +0x397 fp=0xc0000cdca8 sp=0xc0000cdbe8 pc=0x43b757
log.Panicf({0x1025e70?, 0x20?}, {0xc0000cddf8?, 0xd?, 0xc000190a80?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/log/log.go:392 +0x67 fp=0xc0000cdcf0 sp=0xc0000cdca8 pc=0x5bdfc7
github.com/couchbase/indexing/secondary/tests/framework/common.HandleError(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/framework/common/util.go:76
github.com/couchbase/indexing/secondary/tests/functionaltests.TestMain(0x446212?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/functionaltests/common_test.go:94 +0x456 fp=0xc0000cdec8 sp=0xc0000cdcf0 pc=0xd31a76
main.main()
	_testmain.go:483 +0x1d3 fp=0xc0000cdf80 sp=0xc0000cdec8 pc=0xe0cfd3
runtime.main()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:250 +0x212 fp=0xc0000cdfe0 sp=0xc0000cdf80 pc=0x43e2d2
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000cdfe8 sp=0xc0000cdfe0 pc=0x46ec21

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060fb0 sp=0xc000060f90 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.forcegchelper()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:301 +0xad fp=0xc000060fe0 sp=0xc000060fb0 pc=0x43e52d
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000060fe8 sp=0xc000060fe0 pc=0x46ec21
created by runtime.init.6
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:289 +0x25

goroutine 18 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005c790 sp=0xc00005c770 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgsweep(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc00005c7c8 sp=0xc00005c790 pc=0x4297f7
runtime.gcenable.func1()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x26 fp=0xc00005c7e0 sp=0xc00005c7c8 pc=0x41f3a6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005c7e8 sp=0xc00005c7e0 pc=0x46ec21
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x6b

goroutine 19 [GC scavenge wait]:
runtime.gopark(0x32ecc8975c5c?, 0x10000?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005cf20 sp=0xc00005cf00 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgscavenge(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcscavenge.go:364 +0x2a5 fp=0xc00005cfc8 sp=0xc00005cf20 pc=0x427605
runtime.gcenable.func2()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0x26 fp=0xc00005cfe0 sp=0xc00005cfc8 pc=0x41f346
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005cfe8 sp=0xc00005cfe0 pc=0x46ec21
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0xaa

goroutine 3 [finalizer wait]:
runtime.gopark(0x0?, 0x109fd68?, 0x60?, 0x81?, 0x2000000020?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060630 sp=0xc000060610 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.runfinq()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:177 +0xb3 fp=0xc0000607e0 sp=0xc000060630 pc=0x41e453
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000607e8 sp=0xc0000607e0 pc=0x46ec21
created by runtime.createfing
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:157 +0x45

goroutine 34 [select]:
runtime.gopark(0xc000246798?, 0x2?, 0x0?, 0x0?, 0xc00024678c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000246618 sp=0xc0002465f8 pc=0x43e696
runtime.selectgo(0xc000246798, 0xc000246788, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000246758 sp=0xc000246618 pc=0x44e112
github.com/couchbase/cbauth/cbauthimpl.(*tlsNotifier).loop(0xc00020e0f0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:396 +0x67 fp=0xc0002467c8 sp=0xc000246758 pc=0x785e07
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func2()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x26 fp=0xc0002467e0 sp=0xc0002467c8 pc=0x786a86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002467e8 sp=0xc0002467e0 pc=0x46ec21
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x37a

goroutine 35 [select]:
runtime.gopark(0xc000246f98?, 0x2?, 0x0?, 0x0?, 0xc000246f8c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000246e08 sp=0xc000246de8 pc=0x43e696
runtime.selectgo(0xc000246f98, 0xc000246f88, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000246f48 sp=0xc000246e08 pc=0x44e112
github.com/couchbase/cbauth/cbauthimpl.(*cfgChangeNotifier).loop(0xc00020e108)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:316 +0x85 fp=0xc000246fc8 sp=0xc000246f48 pc=0x785825
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func3()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x26 fp=0xc000246fe0 sp=0xc000246fc8 pc=0x786a26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000246fe8 sp=0xc000246fe0 pc=0x46ec21
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x3ca

goroutine 36 [IO wait]:
runtime.gopark(0xc00022c340?, 0xc000054f00?, 0x70?, 0x98?, 0x484542?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000489800 sp=0xc0004897e0 pc=0x43e696
runtime.netpollblock(0xc00014f000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc000489838 sp=0xc000489800 pc=0x437137
internal/poll.runtime_pollWait(0x7f7c7c591fc0, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc000489858 sp=0xc000489838 pc=0x469209
internal/poll.(*pollDesc).wait(0xc000134080?, 0xc00014f000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000489880 sp=0xc000489858 pc=0x4a2132
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000134080, {0xc00014f000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000489900 sp=0xc000489880 pc=0x4a349a
net.(*netFD).Read(0xc000134080, {0xc00014f000?, 0xc000110ce0?, 0xc0004899d0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000489948 sp=0xc000489900 pc=0x669209
net.(*conn).Read(0xc000138000, {0xc00014f000?, 0x11?, 0xc000489a68?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000489990 sp=0xc000489948 pc=0x679485
bufio.(*Reader).Read(0xc000114180, {0xc00015e001, 0x5ff, 0x453934?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:236 +0x1b4 fp=0xc0004899c8 sp=0xc000489990 pc=0x5206f4
github.com/couchbase/cbauth/revrpc.(*minirwc).Read(0x191?, {0xc00015e001?, 0x8?, 0x20?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:102 +0x25 fp=0xc0004899f8 sp=0xc0004899c8 pc=0x7d7045
encoding/json.(*Decoder).refill(0xc00015a000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:165 +0x17f fp=0xc000489a48 sp=0xc0004899f8 pc=0x565bbf
encoding/json.(*Decoder).readValue(0xc00015a000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:140 +0xbb fp=0xc000489a98 sp=0xc000489a48 pc=0x5657bb
encoding/json.(*Decoder).Decode(0xc00015a000, {0xebbc00, 0xc000114260})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:63 +0x78 fp=0xc000489ac8 sp=0xc000489a98 pc=0x565418
net/rpc/jsonrpc.(*serverCodec).ReadRequestHeader(0xc000114240, 0xc000128020)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/jsonrpc/server.go:66 +0x85 fp=0xc000489b08 sp=0xc000489ac8 pc=0x7d62a5
github.com/couchbase/cbauth/revrpc.(*jsonServerCodec).ReadRequestHeader(0xc000116050?, 0x4cd388?)
	<autogenerated>:1 +0x2a fp=0xc000489b28 sp=0xc000489b08 pc=0x7d8dca
net/rpc.(*Server).readRequestHeader(0xc000116050, {0x11b5fe8, 0xc000110170})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:587 +0x66 fp=0xc000489bf8 sp=0xc000489b28 pc=0x7d59c6
net/rpc.(*Server).readRequest(0x0?, {0x11b5fe8, 0xc000110170})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:547 +0x3b fp=0xc000489cd0 sp=0xc000489bf8 pc=0x7d551b
net/rpc.(*Server).ServeCodec(0xc000116050, {0x11b5fe8?, 0xc000110170})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:462 +0x87 fp=0xc000489dc8 sp=0xc000489cd0 pc=0x7d4c47
github.com/couchbase/cbauth/revrpc.(*Service).Run(0xc000247760?, 0xc000130fa0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:192 +0x5d9 fp=0xc000489f38 sp=0xc000489dc8 pc=0x7d7799
github.com/couchbase/cbauth/revrpc.BabysitService(0x0?, 0x0?, {0x11ac700?, 0xc00012a000?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:288 +0x58 fp=0xc000489f70 sp=0xc000489f38 pc=0x7d7e98
github.com/couchbase/cbauth.runRPCForSvc(0x0?, 0xc000240000)
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:57 +0xbd fp=0xc000489fc0 sp=0xc000489f70 pc=0x7e1c1d
github.com/couchbase/cbauth.startDefault.func1()
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:66 +0x25 fp=0xc000489fe0 sp=0xc000489fc0 pc=0x7e1f05
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000489fe8 sp=0xc000489fe0 pc=0x46ec21
created by github.com/couchbase/cbauth.startDefault
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:65 +0xf9

goroutine 23 [GC worker (idle)]:
runtime.gopark(0xe5da60?, 0xc000124c37?, 0x16?, 0x0?, 0x11b5fe8?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000247758 sp=0xc000247738 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc0002477e0 sp=0xc000247758 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002477e8 sp=0xc0002477e0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 57 [GC worker (idle)]:
runtime.gopark(0x32ecc8799c18?, 0x0?, 0x0?, 0x0?, 0x1?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000061758 sp=0xc000061738 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc0000617e0 sp=0xc000061758 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000617e8 sp=0xc0000617e0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 22 [GC worker (idle)]:
runtime.gopark(0xc000190c50?, 0x0?, 0x0?, 0xa4?, 0x11b5f01?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000242758 sp=0xc000242738 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc0002427e0 sp=0xc000242758 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002427e8 sp=0xc0002427e0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 24 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005d758 sp=0xc00005d738 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005d7e0 sp=0xc00005d758 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005d7e8 sp=0xc00005d7e0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 5 [IO wait]:
runtime.gopark(0xc0000d4ea0?, 0xc000050500?, 0x68?, 0xb?, 0x484542?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000130af8 sp=0xc000130ad8 pc=0x43e696
runtime.netpollblock(0xc000172000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc000130b30 sp=0xc000130af8 pc=0x437137
internal/poll.runtime_pollWait(0x7f7c7c591ed0, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc000130b50 sp=0xc000130b30 pc=0x469209
internal/poll.(*pollDesc).wait(0xc0001bc800?, 0xc000172000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000130b78 sp=0xc000130b50 pc=0x4a2132
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc0001bc800, {0xc000172000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000130bf8 sp=0xc000130b78 pc=0x4a349a
net.(*netFD).Read(0xc0001bc800, {0xc000172000?, 0x40bae9?, 0x4?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000130c40 sp=0xc000130bf8 pc=0x669209
net.(*conn).Read(0xc0001381a8, {0xc000172000?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000130c88 sp=0xc000130c40 pc=0x679485
net/http.(*persistConn).Read(0xc000480b40, {0xc000172000?, 0xc000118120?, 0xc000130d30?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc000130ce8 sp=0xc000130c88 pc=0x76a6ae
bufio.(*Reader).fill(0xc0001142a0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc000130d20 sp=0xc000130ce8 pc=0x520123
bufio.(*Reader).Peek(0xc0001142a0, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:144 +0x5d fp=0xc000130d40 sp=0xc000130d20 pc=0x52027d
net/http.(*persistConn).readLoop(0xc000480b40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2093 +0x1ac fp=0xc000130fc8 sp=0xc000130d40 pc=0x76b4cc
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc000130fe0 sp=0xc000130fc8 pc=0x769ca6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000130fe8 sp=0xc000130fe0 pc=0x46ec21
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 6 [select]:
runtime.gopark(0xc000070f90?, 0x2?, 0xd8?, 0xd?, 0xc000070f24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000070d90 sp=0xc000070d70 pc=0x43e696
runtime.selectgo(0xc000070f90, 0xc000070f20, 0xc0000dd580?, 0x0, 0xc000205620?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000070ed0 sp=0xc000070d90 pc=0x44e112
net/http.(*persistConn).writeLoop(0xc000480b40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc000070fc8 sp=0xc000070ed0 pc=0x76d1b5
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc000070fe0 sp=0xc000070fc8 pc=0x769c46
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x46ec21
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791
signal: aborted (core dumped)
FAIL	github.com/couchbase/indexing/secondary/tests/functionaltests	80.171s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_functests_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
curl: (7) Failed to connect to 127.0.0.1 port 9108 after 1 ms: Connection refused
2022/09/02 01:05:09 In TestMain()
2022/09/02 01:05:09 ChangeIndexerSettings: Host  Port 0 Nodes []
2022/09/02 01:05:09 Changing config key indexer.api.enableTestServer to value true
2022/09/02 01:05:09 Error in ChangeIndexerSettings: Post "http://:2/internal/settings": dial tcp :2: connect: connection refused
panic: Error in ChangeIndexerSettings: Post "http://:2/internal/settings": dial tcp :2: connect: connection refused


goroutine 1 [running]:
panic({0xcd25e0, 0xc0005502d0})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/panic.go:941 +0x397 fp=0xc0002e3d40 sp=0xc0002e3c80 pc=0x43a6d7
log.Panicf({0xe3f566?, 0x1e?}, {0xc0002e3e38?, 0x1c?, 0xcc7ce0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/log/log.go:392 +0x67 fp=0xc0002e3d88 sp=0xc0002e3d40 pc=0x5bb407
github.com/couchbase/indexing/secondary/tests/framework/common.HandleError(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/framework/common/util.go:76
github.com/couchbase/indexing/secondary/tests/largedatatests.TestMain(0x445131?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/largedatatests/common_test.go:52 +0x468 fp=0xc0002e3ec8 sp=0xc0002e3d88 pc=0xc5e768
main.main()
	_testmain.go:59 +0x1d3 fp=0xc0002e3f80 sp=0xc0002e3ec8 pc=0xc64833
runtime.main()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:250 +0x212 fp=0xc0002e3fe0 sp=0xc0002e3f80 pc=0x43d252
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002e3fe8 sp=0xc0002e3fe0 pc=0x46dba1

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060fb0 sp=0xc000060f90 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.forcegchelper()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:301 +0xad fp=0xc000060fe0 sp=0xc000060fb0 pc=0x43d4ad
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000060fe8 sp=0xc000060fe0 pc=0x46dba1
created by runtime.init.6
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:289 +0x25

goroutine 18 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005c790 sp=0xc00005c770 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgsweep(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc00005c7c8 sp=0xc00005c790 pc=0x428777
runtime.gcenable.func1()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x26 fp=0xc00005c7e0 sp=0xc00005c7c8 pc=0x41e326
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005c7e8 sp=0xc00005c7e0 pc=0x46dba1
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x6b

goroutine 19 [GC scavenge wait]:
runtime.gopark(0x32ff8543826f?, 0x10000?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005cf20 sp=0xc00005cf00 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgscavenge(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcscavenge.go:364 +0x2a5 fp=0xc00005cfc8 sp=0xc00005cf20 pc=0x426585
runtime.gcenable.func2()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0x26 fp=0xc00005cfe0 sp=0xc00005cfc8 pc=0x41e2c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005cfe8 sp=0xc00005cfe0 pc=0x46dba1
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0xaa

goroutine 3 [finalizer wait]:
runtime.gopark(0x0?, 0xea3598?, 0x0?, 0x81?, 0x2000000020?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060630 sp=0xc000060610 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.runfinq()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:177 +0xb3 fp=0xc0000607e0 sp=0xc000060630 pc=0x41d3d3
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000607e8 sp=0xc0000607e0 pc=0x46dba1
created by runtime.createfing
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:157 +0x45

goroutine 34 [select]:
runtime.gopark(0xc00029af98?, 0x2?, 0xc7?, 0xd6?, 0xc00029af8c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00029ae18 sp=0xc00029adf8 pc=0x43d616
runtime.selectgo(0xc00029af98, 0xc00029af88, 0x0?, 0x0, 0xb?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00029af58 sp=0xc00029ae18 pc=0x44d092
github.com/couchbase/cbauth/cbauthimpl.(*tlsNotifier).loop(0xc000208108)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:396 +0x67 fp=0xc00029afc8 sp=0xc00029af58 pc=0x779647
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func2()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x26 fp=0xc00029afe0 sp=0xc00029afc8 pc=0x77a2c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00029afe8 sp=0xc00029afe0 pc=0x46dba1
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x37a

goroutine 35 [select]:
runtime.gopark(0xc00024c798?, 0x2?, 0x0?, 0x0?, 0xc00024c78c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00024c608 sp=0xc00024c5e8 pc=0x43d616
runtime.selectgo(0xc00024c798, 0xc00024c788, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00024c748 sp=0xc00024c608 pc=0x44d092
github.com/couchbase/cbauth/cbauthimpl.(*cfgChangeNotifier).loop(0xc000208120)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:316 +0x85 fp=0xc00024c7c8 sp=0xc00024c748 pc=0x779065
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func3()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x26 fp=0xc00024c7e0 sp=0xc00024c7c8 pc=0x77a266
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00024c7e8 sp=0xc00024c7e0 pc=0x46dba1
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x3ca

goroutine 36 [IO wait]:
runtime.gopark(0xc00020e680?, 0xc00004e000?, 0x70?, 0x98?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000199800 sp=0xc0001997e0 pc=0x43d616
runtime.netpollblock(0xc00018d000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc000199838 sp=0xc000199800 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fd917e351d8, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc000199858 sp=0xc000199838 pc=0x468189
internal/poll.(*pollDesc).wait(0xc000032200?, 0xc00018d000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000199880 sp=0xc000199858 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000032200, {0xc00018d000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000199900 sp=0xc000199880 pc=0x4a211a
net.(*netFD).Read(0xc000032200, {0xc00018d000?, 0xc000077560?, 0xc?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000199948 sp=0xc000199900 pc=0x665589
net.(*conn).Read(0xc0000104f8, {0xc00018d000?, 0x11?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000199990 sp=0xc000199948 pc=0x674aa5
bufio.(*Reader).Read(0xc00009e360, {0xc000038601, 0x5ff, 0x4528b4?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:236 +0x1b4 fp=0xc0001999c8 sp=0xc000199990 pc=0x51dd14
github.com/couchbase/cbauth/revrpc.(*minirwc).Read(0x203000?, {0xc000038601?, 0x203000?, 0xc00004c280?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:102 +0x25 fp=0xc0001999f8 sp=0xc0001999c8 pc=0x7b9da5
encoding/json.(*Decoder).refill(0xc0000b0000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:165 +0x17f fp=0xc000199a48 sp=0xc0001999f8 pc=0x562fff
encoding/json.(*Decoder).readValue(0xc0000b0000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:140 +0xbb fp=0xc000199a98 sp=0xc000199a48 pc=0x562bfb
encoding/json.(*Decoder).Decode(0xc0000b0000, {0xcfd5c0, 0xc00009e440})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:63 +0x78 fp=0xc000199ac8 sp=0xc000199a98 pc=0x562858
net/rpc/jsonrpc.(*serverCodec).ReadRequestHeader(0xc00009e420, 0xc00004c280)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/jsonrpc/server.go:66 +0x85 fp=0xc000199b08 sp=0xc000199ac8 pc=0x7b9005
github.com/couchbase/cbauth/revrpc.(*jsonServerCodec).ReadRequestHeader(0xc0000900f0?, 0x4cc008?)
	<autogenerated>:1 +0x2a fp=0xc000199b28 sp=0xc000199b08 pc=0x7bbb2a
net/rpc.(*Server).readRequestHeader(0xc0000900f0, {0xf8d6e8, 0xc000076e50})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:587 +0x66 fp=0xc000199bf8 sp=0xc000199b28 pc=0x7b8726
net/rpc.(*Server).readRequest(0x0?, {0xf8d6e8, 0xc000076e50})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:547 +0x3b fp=0xc000199cd0 sp=0xc000199bf8 pc=0x7b827b
net/rpc.(*Server).ServeCodec(0xc0000900f0, {0xf8d6e8?, 0xc000076e50})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:462 +0x87 fp=0xc000199dc8 sp=0xc000199cd0 pc=0x7b79a7
github.com/couchbase/cbauth/revrpc.(*Service).Run(0xc00024cf60?, 0xc000073fa0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:192 +0x5d9 fp=0xc000199f38 sp=0xc000199dc8 pc=0x7ba4f9
github.com/couchbase/cbauth/revrpc.BabysitService(0x0?, 0x0?, {0xf84480?, 0xc00000e600?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:288 +0x58 fp=0xc000199f70 sp=0xc000199f38 pc=0x7babf8
github.com/couchbase/cbauth.runRPCForSvc(0x0?, 0xc000246000)
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:57 +0xbd fp=0xc000199fc0 sp=0xc000199f70 pc=0x7c44fd
github.com/couchbase/cbauth.startDefault.func1()
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:66 +0x25 fp=0xc000199fe0 sp=0xc000199fc0 pc=0x7c47e5
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000199fe8 sp=0xc000199fe0 pc=0x46dba1
created by github.com/couchbase/cbauth.startDefault
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:65 +0xf9

goroutine 20 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005d678 sp=0xc00005d658 pc=0x43d616
runtime.chanrecv(0xc00043ad80, 0xc00005d790, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:577 +0x56c fp=0xc00005d708 sp=0xc00005d678 pc=0x40b5cc
runtime.chanrecv2(0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:445 +0x18 fp=0xc00005d730 sp=0xc00005d708 pc=0x40b038
github.com/couchbase/goutils/systemeventlog.(*SystemEventLoggerImpl).logEvents(0xc0001295e0)
	/opt/build/goproj/src/github.com/couchbase/goutils/systemeventlog/system_event_logger.go:186 +0xb7 fp=0xc00005d7c8 sp=0xc00005d730 pc=0xb001b7
github.com/couchbase/goutils/systemeventlog.NewSystemEventLogger.func1()
	/opt/build/goproj/src/github.com/couchbase/goutils/systemeventlog/system_event_logger.go:125 +0x26 fp=0xc00005d7e0 sp=0xc00005d7c8 pc=0xaffac6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005d7e8 sp=0xc00005d7e0 pc=0x46dba1
created by github.com/couchbase/goutils/systemeventlog.NewSystemEventLogger
	/opt/build/goproj/src/github.com/couchbase/goutils/systemeventlog/system_event_logger.go:125 +0x1d6

goroutine 21 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005df58 sp=0xc00005df38 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005dfe0 sp=0xc00005df58 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005dfe8 sp=0xc00005dfe0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 22 [GC worker (idle)]:
runtime.gopark(0x32ff854b9fad?, 0x3?, 0xcf?, 0x52?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005e758 sp=0xc00005e738 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005e7e0 sp=0xc00005e758 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005e7e8 sp=0xc00005e7e0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 23 [GC worker (idle)]:
runtime.gopark(0x32ff854353be?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005ef58 sp=0xc00005ef38 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005efe0 sp=0xc00005ef58 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005efe8 sp=0xc00005efe0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 24 [GC worker (idle)]:
runtime.gopark(0x17bf3a0?, 0x1?, 0x22?, 0x6f?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005f758 sp=0xc00005f738 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005f7e0 sp=0xc00005f758 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005f7e8 sp=0xc00005f7e0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 42 [select]:
runtime.gopark(0xc000021790?, 0x2?, 0x0?, 0x30?, 0xc000021774?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0000215f0 sp=0xc0000215d0 pc=0x43d616
runtime.selectgo(0xc000021790, 0xc000021770, 0xe6221d?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000021730 sp=0xc0000215f0 pc=0x44d092
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).ticker(0xc000276000)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:652 +0x158 fp=0xc0000217c8 sp=0xc000021730 pc=0xacb498
github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:231 +0x26 fp=0xc0000217e0 sp=0xc0000217c8 pc=0xac73e6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000217e8 sp=0xc0000217e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:231 +0x2f6

goroutine 8 [select]:
runtime.gopark(0xc000062798?, 0x2?, 0x10?, 0x0?, 0xc00006278c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000062618 sp=0xc0000625f8 pc=0x43d616
runtime.selectgo(0xc000062798, 0xc000062788, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000062758 sp=0xc000062618 pc=0x44d092
github.com/couchbase/indexing/secondary/queryport/client.(*schedTokenMonitor).updater(0xc000090320)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2389 +0x92 fp=0xc0000627c8 sp=0xc000062758 pc=0xc37f92
github.com/couchbase/indexing/secondary/queryport/client.newSchedTokenMonitor.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2186 +0x26 fp=0xc0000627e0 sp=0xc0000627c8 pc=0xc368c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000627e8 sp=0xc0000627e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.newSchedTokenMonitor
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2186 +0x2e5

goroutine 9 [select]:
runtime.gopark(0xc0000cbba0?, 0x3?, 0x0?, 0x30?, 0xc0000cbb1a?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0000cb978 sp=0xc0000cb958 pc=0x43d616
runtime.selectgo(0xc0000cbba0, 0xc0000cbb14, 0x3?, 0x0, 0xc00009e440?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0000cbab8 sp=0xc0000cb978 pc=0x44d092
github.com/couchbase/cbauth/metakv.doRunObserveChildren(0xc0001103e0?, {0xe504e2, 0x1b}, 0xc0000cbe68, 0xc000042840)
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:301 +0x429 fp=0xc0000cbe40 sp=0xc0000cbab8 pc=0x9b8289
github.com/couchbase/cbauth/metakv.(*store).runObserveChildren(...)
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:259
github.com/couchbase/cbauth/metakv.RunObserveChildren({0xe504e2?, 0x0?}, 0x0?, 0x0?)
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:389 +0x58 fp=0xc0000cbe88 sp=0xc0000cbe40 pc=0x9b8838
github.com/couchbase/indexing/secondary/manager/common.(*CommandListener).ListenTokens.func2.1(0x0?, {0x0?, 0x0?})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/manager/common/token.go:1579 +0xc7 fp=0xc0000cbf00 sp=0xc0000cbe88 pc=0xb05147
github.com/couchbase/indexing/secondary/common.(*RetryHelper).Run(0xc0000cbfa0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/retry_helper.go:36 +0x83 fp=0xc0000cbf38 sp=0xc0000cbf00 pc=0xace643
github.com/couchbase/indexing/secondary/manager/common.(*CommandListener).ListenTokens.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/manager/common/token.go:1584 +0xdf fp=0xc0000cbfe0 sp=0xc0000cbf38 pc=0xb04fff
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000cbfe8 sp=0xc0000cbfe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/manager/common.(*CommandListener).ListenTokens
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/manager/common/token.go:1572 +0xaf

goroutine 14 [select]:
runtime.gopark(0xc00057aea8?, 0x6?, 0x68?, 0xab?, 0xc00057acbc?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00057ab20 sp=0xc00057ab00 pc=0x43d616
runtime.selectgo(0xc00057aea8, 0xc00057acb0, 0xe45617?, 0x0, 0xc00057aef0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00057ac60 sp=0xc00057ab20 pc=0x44d092
net/http.(*persistConn).roundTrip(0xc000124360, 0xc00021a140)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2620 +0x974 fp=0xc00057af18 sp=0xc00057ac60 pc=0x769254
net/http.(*Transport).roundTrip(0x170e3e0, 0xc00056a300)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:594 +0x7c9 fp=0xc00057b150 sp=0xc00057af18 pc=0x75cce9
net/http.(*Transport).RoundTrip(0xc00056a300?, 0xf83c00?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/roundtrip.go:17 +0x19 fp=0xc00057b170 sp=0xc00057b150 pc=0x744f19
net/http.send(0xc00056a200, {0xf83c00, 0x170e3e0}, {0xdceb60?, 0x48e901?, 0x1787d40?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:252 +0x5d8 fp=0xc00057b350 sp=0xc00057b170 pc=0x706818
net/http.(*Client).send(0xc00056e150, 0xc00056a200, {0x7fd917eb2748?, 0x150?, 0x1787d40?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:176 +0x9b fp=0xc00057b3c8 sp=0xc00057b350 pc=0x7060bb
net/http.(*Client).do(0xc00056e150, 0xc00056a200)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:725 +0x8f5 fp=0xc00057b5c8 sp=0xc00057b3c8 pc=0x7084f5
net/http.(*Client).Do(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:593
github.com/couchbase/indexing/secondary/security.getWithAuthInternal({0xc00055c1e0?, 0x1b?}, 0xc00057b960, {0x0, 0x0}, 0x0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/security/tls.go:669 +0x549 fp=0xc00057b6c8 sp=0xc00057b5c8 pc=0x8626c9
github.com/couchbase/indexing/secondary/security.GetWithAuthNonTLS(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/security/tls.go:604
github.com/couchbase/indexing/secondary/dcp.queryRestAPIOnLocalhost(0xc0005562d0, {0xe3ebec, 0x6}, {0xd43bc0?, 0x1?}, {0xcb0680, 0xc000148598}, 0xc000568240?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:359 +0x1b3 fp=0xc00057b910 sp=0xc00057b6c8 pc=0x884093
github.com/couchbase/indexing/secondary/dcp.(*Client).parseURLResponse(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:530
github.com/couchbase/indexing/secondary/dcp.ConnectWithAuth({0xc000568240, 0x3f}, {0xf83820?, 0xc00021c100})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:594 +0x125 fp=0xc00057b988 sp=0xc00057b910 pc=0x886045
github.com/couchbase/indexing/secondary/dcp.Connect({0xc000568240, 0x3f})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:600 +0xce fp=0xc00057bb20 sp=0xc00057b988 pc=0x8861ee
github.com/couchbase/indexing/secondary/common.NewServicesChangeNotifier({0xc000568240, 0x3f}, {0xe3fad7, 0x7}, {0xe42d8c, 0xb})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/services_notifier.go:239 +0x1a8 fp=0xc00057bda0 sp=0xc00057bb20 pc=0xad0f48
github.com/couchbase/indexing/secondary/queryport/client.(*metadataClient).watchClusterChanges(0xc0000c04d0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2071 +0xff fp=0xc00057bfc8 sp=0xc00057bda0 pc=0xc35c9f
github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:149 +0x26 fp=0xc00057bfe0 sp=0xc00057bfc8 pc=0xc29426
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00057bfe8 sp=0xc00057bfe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:149 +0x5c5

goroutine 41 [IO wait]:
runtime.gopark(0xc000102680?, 0xc000054f00?, 0x10?, 0x4a?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0000749a0 sp=0xc000074980 pc=0x43d616
runtime.netpollblock(0xc0001ad000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc0000749d8 sp=0xc0000749a0 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fd917e34f08, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc0000749f8 sp=0xc0000749d8 pc=0x468189
internal/poll.(*pollDesc).wait(0xc000032400?, 0xc0001ad000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000074a20 sp=0xc0000749f8 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000032400, {0xc0001ad000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000074aa0 sp=0xc000074a20 pc=0x4a211a
net.(*netFD).Read(0xc000032400, {0xc0001ad000?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000074ae8 sp=0xc000074aa0 pc=0x665589
net.(*conn).Read(0xc0000106f0, {0xc0001ad000?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000074b30 sp=0xc000074ae8 pc=0x674aa5
net/http.(*persistConn).Read(0xc0000db440, {0xc0001ad000?, 0x1000?, 0x1000?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc000074b90 sp=0xc000074b30 pc=0x76588e
bufio.(*Reader).fill(0xc00009eba0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc000074bc8 sp=0xc000074b90 pc=0x51d743
bufio.(*Reader).ReadSlice(0xc00009eba0, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:371 +0x2f fp=0xc000074c18 sp=0xc000074bc8 pc=0x51e32f
net/http/internal.readChunkLine(0x400?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:129 +0x25 fp=0xc000074c68 sp=0xc000074c18 pc=0x7036c5
net/http/internal.(*chunkedReader).beginChunk(0xc000202bd0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:48 +0x28 fp=0xc000074c98 sp=0xc000074c68 pc=0x703148
net/http/internal.(*chunkedReader).Read(0xc000202bd0, {0xc000028000?, 0x5?, 0xc000021568?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:98 +0x14e fp=0xc000074d18 sp=0xc000074c98 pc=0x70340e
net/http.(*body).readLocked(0xc000112400, {0xc000028000?, 0x7fd8f004f080?, 0xc000021635?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transfer.go:844 +0x3c fp=0xc000074d68 sp=0xc000074d18 pc=0x75a3fc
net/http.(*body).Read(0x1010000000000?, {0xc000028000?, 0x0?, 0x7fd917ea6f18?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transfer.go:836 +0x125 fp=0xc000074de0 sp=0xc000074d68 pc=0x75a2c5
net/http.(*bodyEOFSignal).Read(0xc0001124c0, {0xc000028000, 0x200, 0x200})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2774 +0x142 fp=0xc000074e60 sp=0xc000074de0 pc=0x769fc2
encoding/json.(*Decoder).refill(0xc000230000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:165 +0x17f fp=0xc000074eb0 sp=0xc000074e60 pc=0x562fff
encoding/json.(*Decoder).readValue(0xc000230000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:140 +0xbb fp=0xc000074f00 sp=0xc000074eb0 pc=0x562bfb
encoding/json.(*Decoder).Decode(0xc000230000, {0xcaee80, 0xc00013eeb0})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:63 +0x78 fp=0xc000074f30 sp=0xc000074f00 pc=0x562858
github.com/couchbase/cbauth/metakv.doRunObserveChildren.func1()
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:284 +0x10b fp=0xc000074fe0 sp=0xc000074f30 pc=0x9b872b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000074fe8 sp=0xc000074fe0 pc=0x46dba1
created by github.com/couchbase/cbauth/metakv.doRunObserveChildren
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:280 +0x2eb

goroutine 44 [select]:
runtime.gopark(0xc000021f70?, 0x2?, 0x98?, 0x61?, 0xc000021f4c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000021dc8 sp=0xc000021da8 pc=0x43d616
runtime.selectgo(0xc000021f70, 0xc000021f48, 0xe70c96?, 0x0, 0xc000021f90?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000021f08 sp=0xc000021dc8 pc=0x44d092
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).monitor(0xc000276000)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:589 +0x166 fp=0xc000021fc8 sp=0xc000021f08 pc=0xacae66
github.com/couchbase/indexing/secondary/common.MonitorInternalVersion.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:801 +0x26 fp=0xc000021fe0 sp=0xc000021fc8 pc=0xacc346
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000021fe8 sp=0xc000021fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/common.MonitorInternalVersion
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:801 +0x125

goroutine 12 [select]:
runtime.gopark(0xc000073f68?, 0x4?, 0x3?, 0x0?, 0xc000073db0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000073c00 sp=0xc000073be0 pc=0x43d616
runtime.selectgo(0xc000073f68, 0xc000073da8, 0xc000112400?, 0x0, 0x1?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000073d40 sp=0xc000073c00 pc=0x44d092
net/http.(*persistConn).readLoop(0xc0000db440)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2213 +0xda5 fp=0xc000073fc8 sp=0xc000073d40 pc=0x7672a5
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc000073fe0 sp=0xc000073fc8 pc=0x764e86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000073fe8 sp=0xc000073fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 39 [runnable]:
runtime.gopark(0xc000102820?, 0xc000054f00?, 0x68?, 0xcb?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00029caf8 sp=0xc00029cad8 pc=0x43d616
runtime.netpollblock(0xc00027e000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc00029cb30 sp=0xc00029caf8 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fd917e34e18, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc00029cb50 sp=0xc00029cb30 pc=0x468189
internal/poll.(*pollDesc).wait(0xc000250180?, 0xc00027e000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc00029cb78 sp=0xc00029cb50 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000250180, {0xc00027e000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc00029cbf8 sp=0xc00029cb78 pc=0x4a211a
net.(*netFD).Read(0xc000250180, {0xc00027e000?, 0x40aa69?, 0x4?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc00029cc40 sp=0xc00029cbf8 pc=0x665589
net.(*conn).Read(0xc000118058, {0xc00027e000?, 0xc00020a048?, 0x1?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc00029cc88 sp=0xc00029cc40 pc=0x674aa5
net/http.(*persistConn).Read(0xc000124360, {0xc00027e000?, 0xc0001404e0?, 0xc00029cd30?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc00029cce8 sp=0xc00029cc88 pc=0x76588e
bufio.(*Reader).fill(0xc000200720)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc00029cd20 sp=0xc00029cce8 pc=0x51d743
bufio.(*Reader).Peek(0xc000200720, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:144 +0x5d fp=0xc00029cd40 sp=0xc00029cd20 pc=0x51d89d
net/http.(*persistConn).readLoop(0xc000124360)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2093 +0x1ac fp=0xc00029cfc8 sp=0xc00029cd40 pc=0x7666ac
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc00029cfe0 sp=0xc00029cfc8 pc=0x764e86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00029cfe8 sp=0xc00029cfe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 43 [runnable]:
runtime.gopark(0xc0000195f8?, 0x6?, 0xb8?, 0x92?, 0xc00001940c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000019270 sp=0xc000019250 pc=0x43d616
runtime.selectgo(0xc0000195f8, 0xc000019400, 0xe45617?, 0x0, 0xc0000193e8?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0000193b0 sp=0xc000019270 pc=0x44d092
net/http.(*persistConn).roundTrip(0xc000564120, 0xc000113480)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2620 +0x974 fp=0xc000019668 sp=0xc0000193b0 pc=0x769254
net/http.(*Transport).roundTrip(0x170e3e0, 0xc000146a00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:594 +0x7c9 fp=0xc0000198a0 sp=0xc000019668 pc=0x75cce9
net/http.(*Transport).RoundTrip(0x412085?, 0xf83c00?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/roundtrip.go:17 +0x19 fp=0xc0000198c0 sp=0xc0000198a0 pc=0x744f19
net/http.send(0xc000146a00, {0xf83c00, 0x170e3e0}, {0xdceb60?, 0xc000226701?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:252 +0x5d8 fp=0xc000019aa0 sp=0xc0000198c0 pc=0x706818
net/http.(*Client).send(0xc000203470, 0xc000146a00, {0xc000226740?, 0xc000297ba8?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:176 +0x9b fp=0xc000019b18 sp=0xc000019aa0 pc=0x7060bb
net/http.(*Client).do(0xc000203470, 0xc000146a00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:725 +0x8f5 fp=0xc000019d18 sp=0xc000019b18 pc=0x7084f5
net/http.(*Client).Do(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:593
github.com/couchbase/indexing/secondary/security.getWithAuthInternal({0xc000204720?, 0x2c?}, 0xc000208780, {0x0, 0x0}, 0x0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/security/tls.go:669 +0x549 fp=0xc000019e18 sp=0xc000019d18 pc=0x8626c9
github.com/couchbase/indexing/secondary/security.GetWithAuthNonTLS(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/security/tls.go:604
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).notifier(0xc000276000)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:678 +0x85 fp=0xc000019fc8 sp=0xc000019e18 pc=0xacb605
github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:232 +0x26 fp=0xc000019fe0 sp=0xc000019fc8 pc=0xac7386
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000019fe8 sp=0xc000019fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:232 +0x336

goroutine 40 [select]:
runtime.gopark(0xc00029df90?, 0x2?, 0xd8?, 0xdd?, 0xc00029df24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00029dd90 sp=0xc00029dd70 pc=0x43d616
runtime.selectgo(0xc00029df90, 0xc00029df20, 0xc000112080?, 0x0, 0xc0001d4540?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00029ded0 sp=0xc00029dd90 pc=0x44d092
net/http.(*persistConn).writeLoop(0xc000124360)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc00029dfc8 sp=0xc00029ded0 pc=0x768395
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc00029dfe0 sp=0xc00029dfc8 pc=0x764e26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00029dfe8 sp=0xc00029dfe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791

goroutine 13 [select]:
runtime.gopark(0xc0001b7f90?, 0x2?, 0xd8?, 0x7d?, 0xc0001b7f24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0001b7d90 sp=0xc0001b7d70 pc=0x43d616
runtime.selectgo(0xc0001b7f90, 0xc0001b7f20, 0xc0000e5a80?, 0x0, 0xc0002854d0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0001b7ed0 sp=0xc0001b7d90 pc=0x44d092
net/http.(*persistConn).writeLoop(0xc0000db440)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc0001b7fc8 sp=0xc0001b7ed0 pc=0x768395
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc0001b7fe0 sp=0xc0001b7fc8 pc=0x764e26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001b7fe8 sp=0xc0001b7fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791

goroutine 15 [chan receive]:
runtime.gopark(0xc000062ed8?, 0x4431bb?, 0x20?, 0x2f?, 0x459dc5?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000062ec8 sp=0xc000062ea8 pc=0x43d616
runtime.chanrecv(0xc00009f440, 0x0, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:577 +0x56c fp=0xc000062f58 sp=0xc000062ec8 pc=0x40b5cc
runtime.chanrecv1(0xdf8475800?, 0xc000042cc0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:440 +0x18 fp=0xc000062f80 sp=0xc000062f58 pc=0x40aff8
github.com/couchbase/indexing/secondary/queryport/client.(*metadataClient).logstats(0xc0000c04d0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:1410 +0x79 fp=0xc000062fc8 sp=0xc000062f80 pc=0xc30a39
github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:150 +0x26 fp=0xc000062fe0 sp=0xc000062fc8 pc=0xc293c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000062fe8 sp=0xc000062fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:150 +0x605

goroutine 16 [select]:
runtime.gopark(0xc000063f90?, 0x2?, 0x0?, 0x0?, 0xc000063f8c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000063e18 sp=0xc000063df8 pc=0x43d616
runtime.selectgo(0xc000063f90, 0xc000063f88, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000063f58 sp=0xc000063e18 pc=0x44d092
github.com/couchbase/indexing/secondary/queryport/client.(*GsiClient).listenMetaChange(0xc000148480, 0xc00010e150)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1830 +0x70 fp=0xc000063fc0 sp=0xc000063f58 pc=0xc24930
github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1823 +0x2a fp=0xc000063fe0 sp=0xc000063fc0 pc=0xc2488a
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000063fe8 sp=0xc000063fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1823 +0x28a

goroutine 66 [select]:
runtime.gopark(0xc000061750?, 0x2?, 0x0?, 0x30?, 0xc000061714?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000061590 sp=0xc000061570 pc=0x43d616
runtime.selectgo(0xc000061750, 0xc000061710, 0xe6f1ce?, 0x0, 0xcd1be0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0000616d0 sp=0xc000061590 pc=0x44d092
github.com/couchbase/indexing/secondary/queryport/client.(*GsiClient).logstats(0xc000148480, 0xc00010e150)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1849 +0x238 fp=0xc0000617c0 sp=0xc0000616d0 pc=0xc24bb8
github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1824 +0x2a fp=0xc0000617e0 sp=0xc0000617c0 pc=0xc2482a
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000617e8 sp=0xc0000617e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1824 +0x2ed

goroutine 27 [IO wait]:
runtime.gopark(0xc00057c1a0?, 0xc000050500?, 0x68?, 0x8b?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000298af8 sp=0xc000298ad8 pc=0x43d616
runtime.netpollblock(0xc00057e000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc000298b30 sp=0xc000298af8 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fd917e350e8, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc000298b50 sp=0xc000298b30 pc=0x468189
internal/poll.(*pollDesc).wait(0xc000148680?, 0xc00057e000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000298b78 sp=0xc000298b50 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000148680, {0xc00057e000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000298bf8 sp=0xc000298b78 pc=0x4a211a
net.(*netFD).Read(0xc000148680, {0xc00057e000?, 0x8?, 0xc000063488?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000298c40 sp=0xc000298bf8 pc=0x665589
net.(*conn).Read(0xc00020a050, {0xc00057e000?, 0x203000?, 0x203000?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000298c88 sp=0xc000298c40 pc=0x674aa5
net/http.(*persistConn).Read(0xc000564120, {0xc00057e000?, 0x40a1fd?, 0x60?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc000298ce8 sp=0xc000298c88 pc=0x76588e
bufio.(*Reader).fill(0xc00043b260)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc000298d20 sp=0xc000298ce8 pc=0x51d743
bufio.(*Reader).Peek(0xc00043b260, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:144 +0x5d fp=0xc000298d40 sp=0xc000298d20 pc=0x51d89d
net/http.(*persistConn).readLoop(0xc000564120)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2093 +0x1ac fp=0xc000298fc8 sp=0xc000298d40 pc=0x7666ac
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc000298fe0 sp=0xc000298fc8 pc=0x764e86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000298fe8 sp=0xc000298fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 28 [select]:
runtime.gopark(0xc000297f90?, 0x2?, 0xd8?, 0x7d?, 0xc000297f24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000297d90 sp=0xc000297d70 pc=0x43d616
runtime.selectgo(0xc000297f90, 0xc000297f20, 0xc00021a240?, 0x0, 0xc000203500?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000297ed0 sp=0xc000297d90 pc=0x44d092
net/http.(*persistConn).writeLoop(0xc000564120)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc000297fc8 sp=0xc000297ed0 pc=0x768395
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc000297fe0 sp=0xc000297fc8 pc=0x764e26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000297fe8 sp=0xc000297fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791
signal: aborted (core dumped)
FAIL	github.com/couchbase/indexing/secondary/tests/largedatatests	0.207s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_largedata_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
curl: (7) Failed to connect to 127.0.0.1 port 9108 after 1 ms: Connection refused

Integration tests

echo "Running gsi integration tests with 4 node cluster"
Running gsi integration tests with 4 node cluster
scripts/start_cluster_and_run_tests.sh b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini conf/simple_gsi_n1ql.conf 1 1 gsi_type=plasma
Printing gsi_type=plasma
gsi_type=plasma
In here
-p makefile=True,gsi_type=plasma
/opt/build/testrunner /opt/build/testrunner
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory ns_dataclean
Built target ns_dataclean
make[1]: Leaving directory '/opt/build/ns_server'
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory all
[  0%] Built target event_ui_build_prepare
[100%] Built target ns_ui_build_prepare
[100%] Building Go Modules target ns_minify_js using Go 1.18.5
[100%] Built target ns_minify_js
[100%] Building Go Modules target ns_minify_css using Go 1.18.5
[100%] Built target ns_minify_css
[100%] Built target query_ui_build_prepare
[100%] Built target fts_ui_build_prepare
[100%] Built target cbas_ui_build_prepare
[100%] Built target backup_ui_build_prepare
[100%] Built target ui_build
==> enacl (compile)
[100%] Built target enacl
[100%] Built target kv_mappings
[100%] Built target ns_cfg
==> ale (compile)
[100%] Built target ale
==> chronicle (compile)
[100%] Built target chronicle
==> ns_server (compile)
[100%] Built target ns_server
==> gen_smtp (compile)
[100%] Built target gen_smtp
==> ns_babysitter (compile)
[100%] Built target ns_babysitter
==> ns_couchdb (compile)
[100%] Built target ns_couchdb
[100%] Building Go target ns_goport using Go 1.18.5
[100%] Built target ns_goport
[100%] Building Go target ns_generate_cert using Go 1.18.5
[100%] Built target ns_generate_cert
[100%] Building Go target ns_godu using Go 1.18.5
[100%] Built target ns_godu
[100%] Building Go target ns_gosecrets using Go 1.18.5
[100%] Built target ns_gosecrets
[100%] Building Go target ns_generate_hash using Go 1.18.5
[100%] Built target ns_generate_hash
==> chronicle (escriptize)
[100%] Built target chronicle_dump
make[1]: Leaving directory '/opt/build/ns_server'
/opt/build/testrunner
INFO:__main__:Checking arguments...
INFO:__main__:Conf filename: conf/simple_gsi_n1ql.conf
INFO:__main__:Test prefix: gsi.indexscans_gsi.SecondaryIndexingScanTests
INFO:__main__:Test prefix: gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests
INFO:__main__:Test prefix: gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests
INFO:__main__:TestRunner: start...
INFO:__main__:Global Test input params:
INFO:__main__:
Number of tests initially selected before GROUP filters: 11
INFO:__main__:--> Running test: gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi
INFO:__main__:Logs folder: /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_1
*** TestRunner ***
{'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi',
 'conf_file': 'conf/simple_gsi_n1ql.conf',
 'gsi_type': 'plasma',
 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini',
 'makefile': 'True',
 'num_nodes': 4,
 'spec': 'simple_gsi_n1ql'}
Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_1

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_1'}
Run before suite setup for gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index
suite_setUp (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... -->before_suite_name:gsi.indexscans_gsi.SecondaryIndexingScanTests.suite_setUp,suite: ]>
2022-09-02 01:05:53 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:05:53 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:05:53 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:05:53 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2022-09-02 01:05:56 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2022-09-02 01:06:02 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.2.0-1948-rel-EE-enterprise
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [basetestcase.setUp] ==============  basetestcase setup was started for test #1 suite_setUp==============
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] cannot find service node index in cluster 
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:14 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:14 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:15 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:15 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:16 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 suite_setUp ==============
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default/ body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9001/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9002/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9003/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9001/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9002/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9003/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
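
For reference, the wait_for_ns_servers_or_assert loop above simply polls each node's REST port until ns_server answers. A minimal sketch of that idea with the requests library; the harness uses its own on_prem_rest_client, and the /pools endpoint and 120 s timeout below are assumptions for illustration, not values taken from this log:

    # Illustrative only: poll each cluster_run node until its REST API responds.
    import time
    import requests

    NODES = ["127.0.0.1:9000", "127.0.0.1:9001", "127.0.0.1:9002", "127.0.0.1:9003"]
    AUTH = ("Administrator", "asdasd")

    def wait_for_ns_server(node, timeout=120):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                # /pools answers even before the node has been provisioned into a pool
                if requests.get(f"http://{node}/pools", auth=AUTH, timeout=5).ok:
                    print(f"ns_server @ {node} is running")
                    return True
            except requests.RequestException:
                pass
            time.sleep(2)
        raise AssertionError(f"ns_server @ {node} did not come up within {timeout}s")

    for n in NODES:
        wait_for_ns_server(n)
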
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 suite_setUp ==============
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2022-09-02 01:06:20 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:20 | INFO | MainProcess | MainThread | [basetestcase.setUp] initializing cluster
2022-09-02 01:06:21 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '22', 'memoryTotal': 15466930176, 'memoryFree': 12332064768, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:06:21 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [task.execute] quota for index service will be 256 MB
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [task.execute] set index quota to node 127.0.0.1 
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7650
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9000
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
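
The per-node initialisation logged above for 127.0.0.1:9000 maps onto a small set of ns_server REST calls. A sketch with requests, parameter values copied from the log lines (the harness itself drives this through on_prem_rest_client, so this is illustrative only):

    import requests

    BASE = "http://127.0.0.1:9000"
    AUTH = ("Administrator", "asdasd")

    # 1. memory quotas (pools/default): indexMemoryQuota=256, memoryQuota=7650 per the log
    requests.post(f"{BASE}/pools/default", auth=AUTH,
                  data={"indexMemoryQuota": 256, "memoryQuota": 7650})

    # 2. services for this node (node/controller/setupServices)
    requests.post(f"{BASE}/node/controller/setupServices", auth=AUTH,
                  data={"hostname": "127.0.0.1:9000", "user": "Administrator",
                        "password": "asdasd", "services": "kv,index,n1ql"})

    # 3. REST credentials and port (settings/web)
    requests.post(f"{BASE}/settings/web", auth=AUTH,
                  data={"port": 9000, "username": "Administrator", "password": "asdasd"})

    # 4. GSI storage mode (settings/indexes)
    requests.post(f"{BASE}/settings/indexes", auth=AUTH, data={"storageMode": "plasma"})
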
2022-09-02 01:06:21 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9001/pools/default with status False: unknown pool
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '23', 'memoryTotal': 15466930176, 'memoryFree': 12160532480, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:06:21 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9001
2022-09-02 01:06:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 01:06:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9002/pools/default with status False: unknown pool
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '23', 'memoryTotal': 15466930176, 'memoryFree': 12023865344, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:06:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9002
2022-09-02 01:06:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 01:06:23 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9003/pools/default with status False: unknown pool
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '18', 'memoryTotal': 15466930176, 'memoryFree': 11935408128, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:06:23 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9003
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:06:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:24 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:06:24 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 01:06:24 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
2022-09-02 01:06:24 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/cbadminbucket body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 01:06:24 | INFO | MainProcess | MainThread | [internal_user.delete_user] Exception while deleting user. Exception is -b'"User was not found."'
2022-09-02 01:06:24 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
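
A sketch of the built-in user step referenced above. Local users live under /settings/rbac/users/local/<name>; the password value and the exact role string below are assumptions based only on the log messages, not on the harness code:

    import requests

    AUTH = ("Administrator", "asdasd")
    USER_URL = "http://127.0.0.1:9000/settings/rbac/users/local/cbadminbucket"

    # Delete any stale copy first; a 404 "User was not found." is fine, as seen above.
    requests.delete(USER_URL, auth=AUTH)

    # Create the user and grant the 'admin' role.
    requests.put(USER_URL, auth=AUTH, data={"password": "password", "roles": "admin"})
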
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [basetestcase.setUp] done initializing cluster
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:06:29 | INFO | MainProcess | MainThread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 127.0.0.1
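
The curl command in the log posts an Erlang expression to /diag/eval so that later diag/eval calls may come from non-local clients. An equivalent sketch with requests (the harness runs curl over SSH instead):

    import requests

    AUTH = ("Administrator", "asdasd")

    r = requests.post("http://127.0.0.1:9000/diag/eval", auth=AUTH,
                      data="ns_config:set(allow_nonlocal_eval, true).")
    print(r.status_code, r.text)   # expected: 200, 'ok'

    # The compat-version probe seen in the log uses the same endpoint:
    r = requests.post("http://127.0.0.1:9000/diag/eval", auth=AUTH,
                      data="cluster_compat_mode:get_compat_version().")
    print(r.text)                  # e.g. [7,2]
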
2022-09-02 01:06:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
2022-09-02 01:06:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] 0.06 seconds to create bucket default
2022-09-02 01:06:30 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2022-09-02 01:07:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.vbucket_map_ready] vbucket map is not ready for bucket default after waiting 60 seconds
2022-09-02 01:07:30 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 01:07:31 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 01:07:31 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'default' was created with per node RAM quota: 7650
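
The bucket creation above is a single POST with the parameters from the create_bucket log line, followed by polling until the bucket is usable. A sketch with requests; the vBucketServerMap field checked at the end is an assumption chosen for illustration (the harness instead waits for memcached to accept set ops):

    import time
    import requests

    BASE = "http://127.0.0.1:9000"
    AUTH = ("Administrator", "asdasd")

    requests.post(f"{BASE}/pools/default/buckets", auth=AUTH, data={
        "name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
        "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
        "flushEnabled": 1, "evictionPolicy": "valueOnly",
        "compressionMode": "passive", "storageBackend": "couchstore",
    })

    # Poll the bucket details until a vBucket map is published (or give up after 60 s).
    deadline = time.time() + 60
    while time.time() < deadline:
        info = requests.get(f"{BASE}/pools/default/buckets/default", auth=AUTH).json()
        if info.get("vBucketServerMap", {}).get("vBucketMap"):
            break
        time.sleep(2)
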
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [basetestcase.setUp] ==============  basetestcase setup was finished for test #1 suite_setUp ==============
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:07:31 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.687625980190074, 'mem_free': 13924732928, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [basetestcase.setUp] Time to execute basesetup : 101.55356478691101
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:07:34 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:07:35 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings_internal] {'indexer.settings.storage_mode': 'plasma'} set
2022-09-02 01:07:35 | INFO | MainProcess | MainThread | [newtuq.setUp] Allowing the indexer to complete restart after setting the internal settings
2022-09-02 01:07:35 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 01:07:40 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings_internal] {'indexer.api.enableTestServer': True} set
2022-09-02 01:07:40 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:07:40 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:07:40 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:07:41 | INFO | MainProcess | MainThread | [basetestcase.load] create 2016.0 to default documents...
2022-09-02 01:07:41 | INFO | MainProcess | MainThread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 01:07:43 | INFO | MainProcess | MainThread | [basetestcase.load] LOAD IS FINISHED
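
The load step above writes roughly 2016 JSON documents into 'default' through a direct memcached client on port 12000. As an illustrative alternative only, the Couchbase Python SDK could perform the same load; the document shape, key prefix, and the default connection ports below are assumptions (cluster_run nodes remap ports, so a stock single-node install is assumed here):

    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions   # SDK 4.x import paths assumed

    cluster = Cluster("couchbase://127.0.0.1",
                      ClusterOptions(PasswordAuthenticator("Administrator", "asdasd")))
    collection = cluster.bucket("default").default_collection()

    # Upsert ~2016 small JSON documents, mirroring the size of the load in the log.
    for i in range(2016):
        collection.upsert(f"doc-{i}", {"name": f"employee-{i}", "join_yr": 2010 + i % 5})
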
2022-09-02 01:07:43 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:07:43 | INFO | MainProcess | MainThread | [newtuq.setUp] ip:127.0.0.1 port:9000 ssh_username:Administrator
2022-09-02 01:07:43 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 30 secs.  ...
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [tuq_helper.create_primary_index] Check if index existed in default on server 127.0.0.1
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 150.26069ms
2022-09-02 01:08:13 | ERROR | MainProcess | MainThread | [tuq_helper._is_index_in_list] Fail to get index list.  List output: {'requestID': '80415e3b-febc-494e-8965-ef6c6ba9c5f0', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '150.26069ms', 'executionTime': '150.119157ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [tuq_helper.create_primary_index] Create primary index
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY CREATE PRIMARY INDEX ON default 
2022-09-02 01:08:13 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=CREATE+PRIMARY+INDEX+ON+default+
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 730.288474ms
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [tuq_helper.create_primary_index] Check if index is online
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.863016ms
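
The query flow above checks system:indexes for '#primary', creates the primary index when it is missing, and re-checks until it is online. A sketch against the standard N1QL REST endpoint /query/service; the port (8093 on a default install) is an assumption, since the cluster_run query port is not shown in this log and the harness goes through query_tool:

    import time
    import requests

    QUERY = "http://127.0.0.1:8093/query/service"
    AUTH = ("Administrator", "asdasd")

    def run(statement):
        return requests.post(QUERY, auth=AUTH, data={"statement": statement}).json()

    existing = run("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]
    if not existing:
        run("CREATE PRIMARY INDEX ON default")

    # Poll system:indexes until the primary index reports state 'online' (60 s cap).
    deadline = time.time() + 60
    while time.time() < deadline:
        rows = run("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]
        if rows and rows[0]["indexes"]["state"] == "online":
            break
        time.sleep(1)
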
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings] {'queryport.client.waitForScheduledIndex': False} set
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings] {'indexer.allowScheduleCreateRebal': True} set
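
The two settings calls above POST a JSON body of setting names to the GSI admin REST port (9102 for this node). A sketch, illustrative only:

    import requests

    AUTH = ("Administrator", "asdasd")

    requests.post("http://127.0.0.1:9102/settings", auth=AUTH,
                  json={"queryport.client.waitForScheduledIndex": False})
    requests.post("http://127.0.0.1:9102/settings", auth=AUTH,
                  json={"indexer.allowScheduleCreateRebal": True})
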
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:14 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
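
The panic check above counts "panic" occurrences in the indexer and projector logs under the ns_server log directory reported by /diag/eval. The harness runs the pipeline over SSH; subprocess is used here purely as an illustrative local stand-in:

    import subprocess

    LOG_DIR = "/opt/build/ns_server/logs/n_0"
    for name in ("indexer", "projector"):
        cmd = f'zgrep "panic" "{LOG_DIR}"/{name}.log* | wc -l'
        count = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.strip()
        print(name, "panics:", count)
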
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [tuq_helper.drop_primary_index] CHECK FOR PRIMARY INDEXES
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [tuq_helper.drop_primary_index] DROP PRIMARY INDEX ON default USING GSI
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.81306ms
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY DROP PRIMARY INDEX ON default USING GSI
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 54.725215ms
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 4.523186118548895, 'mem_free': 13832368128, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:15 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:16 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:19 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 suite_setUp ==============
2022-09-02 01:08:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 127.0.0.1
2022-09-02 01:08:20 | INFO | MainProcess | MainThread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2022-09-02 01:08:20 | INFO | MainProcess | MainThread | [on_prem_rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2022-09-02 01:08:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2022-09-02 01:08:20 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 01:08:21 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [basetestcase.tearDown] b'"User was not found."'
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 suite_setUp ==============
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 01:08:21 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 148.421s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Cluster instance shutdown with force
-->result: 
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.2.0-1948-rel-EE-enterprise
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [basetestcase.setUp] ==============  basetestcase setup was started for test #1 test_multi_create_query_explain_drop_index==============
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 22.05205078532215, 'mem_free': 13814222848, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:26 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 01:08:26 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:26 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 01:08:27 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [basetestcase.tearDown] b'"User was not found."'
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2022-09-02 01:08:27 | INFO | MainProcess | test_thread | [basetestcase.setUp] initializing cluster
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '151', 'memoryTotal': 15466930176, 'memoryFree': 13814222848, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [task.execute] quota for index service will be 256 MB
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [task.execute] set index quota to node 127.0.0.1 
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7650
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
2022-09-02 01:08:28 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] This node is already provisioned with services, we do not consider this as failure for test case
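
When a node already has services assigned, /node/controller/setupServices returns 400 with "cannot change node services after cluster is provisioned", and the log above shows the harness treating that specific error as success. A sketch of that tolerant handling, illustrative only:

    import requests

    r = requests.post("http://127.0.0.1:9000/node/controller/setupServices",
                      auth=("Administrator", "asdasd"),
                      data={"hostname": "127.0.0.1:9000", "user": "Administrator",
                            "password": "asdasd", "services": "kv,index,n1ql"})
    already_provisioned = (r.status_code == 400
                           and b"cannot change node services" in r.content)
    if not (r.ok or already_provisioned):
        raise AssertionError(f"setupServices failed: {r.status_code} {r.text}")
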
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9000
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:28 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '148', 'memoryTotal': 15466930176, 'memoryFree': 13809270784, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9001
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 01:08:29 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '149', 'memoryTotal': 15466930176, 'memoryFree': 13809311744, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9002
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '149', 'memoryTotal': 15466930176, 'memoryFree': 13809483776, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9003
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:30 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:08:31 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:08:31 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:08:31 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:31 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 01:08:31 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
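
Each node above goes through the same REST sequence: assign services (a 400 "cannot change node services after cluster is provisioned" is tolerated on an already-provisioned node), set the memory quota, set admin credentials via settings/web, and set the GSI storage mode. A minimal Python sketch of that sequence with the requests library, using the host, port and quota values from the log; init_node is a hypothetical helper, not the testrunner implementation:

import requests

AUTH = ("Administrator", "asdasd")            # credentials from the log above

def init_node(host, rest_port, quota_mb, services="kv,index,n1ql"):
    base = "http://%s:%d" % (host, rest_port)
    # /node/controller/setupServices: a 400 "cannot change node services after
    # cluster is provisioned" is treated as success, as init_node_services does above.
    r = requests.post(base + "/node/controller/setupServices",
                      data={"hostname": "%s:%d" % (host, rest_port),
                            "user": AUTH[0], "password": AUTH[1],
                            "services": services}, auth=AUTH)
    assert r.status_code in (200, 400), r.text
    # Memory quota (pools/default), admin credentials (settings/web) and
    # GSI storage mode (settings/indexes), mirroring the calls logged above.
    requests.post(base + "/pools/default", data={"memoryQuota": quota_mb}, auth=AUTH).raise_for_status()
    requests.post(base + "/settings/web",
                  data={"port": rest_port, "username": AUTH[0], "password": AUTH[1]},
                  auth=AUTH).raise_for_status()
    requests.post(base + "/settings/indexes", data={"storageMode": "plasma"}, auth=AUTH).raise_for_status()

init_node("127.0.0.1", 9000, 7650)
# Nodes at ports 9001-9003 repeat the same calls with memoryQuota=7906, as logged above.
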
2022-09-02 01:08:31 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
2022-09-02 01:08:31 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [basetestcase.setUp] done initializing cluster
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 01:08:36 | INFO | MainProcess | test_thread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 127.0.0.1
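
The enable_diag_eval_on_non_local_hosts step above amounts to POSTing one Erlang expression to /diag/eval, exactly as the logged curl command does. An equivalent sketch in Python (requests assumed available):

import requests

# Same expression the logged curl command sends; it lets later /diag/eval calls
# (e.g. cluster_compat_mode:get_compat_version()) be issued from non-local hosts.
requests.post("http://127.0.0.1:9000/diag/eval",
              data="ns_config:set(allow_nonlocal_eval, true).",
              auth=("Administrator", "asdasd"))
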
2022-09-02 01:08:37 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
2022-09-02 01:08:37 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] 0.05 seconds to create bucket default
2022-09-02 01:08:37 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2022-09-02 01:09:30 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 01:09:31 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 01:09:31 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'default' was created with per node RAM quota: 7650
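
Bucket creation is a single POST to /pools/default/buckets with the parameters shown in the create_bucket line above, after which the helper waits (about 53 seconds here) for memcached to accept set operations. A hedged sketch of the same request:

import requests

# Parameters copied verbatim from the create_bucket call logged above.
requests.post("http://127.0.0.1:9000/pools/default/buckets",
              data={"name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
                    "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
                    "flushEnabled": 1, "evictionPolicy": "valueOnly",
                    "compressionMode": "passive", "storageBackend": "couchstore"},
              auth=("Administrator", "asdasd")).raise_for_status()
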
2022-09-02 01:09:31 | INFO | MainProcess | test_thread | [basetestcase.setUp] ==============  basetestcase setup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 01:09:31 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:09:31 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:09:31 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:09:31 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:09:31 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:09:32 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.003423078730811, 'mem_free': 13806100480, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [basetestcase.setUp] Time to execute basesetup : 74.0373694896698
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings_internal] {'indexer.settings.storage_mode': 'plasma'} set
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [newtuq.setUp] Allowing the indexer to complete restart after setting the internal settings
2022-09-02 01:09:35 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 01:09:40 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings_internal] {'indexer.api.enableTestServer': True} set
2022-09-02 01:09:40 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:09:40 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:09:41 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:09:42 | INFO | MainProcess | test_thread | [basetestcase.load] create 2016.0 to default documents...
2022-09-02 01:09:42 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 01:09:44 | INFO | MainProcess | test_thread | [basetestcase.load] LOAD IS FINISHED
2022-09-02 01:09:44 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:09:44 | INFO | MainProcess | test_thread | [newtuq.setUp] ip:127.0.0.1 port:9000 ssh_username:Administrator
2022-09-02 01:09:44 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs.  ...
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.create_primary_index] Check if index existed in default on server 127.0.0.1
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 64.798813ms
2022-09-02 01:10:14 | ERROR | MainProcess | test_thread | [tuq_helper._is_index_in_list] Fail to get index list.  List output: {'requestID': '980cbd3d-53c8-4696-a675-f14465c62de9', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '64.798813ms', 'executionTime': '64.725294ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.create_primary_index] Create primary index
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY CREATE PRIMARY INDEX ON default 
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=CREATE+PRIMARY+INDEX+ON+default+
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 698.068271ms
2022-09-02 01:10:14 | INFO | MainProcess | test_thread | [tuq_helper.create_primary_index] Check if index is online
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 6.611979ms
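
The primary-index bootstrap above is two N1QL statements issued through the query REST API: look for '#primary' in system:indexes and create it if absent. A minimal sketch; the statements are the ones logged above, while the /query/service URL and port 8093 are the stock Query service defaults and are assumptions here (cluster_run remaps ports), and run() is a hypothetical helper:

import requests

AUTH = ("Administrator", "asdasd")
QUERY = "http://127.0.0.1:8093/query/service"   # assumed default Query port; cluster_run uses a remapped one

def run(statement):
    # The framework URL-encodes the statement exactly as shown in query_tool above.
    return requests.post(QUERY, data={"statement": statement}, auth=AUTH).json()

if not run("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]:
    run("CREATE PRIMARY INDEX ON default")
    # ...then poll system:indexes until the index reports state "online".
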
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings] {'queryport.client.waitForScheduledIndex': False} set
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings] {'indexer.allowScheduleCreateRebal': True} set
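
The two set_index_settings calls above POST small JSON documents directly to the indexer admin endpoint (port 9102 on this node, per the log). Roughly, as a sketch:

import json, requests

AUTH = ("Administrator", "asdasd")
# Settings taken from the two requests logged above.
for setting in ({"queryport.client.waitForScheduledIndex": False},
                {"indexer.allowScheduleCreateRebal": True}):
    requests.post("http://127.0.0.1:9102/settings",
                  data=json.dumps(setting), auth=AUTH).raise_for_status()
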
2022-09-02 01:10:15 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:10:16 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY CREATE INDEX `employeee8b2aae915d9414990a49f0525cd8004job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
2022-09-02 01:10:16 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=CREATE+INDEX+%60employeee8b2aae915d9414990a49f0525cd8004job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
2022-09-02 01:10:16 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 53.747941ms
2022-09-02 01:10:16 | INFO | MainProcess | test_thread | [base_gsi.async_build_index] BUILD INDEX on default(employeee8b2aae915d9414990a49f0525cd8004job_title) USING GSI
2022-09-02 01:10:17 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY BUILD INDEX on default(employeee8b2aae915d9414990a49f0525cd8004job_title) USING GSI
2022-09-02 01:10:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=BUILD+INDEX+on+default%28employeee8b2aae915d9414990a49f0525cd8004job_title%29+USING+GSI
2022-09-02 01:10:17 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 26.971863ms
2022-09-02 01:10:18 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employeee8b2aae915d9414990a49f0525cd8004job_title'
2022-09-02 01:10:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeee8b2aae915d9414990a49f0525cd8004job_title%27
2022-09-02 01:10:18 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 6.816037ms
2022-09-02 01:10:19 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employeee8b2aae915d9414990a49f0525cd8004job_title'
2022-09-02 01:10:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeee8b2aae915d9414990a49f0525cd8004job_title%27
2022-09-02 01:10:19 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 7.844353ms
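
The index under test is created deferred, built explicitly, and then system:indexes is polled until it reports online; the CREATE and BUILD statements below are the ones in the log, while the query endpoint and the run() helper are the same assumptions as in the primary-index sketch above:

import time
import requests

AUTH = ("Administrator", "asdasd")
QUERY = "http://127.0.0.1:8093/query/service"      # assumed default Query port (cluster_run remaps it)
run = lambda s: requests.post(QUERY, data={"statement": s}, auth=AUTH).json()

idx = "employeee8b2aae915d9414990a49f0525cd8004job_title"
# Deferred create, explicit build, then poll until online, mirroring the log above.
run("CREATE INDEX `" + idx + "` ON default(job_title) "
    "WHERE job_title IS NOT NULL USING GSI WITH {\"defer_build\": true}")
run("BUILD INDEX ON default(`" + idx + "`) USING GSI")
while run("SELECT state FROM system:indexes WHERE name = '" + idx + "'")["results"][0].get("state") != "online":
    time.sleep(1)
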
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 3.286ms
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [task.execute] {
  'requestID': '96f0510f-9dfa-4d0a-b79d-dd18ce1cb2e2',
  'signature': 'json',
  'results': [{'plan': {'#operator': 'Sequence', '~children': [
      {'#operator': 'IndexScan3', 'index': 'employeee8b2aae915d9414990a49f0525cd8004job_title',
       'index_id': '9849c2997196770d', 'index_projection': {'primary_key': True},
       'keyspace': 'default', 'namespace': 'default',
       'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3,
                                            'index_key': '`job_title`', 'low': '"Sales"'}]}],
       'using': 'gsi'},
      {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'},
      {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [
          {'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'},
          {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]},
    'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}],
  'status': 'success',
  'metrics': {'elapsedTime': '3.286ms', 'executionTime': '3.211661ms', 'resultCount': 1,
              'resultSize': 724, 'serviceLoad': 6}}
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 01:10:20 | INFO | MainProcess | Cluster_Thread | [task.check]  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
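
Verifying the EXPLAIN output only requires confirming that the plan's scan operator uses the freshly built secondary index rather than the primary one. A sketch of that check against the response shape in the plan dump above; uses_expected_index is a hypothetical helper and only inspects the top-level ~children (a nested plan would need a recursive walk):

def uses_expected_index(explain_response, index_name):
    # explain_response: the JSON body returned for the EXPLAIN statement,
    # shaped like the [task.execute] plan dump above.
    plan = explain_response["results"][0]["plan"]
    return any(op.get("#operator", "").startswith("IndexScan") and op.get("index") == index_name
               for op in plan.get("~children", []))

# e.g. uses_expected_index(resp, "employeee8b2aae915d9414990a49f0525cd8004job_title")
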
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [base_gsi.async_query_using_index] Query : SELECT * FROM default WHERE  job_title = "Sales" 
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] FROM clause ===== is default
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] WHERE clause ===== is   doc["job_title"] == "Sales" 
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] UNNEST clause ===== is None
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] SELECT clause ===== is {"*" : doc,}
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [tuq_generators._filter_full_set] -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
2022-09-02 01:10:20 | INFO | MainProcess | test_thread | [tuq_generators._filter_full_set] -->where_clause=  doc["job_title"] == "Sales" 
2022-09-02 01:10:21 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 01:10:21 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
2022-09-02 01:10:21 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
2022-09-02 01:10:21 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 148.676127ms
2022-09-02 01:10:21 | INFO | MainProcess | Cluster_Thread | [tuq_helper._verify_results]  Analyzing Actual Result
2022-09-02 01:10:21 | INFO | MainProcess | Cluster_Thread | [tuq_helper._verify_results]  Analyzing Expected Result
2022-09-02 01:10:22 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 01:10:22 | INFO | MainProcess | Cluster_Thread | [task.check]  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
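
The data query runs with scan_consistency=request_plus (visible in the query params above), which makes the indexer catch up to all mutations made before the request, so the results can be compared against the expected set computed from the loaded documents. The same request as a sketch, again assuming the default Query port:

import requests

requests.post("http://127.0.0.1:8093/query/service",          # assumed default Query port
              data={"statement": 'SELECT * FROM default WHERE job_title = "Sales"',
                    "scan_consistency": "request_plus"},
              auth=("Administrator", "asdasd"))
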
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employeee8b2aae915d9414990a49f0525cd8004job_title'
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeee8b2aae915d9414990a49f0525cd8004job_title%27
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.938238ms
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY DROP INDEX employeee8b2aae915d9414990a49f0525cd8004job_title ON default USING GSI
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=DROP+INDEX+employeee8b2aae915d9414990a49f0525cd8004job_title+ON+default+USING+GSI
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 47.528259ms
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employeee8b2aae915d9414990a49f0525cd8004job_title'
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeee8b2aae915d9414990a49f0525cd8004job_title%27
2022-09-02 01:10:23 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.411089ms
2022-09-02 01:10:23 | ERROR | MainProcess | Cluster_Thread | [tuq_helper._is_index_in_list] Fail to get index list.  List output: {'requestID': '928362ee-a9f9-43ae-b085-2d257b59ed5e', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.411089ms', 'executionTime': '5.33803ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
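
Cleanup drops the secondary index and re-queries system:indexes; the empty result set (logged as an ERROR by _is_index_in_list but expected at this point) confirms the drop. Roughly, under the same endpoint assumptions as the earlier sketches:

import requests

AUTH = ("Administrator", "asdasd")
QUERY = "http://127.0.0.1:8093/query/service"      # assumed default Query port
run = lambda s: requests.post(QUERY, data={"statement": s}, auth=AUTH).json()

idx = "employeee8b2aae915d9414990a49f0525cd8004job_title"
run("DROP INDEX " + idx + " ON default USING GSI")
# An empty result set here is the expected outcome after the drop.
assert run("SELECT * FROM system:indexes WHERE name = '" + idx + "'")["results"] == []
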
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
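
The post-test health check resolves the node's log directory via /diag/eval and greps the indexer and projector logs for panics over SSH. The same check expressed locally as a sketch; LOG_DIR is the directory reported above:

import subprocess

LOG_DIR = "/opt/build/ns_server/logs/n_0"     # directory returned by /diag/eval above
for component in ("indexer", "projector"):
    # Same zgrep pipeline the test runs over SSH.
    cmd = 'zgrep "panic" "%s"/%s.log* | wc -l' % (LOG_DIR, component)
    count = int(subprocess.check_output(cmd, shell=True).strip() or 0)
    assert count == 0, "%s reported %d panic lines" % (component, count)
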
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [tuq_helper.drop_primary_index] CHECK FOR PRIMARY INDEXES
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [tuq_helper.drop_primary_index] DROP PRIMARY INDEX ON default USING GSI
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 6.905656ms
2022-09-02 01:10:24 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY DROP PRIMARY INDEX ON default USING GSI
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 36.330959ms
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 14.44113006404691, 'mem_free': 13709963264, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:10:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:10:26 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:10:26 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 01:10:26 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 01:10:26 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 01:10:29 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 01:10:29 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 127.0.0.1
2022-09-02 01:10:30 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2022-09-02 01:10:30 | INFO | MainProcess | test_thread | [on_prem_rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2022-09-02 01:10:30 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2022-09-02 01:10:30 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:10:30 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 01:10:31 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [basetestcase.tearDown] b'"User was not found."'
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 01:10:31 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_1
ok

----------------------------------------------------------------------
Ran 1 test in 129.935s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_2

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,delete_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'delete_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 2, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_2'}
[2022-09-02 01:10:31,517] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:10:31,517] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:10:31,774] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:10:31,805] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:10:31,893] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:10:31,893] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #2 test_multi_create_query_explain_drop_index==============
[2022-09-02 01:10:31,894] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:10:32,546] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:10:32,573] - [task:164] INFO -  {'uptime': '276', 'memoryTotal': 15466930176, 'memoryFree': 13729300480, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:10:32,600] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:10:32,601] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:10:32,601] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:10:32,634] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:10:32,671] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:10:32,671] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:10:32,702] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:10:32,703] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:10:32,703] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:10:32,704] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:10:32,756] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:10:32,759] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:10:32,760] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:10:33,024] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:10:33,025] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:10:33,093] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:10:33,094] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:10:33,123] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:33,149] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:33,178] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:10:33,302] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:10:33,328] - [task:164] INFO -  {'uptime': '274', 'memoryTotal': 15466930176, 'memoryFree': 13698121728, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:10:33,354] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:10:33,393] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:10:33,393] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:10:33,444] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:10:33,449] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:10:33,449] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:10:33,694] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:10:33,695] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:10:33,763] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:10:33,764] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:10:33,793] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:33,818] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:33,847] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:10:33,971] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:10:33,998] - [task:164] INFO -  {'uptime': '274', 'memoryTotal': 15466930176, 'memoryFree': 13722624000, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:10:34,023] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:10:34,052] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:10:34,052] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:10:34,103] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:10:34,106] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:10:34,106] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:10:34,354] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:10:34,355] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:10:34,421] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:10:34,422] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:10:34,451] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:34,477] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:34,505] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:10:34,621] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:10:34,648] - [task:164] INFO -  {'uptime': '269', 'memoryTotal': 15466930176, 'memoryFree': 13687599104, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:10:34,675] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:10:34,702] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:10:34,702] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:10:34,755] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:10:34,758] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:10:34,758] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:10:35,027] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:10:35,028] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:10:35,099] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:10:35,100] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:10:35,130] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:35,156] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:10:35,187] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:10:35,288] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:10:35,692] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:10:40,696] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:10:40,781] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:10:40,787] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:10:40,788] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:10:41,060] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:10:41,061] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:10:41,127] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:10:41,128] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:10:41,128] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:10:41,398] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:10:41,452] - [on_prem_rest_client:3047] INFO - 0.05 seconds to create bucket default
[2022-09-02 01:10:41,453] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:11:31,346] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:11:31,673] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:11:31,931] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:11:31,933] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #2 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:11:31,986] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:11:31,986] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:11:32,233] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:11:32,237] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:11:32,238] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:11:32,478] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:11:32,483] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:11:32,483] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:11:32,730] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:11:32,735] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:11:32,735] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:11:33,034] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:11:36,604] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:11:36,604] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.787802852965508, 'mem_free': 13720887296, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:11:36,604] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:11:36,605] - [basetestcase:467] INFO - Time to execute basesetup : 65.08954310417175
[2022-09-02 01:11:36,655] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:11:36,655] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:11:36,707] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:11:36,707] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:11:36,759] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:11:36,759] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:11:36,816] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:11:36,816] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:11:36,867] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:11:36,926] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:11:36,926] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:11:36,927] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:11:41,938] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:11:41,942] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:11:41,943] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:11:42,192] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:11:43,182] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:11:43,365] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:11:46,030] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:11:46,046] - [newtuq:85] INFO - {'delete': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2022-09-02 01:11:46,714] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:11:46,714] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:11:46,715] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:12:16,744] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:12:16,773] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:12:16,799] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:12:16,866] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 64.87068ms
[2022-09-02 01:12:16,866] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '2db2ce0b-a130-45db-9e8f-56a5ef146d45', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '64.87068ms', 'executionTime': '64.808477ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:12:16,866] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:12:16,893] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:12:16,919] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:12:17,665] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 744.447219ms
[2022-09-02 01:12:17,666] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:12:17,730] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:12:17,767] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:12:17,775] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.038754ms
[2022-09-02 01:12:17,988] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:12:18,023] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:12:18,036] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:12:18,038] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:12:18,053] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
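
Note: the two indexer settings above are plain JSON POSTs to the indexer admin endpoint shown in the request lines (127.0.0.1:9102/settings in this cluster_run layout). A minimal sketch, assuming the same Administrator credentials are accepted by that endpoint:

# Sketch: push the two indexer settings from the log to the indexer admin endpoint.
import requests

for payload in (
    {"queryport.client.waitForScheduledIndex": False},
    {"indexer.allowScheduleCreateRebal": True},
):
    r = requests.post("http://127.0.0.1:9102/settings",
                      auth=("Administrator", "asdasd"),  # assumed credentials
                      json=payload)
    r.raise_for_status()
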
[2022-09-02 01:12:18,119] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:12:18,943] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee9165eeaaeab94792bda31b5dd0fe9c04job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:12:18,969] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee9165eeaaeab94792bda31b5dd0fe9c04job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:12:19,020] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 48.052914ms
[2022-09-02 01:12:19,020] - [base_gsi:282] INFO - BUILD INDEX on default(employee9165eeaaeab94792bda31b5dd0fe9c04job_title) USING GSI
[2022-09-02 01:12:20,050] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee9165eeaaeab94792bda31b5dd0fe9c04job_title) USING GSI
[2022-09-02 01:12:20,078] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee9165eeaaeab94792bda31b5dd0fe9c04job_title%29+USING+GSI
[2022-09-02 01:12:20,104] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 24.470856ms
[2022-09-02 01:12:21,135] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee9165eeaaeab94792bda31b5dd0fe9c04job_title'
[2022-09-02 01:12:21,165] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee9165eeaaeab94792bda31b5dd0fe9c04job_title%27
[2022-09-02 01:12:21,175] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.068023ms
[2022-09-02 01:12:22,205] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee9165eeaaeab94792bda31b5dd0fe9c04job_title'
[2022-09-02 01:12:22,231] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee9165eeaaeab94792bda31b5dd0fe9c04job_title%27
[2022-09-02 01:12:22,239] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.163272ms
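
Note: the sequence above (CREATE INDEX ... WITH defer_build, then BUILD INDEX, then repeated SELECTs against system:indexes) is the usual deferred-build lifecycle; the earlier "Fail to get index list" ERROR is just the helper reporting an empty result set before the primary index existed. A hedged sketch of the same flow follows; the query-service URL, credentials, and the run_query() helper are assumptions for illustration, not the harness's own code:

# Sketch of the deferred-build index lifecycle seen in the log.
import time
import requests

# NOTE: 8093 is the standard query REST port; cluster_run layouts like the one in
# this log map ports differently, so treat URL and credentials as assumptions.
QUERY_URL = "http://127.0.0.1:8093/query/service"
AUTH = ("Administrator", "asdasd")

def run_query(statement):
    # POST a N1QL statement to the query service and return the decoded JSON response.
    r = requests.post(QUERY_URL, auth=AUTH, data={"statement": statement})
    r.raise_for_status()
    return r.json()

idx = "employee9165eeaaeab94792bda31b5dd0fe9c04job_title"

# Create the index deferred, kick off the build, then poll until it is online.
run_query(f"CREATE INDEX `{idx}` ON default(job_title) "
          "WHERE job_title IS NOT NULL USING GSI "
          'WITH {"defer_build": true}')
run_query(f"BUILD INDEX ON default(`{idx}`) USING GSI")

while True:
    rows = run_query(f"SELECT * FROM system:indexes WHERE name = '{idx}'")["results"]
    # system:indexes rows carry a 'state' field (deferred/building/online).
    if rows and rows[0]["indexes"]["state"] == "online":
        break
    time.sleep(1)
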
[2022-09-02 01:12:22,719] - [basetestcase:2772] INFO - delete 0.0 to default documents...
[2022-09-02 01:12:22,899] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:12:23,681] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:12:24,241] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:12:24,275] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:12:24,302] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 01:12:24,306] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.177274ms
[2022-09-02 01:12:24,306] - [task:3245] INFO - {'requestID': '461bda26-6abf-4502-93db-a516f08ea405', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee9165eeaaeab94792bda31b5dd0fe9c04job_title', 'index_id': '6a903eb47c4550b', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.177274ms', 'executionTime': '2.115784ms', 'resultCount': 1, 'resultSize': 723, 'serviceLoad': 6}}
[2022-09-02 01:12:24,307] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:12:24,307] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
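
Note: the EXPLAIN verification above boils down to checking that the plan contains an IndexScan3 operator over the freshly built secondary index (visible in the plan JSON printed two lines up). A small sketch of that check; `explain` is assumed to be the decoded JSON of the EXPLAIN response:

# Sketch: does the EXPLAIN plan scan the expected index via IndexScan3?
def plan_uses_index(explain, idx):
    # explain: decoded JSON of an EXPLAIN response, shaped like the one printed in the log above.
    plan = explain["results"][0]["plan"]
    scans = [op for op in plan["~children"] if op.get("#operator") == "IndexScan3"]
    return bool(scans) and scans[0]["index"] == idx

# plan_uses_index(explain, "employee9165eeaaeab94792bda31b5dd0fe9c04job_title") -> True
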
[2022-09-02 01:12:24,307] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:12:24,308] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:12:24,308] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 01:12:24,308] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:12:24,308] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:12:24,309] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:12:24,309] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:12:25,308] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:12:25,339] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:12:25,365] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 01:12:25,484] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 109.393453ms
[2022-09-02 01:12:25,484] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:12:25,485] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:12:26,297] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:12:26,297] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
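
Note: the data query above is sent with scan_consistency=request_plus (visible in the query params), so the index scan waits until it has caught up with all mutations received up to the time of the request before returning rows, which is what allows the result to be compared against the freshly loaded documents. A self-contained sketch of the same request, with the query-service URL again an assumption:

# Sketch: run the verification query with request_plus consistency.
import requests

QUERY_URL = "http://127.0.0.1:8093/query/service"  # assumed; cluster_run ports differ
r = requests.post(QUERY_URL, auth=("Administrator", "asdasd"), data={
    "statement": 'SELECT * FROM default WHERE job_title = "Sales"',
    "scan_consistency": "request_plus",
})
rows = r.json()["results"]
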
[2022-09-02 01:12:27,327] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee9165eeaaeab94792bda31b5dd0fe9c04job_title'
[2022-09-02 01:12:27,353] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee9165eeaaeab94792bda31b5dd0fe9c04job_title%27
[2022-09-02 01:12:27,361] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.559894ms
[2022-09-02 01:12:27,386] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee9165eeaaeab94792bda31b5dd0fe9c04job_title ON default USING GSI
[2022-09-02 01:12:27,412] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee9165eeaaeab94792bda31b5dd0fe9c04job_title+ON+default+USING+GSI
[2022-09-02 01:12:27,450] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 35.480275ms
[2022-09-02 01:12:27,488] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee9165eeaaeab94792bda31b5dd0fe9c04job_title'
[2022-09-02 01:12:27,514] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee9165eeaaeab94792bda31b5dd0fe9c04job_title%27
[2022-09-02 01:12:27,520] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.31601ms
[2022-09-02 01:12:27,521] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'eb3200d2-6814-4c8c-a325-eed24334164d', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.31601ms', 'executionTime': '5.250959ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:12:27,627] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:12:27,631] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:27,631] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:27,967] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:28,023] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:12:28,024] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:12:28,087] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:12:28,144] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:12:28,147] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:28,147] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:28,479] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:28,537] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:12:28,538] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:12:28,598] - [remote_util:3399] INFO - command executed successfully with Administrator
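
Note: the two zgrep commands above are the post-test sanity check for indexer/projector panics: the harness asks ns_server for its log directory via /diag/eval and then counts "panic" occurrences in indexer.log* and projector.log*. A local sketch of the same check (log directory copied from the /diag/eval output; on a remote node the harness runs this over SSH instead):

# Sketch: count "panic" occurrences in the indexer and projector logs, mirroring the zgrep calls above.
import subprocess

log_dir = "/opt/build/ns_server/logs/n_0"
for component in ("indexer", "projector"):
    cmd = f'zgrep "panic" "{log_dir}"/{component}.log* | wc -l'
    count = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.strip()
    print(component, count)
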
[2022-09-02 01:12:28,656] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:12:28,657] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:12:28,657] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:12:28,686] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:12:28,712] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:12:28,719] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.523816ms
[2022-09-02 01:12:28,745] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:12:28,771] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:12:28,830] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 56.760175ms
[2022-09-02 01:12:28,916] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:12:28,916] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 7.027874970142705, 'mem_free': 13610086400, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:12:28,916] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:12:28,920] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:28,920] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:29,253] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:29,258] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:29,258] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:29,589] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:29,595] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:29,595] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:30,151] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:30,159] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:30,160] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:30,764] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:34,908] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #2 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:12:35,059] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:12:35,825] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:12:35,854] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:12:35,854] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:12:35,908] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:12:35,962] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:12:36,014] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:12:36,015] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:12:36,094] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:12:36,095] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:12:36,121] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:12:36,261] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:12:36,261] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:12:36,288] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:12:36,318] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:12:36,318] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:12:36,346] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:12:36,371] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:12:36,372] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:12:36,399] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:12:36,424] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:12:36,424] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:12:36,450] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:12:36,451] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #2 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:12:36,451] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:12:36,451] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_2
ok

----------------------------------------------------------------------
Ran 1 test in 124.989s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_3

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,update_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'update_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 3, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_3'}
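
Note: the "Test Input params" dictionary above is essentially the key=value pairs from the -p and -t arguments of the ./testrunner command, merged with bookkeeping fields the harness adds (case_number, logs_folder, and so on). Purely as an illustration of that mapping, and not testrunner's actual parser, the comma-separated test arguments can be split like this:

# Illustration only (not testrunner's real parser): turn the comma-separated
# key=value arguments from the -t invocation above into a dict.
raw = ("groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,"
       "use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,"
       "update_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi")
params = dict(item.split("=", 1) for item in raw.split(","))
print(params["scan_consistency"])  # request_plus
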
[2022-09-02 01:12:36,549] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:36,549] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:36,848] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:36,878] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:12:36,957] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:12:36,957] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #3 test_multi_create_query_explain_drop_index==============
[2022-09-02 01:12:36,958] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:12:37,578] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:12:37,605] - [task:164] INFO -  {'uptime': '401', 'memoryTotal': 15466930176, 'memoryFree': 13610647552, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:12:37,631] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:12:37,631] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:12:37,631] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:12:37,661] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:12:37,701] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:12:37,701] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:12:37,730] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:12:37,731] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
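
Note: the 400 from /node/controller/setupServices above is expected on re-initialization: the node was already provisioned with kv/index/n1ql by the previous test and ns_server refuses to change services afterwards, so the helper logs the error and carries on instead of failing the test. A sketch of the same tolerant call, with the endpoint and form fields copied from the logged request:

# Sketch: initialize node services but tolerate the "already provisioned" 400, as the harness does above.
import requests

r = requests.post(
    "http://127.0.0.1:9000/node/controller/setupServices",
    auth=("Administrator", "asdasd"),
    data={"hostname": "127.0.0.1:9000", "user": "Administrator",
          "password": "asdasd", "services": "kv,index,n1ql"},
)
if r.status_code == 400 and "cannot change node services" in r.text:
    pass  # node already provisioned; not treated as a failure
else:
    r.raise_for_status()
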
[2022-09-02 01:12:37,731] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:12:37,731] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:12:37,780] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:12:37,784] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:37,784] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:38,101] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:38,102] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:12:38,177] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:12:38,178] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:12:38,207] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:38,233] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:38,262] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:12:38,382] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:12:38,408] - [task:164] INFO -  {'uptime': '399', 'memoryTotal': 15466930176, 'memoryFree': 13669732352, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:12:38,434] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:12:38,463] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:12:38,463] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:12:38,514] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:12:38,517] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:38,517] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:38,814] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:38,815] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:12:38,894] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:12:38,896] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:12:38,926] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:38,953] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:38,980] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:12:39,096] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:12:39,123] - [task:164] INFO -  {'uptime': '400', 'memoryTotal': 15466930176, 'memoryFree': 13670129664, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:12:39,149] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:12:39,179] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:12:39,179] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:12:39,231] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:12:39,236] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:39,236] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:39,536] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:39,537] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:12:39,615] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:12:39,616] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:12:39,645] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:39,671] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:39,698] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:12:39,820] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:12:39,849] - [task:164] INFO -  {'uptime': '395', 'memoryTotal': 15466930176, 'memoryFree': 13669732352, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:12:39,879] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:12:39,909] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:12:39,909] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:12:39,965] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:12:39,968] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:39,968] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:40,288] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:40,289] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:12:40,369] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:12:40,370] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:12:40,401] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:40,428] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:12:40,459] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
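
Note: each of the four cluster_run nodes (ports 9000-9003) goes through the same initialization above: set the memory quota on pools/default (plus indexMemoryQuota on the index node), set credentials and port via settings/web, enable non-local diag/eval (shown in an earlier sketch and omitted here), and set the GSI storage mode to plasma via settings/indexes. Condensed into one loop as a sketch, with the quotas taken from the logged requests:

# Sketch: the per-node initialization loop seen above for ports 9000-9003.
import requests

AUTH = ("Administrator", "asdasd")
for port in (9000, 9001, 9002, 9003):
    base = f"http://127.0.0.1:{port}"
    if port == 9000:
        # The kv/index/n1ql node also gets an index quota, as logged.
        requests.post(f"{base}/pools/default", auth=AUTH,
                      data={"indexMemoryQuota": 256}).raise_for_status()
    quota = 7650 if port == 9000 else 7906  # per-node quotas as logged
    requests.post(f"{base}/pools/default", auth=AUTH,
                  data={"memoryQuota": quota}).raise_for_status()
    requests.post(f"{base}/settings/web", auth=AUTH,
                  data={"port": port, "username": AUTH[0], "password": AUTH[1]}).raise_for_status()
    requests.post(f"{base}/settings/indexes", auth=AUTH,
                  data={"storageMode": "plasma"}).raise_for_status()
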
[2022-09-02 01:12:40,551] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:12:40,948] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:12:45,954] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:12:46,037] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:12:46,042] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:12:46,043] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:12:46,356] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:12:46,357] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:12:46,435] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:12:46,436] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:12:46,436] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:12:46,661] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:12:46,717] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:12:46,717] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:13:31,351] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:13:31,712] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:13:32,004] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:13:32,006] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #3 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:13:32,061] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:13:32,061] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:13:32,363] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:13:32,368] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:13:32,368] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:13:32,663] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:13:32,668] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:13:32,668] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:13:32,970] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:13:32,975] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:13:32,975] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:13:33,505] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:13:37,497] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:13:37,498] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.342844001379855, 'mem_free': 13672984576, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:13:37,498] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:13:37,498] - [basetestcase:467] INFO - Time to execute basesetup : 60.95177960395813
[2022-09-02 01:13:37,551] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:13:37,551] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:13:37,605] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:13:37,605] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:13:37,657] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:13:37,657] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:13:37,714] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:13:37,714] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:13:37,765] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:13:37,828] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:13:37,829] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:13:37,834] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:13:42,847] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:13:42,851] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:13:42,851] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:13:43,157] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:13:44,185] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:13:44,354] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:13:46,930] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:13:46,947] - [newtuq:85] INFO - {'update': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2022-09-02 01:13:47,793] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:13:47,793] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:13:47,794] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:14:17,816] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:14:17,845] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:14:17,871] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:14:17,940] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 67.902311ms
[2022-09-02 01:14:17,941] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '660a8bb6-4492-4722-b312-c79d134c3dc6', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '67.902311ms', 'executionTime': '67.844197ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:14:17,941] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:14:17,968] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:14:17,993] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:14:18,708] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 713.482636ms
[2022-09-02 01:14:18,709] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:14:18,756] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:14:18,791] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:14:18,799] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.739747ms
[2022-09-02 01:14:18,974] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:14:19,006] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:14:19,020] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:14:19,020] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:14:19,040] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 01:14:19,120] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:14:19,946] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeea659e2a34ee54688a2619bcea73f2a0ejob_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:14:19,973] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeea659e2a34ee54688a2619bcea73f2a0ejob_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:14:20,025] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 50.284353ms
[2022-09-02 01:14:20,026] - [base_gsi:282] INFO - BUILD INDEX on default(employeea659e2a34ee54688a2619bcea73f2a0ejob_title) USING GSI
[2022-09-02 01:14:21,056] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employeea659e2a34ee54688a2619bcea73f2a0ejob_title) USING GSI
[2022-09-02 01:14:21,083] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employeea659e2a34ee54688a2619bcea73f2a0ejob_title%29+USING+GSI
[2022-09-02 01:14:21,104] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 19.707759ms
[2022-09-02 01:14:22,137] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea659e2a34ee54688a2619bcea73f2a0ejob_title'
[2022-09-02 01:14:22,169] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea659e2a34ee54688a2619bcea73f2a0ejob_title%27
[2022-09-02 01:14:22,183] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 11.764588ms
[2022-09-02 01:14:23,212] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea659e2a34ee54688a2619bcea73f2a0ejob_title'
[2022-09-02 01:14:23,240] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea659e2a34ee54688a2619bcea73f2a0ejob_title%27
[2022-09-02 01:14:23,247] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.024392ms
[2022-09-02 01:14:23,717] - [basetestcase:2772] INFO - update 0.0 to default documents...
[2022-09-02 01:14:23,888] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:14:24,558] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:14:25,249] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:14:25,282] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:14:25,309] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 01:14:25,313] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.289971ms
[2022-09-02 01:14:25,313] - [task:3245] INFO - {'requestID': 'dedcafec-d518-4827-8a0f-b545a4d25707', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeea659e2a34ee54688a2619bcea73f2a0ejob_title', 'index_id': '8bd17124786c59ee', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.289971ms', 'executionTime': '2.147313ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 01:14:25,314] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:14:25,314] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:14:25,314] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:14:25,315] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:14:25,315] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 01:14:25,315] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:14:25,316] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:14:25,316] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:14:25,316] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:14:26,315] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:14:26,345] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:14:26,372] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 01:14:26,479] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 97.84955ms
[2022-09-02 01:14:26,479] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:14:26,480] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:14:27,305] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:14:27,305] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:14:28,336] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea659e2a34ee54688a2619bcea73f2a0ejob_title'
[2022-09-02 01:14:28,362] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea659e2a34ee54688a2619bcea73f2a0ejob_title%27
[2022-09-02 01:14:28,371] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.977027ms
[2022-09-02 01:14:28,398] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeea659e2a34ee54688a2619bcea73f2a0ejob_title ON default USING GSI
[2022-09-02 01:14:28,424] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeea659e2a34ee54688a2619bcea73f2a0ejob_title+ON+default+USING+GSI
[2022-09-02 01:14:28,467] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 40.250038ms
[2022-09-02 01:14:28,505] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea659e2a34ee54688a2619bcea73f2a0ejob_title'
[2022-09-02 01:14:28,530] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea659e2a34ee54688a2619bcea73f2a0ejob_title%27
[2022-09-02 01:14:28,537] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.554095ms
[2022-09-02 01:14:28,537] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'cce3dac3-6512-427f-94ac-a31d0ab96985', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.554095ms', 'executionTime': '5.48609ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:14:28,646] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:14:28,649] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:28,650] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:29,036] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:29,092] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:14:29,092] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:14:29,164] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:14:29,220] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:14:29,223] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:29,223] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:29,630] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:29,689] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:14:29,689] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:14:29,758] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:14:29,813] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:14:29,813] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:14:29,814] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:14:29,839] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:14:29,865] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:14:29,872] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.760898ms
[2022-09-02 01:14:29,898] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:14:29,923] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:14:29,964] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 38.407567ms
[2022-09-02 01:14:30,034] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:14:30,035] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 24.95788320640968, 'mem_free': 13528358912, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:14:30,035] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:14:30,039] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:30,039] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:30,439] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:30,444] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:30,444] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:31,012] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:31,019] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:31,020] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:31,703] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:31,709] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:31,710] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:32,406] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:36,466] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #3 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:14:36,605] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:14:37,856] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:14:37,885] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:14:37,885] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:14:37,947] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:14:38,001] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:14:38,054] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:14:38,055] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:14:38,134] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:14:38,135] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:14:38,161] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:14:38,290] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:14:38,290] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:14:38,317] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:14:38,342] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:14:38,343] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:14:38,369] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:14:38,395] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:14:38,395] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:14:38,422] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:14:38,448] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:14:38,448] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:14:38,475] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:14:38,475] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #3 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:14:38,475] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:14:38,476] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 3 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_3
ok

----------------------------------------------------------------------
Ran 1 test in 121.981s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_4

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,expiry_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'expiry_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 4, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_4'}
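Editor's note: the "Test Input params" dict above is just the comma-separated key=value pairs from the `-t` test spec and the `-p` options, merged with the runner's own bookkeeping fields (ini, logs_folder, case_number, ...). A minimal sketch of that merge, using a hypothetical helper rather than testrunner's actual parser:

```python
# Hypothetical illustration (not testrunner's real parsing code): merge the
# key=value pairs from the -t spec and -p options into one params dict.
def parse_kv_pairs(spec: str) -> dict:
    params = {}
    for pair in spec.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)
            params[key] = value
    return params

test_args = ("groups=simple:equals:no_orderby_groupby:range,dataset=default,"
             "doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,"
             "doc_ops=True,expiry_ops_per=.5,run_async=True,"
             "scan_consistency=request_plus,GROUP=gsi")
cli_options = "makefile=True,gsi_type=plasma"

params = {**parse_kv_pairs(test_args), **parse_kv_pairs(cli_options)}
print(params["scan_consistency"])   # request_plus
```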
[2022-09-02 01:14:38,574] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:38,575] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:38,929] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:38,960] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:14:39,039] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:14:39,039] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #4 test_multi_create_query_explain_drop_index==============
[2022-09-02 01:14:39,040] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:14:39,604] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:14:39,632] - [task:164] INFO -  {'uptime': '523', 'memoryTotal': 15466930176, 'memoryFree': 13620613120, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:14:39,658] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:14:39,658] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:14:39,658] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:14:39,697] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:14:39,731] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:14:39,731] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:14:39,759] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:14:39,760] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:14:39,760] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:14:39,760] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:14:39,811] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:14:39,814] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:39,814] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:40,187] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:40,188] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:14:40,269] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:14:40,270] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:14:40,300] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:40,327] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:40,355] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:14:40,481] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:14:40,508] - [task:164] INFO -  {'uptime': '520', 'memoryTotal': 15466930176, 'memoryFree': 13621690368, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:14:40,534] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:14:40,561] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:14:40,562] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:14:40,613] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:14:40,617] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:40,617] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:40,995] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:40,996] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:14:41,079] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:14:41,080] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:14:41,110] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:41,136] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:41,164] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:14:41,288] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:14:41,316] - [task:164] INFO -  {'uptime': '520', 'memoryTotal': 15466930176, 'memoryFree': 13621723136, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:14:41,341] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:14:41,369] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:14:41,369] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:14:41,420] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:14:41,426] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:41,426] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:41,792] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:41,793] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:14:41,888] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:14:41,889] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:14:41,919] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:41,945] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:41,974] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:14:42,092] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:14:42,120] - [task:164] INFO -  {'uptime': '520', 'memoryTotal': 15466930176, 'memoryFree': 13621665792, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:14:42,147] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:14:42,175] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:14:42,175] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:14:42,228] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:14:42,231] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:42,232] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:42,599] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:42,600] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:14:42,681] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:14:42,683] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:14:42,713] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:42,740] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:14:42,770] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:14:42,868] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:14:43,251] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:14:48,254] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:14:48,338] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:14:48,344] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:14:48,344] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:14:48,719] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:14:48,720] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:14:48,801] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:14:48,802] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:14:48,802] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:14:48,981] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:14:49,042] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:14:49,042] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:15:31,035] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:15:31,350] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:15:31,728] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:15:31,730] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #4 test_multi_create_query_explain_drop_index ==============
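Editor's note: the bucket-create step in the setup above is a single POST to ns_server's /pools/default/buckets endpoint with exactly the form parameters logged at on_prem_rest_client:3022; the subsequent wait is the test polling until memcached accepts set operations. A minimal sketch of the REST call with `requests`, using the credentials and rest port seen throughout this cluster_run log (readiness polling and error handling omitted):

```python
import requests

# Recreate the logged bucket-create call: POST the same form parameters that
# appear in the on_prem_rest_client:3022 line above.
base = "http://127.0.0.1:9000"
auth = ("Administrator", "asdasd")

bucket_params = {
    "name": "default",
    "ramQuotaMB": 7650,
    "replicaNumber": 1,
    "bucketType": "membase",
    "replicaIndex": 1,
    "threadsNumber": 3,
    "flushEnabled": 1,
    "evictionPolicy": "valueOnly",
    "compressionMode": "passive",
    "storageBackend": "couchstore",
}
resp = requests.post(f"{base}/pools/default/buckets", data=bucket_params, auth=auth)
resp.raise_for_status()  # 202 Accepted; the bucket warms up asynchronously
```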
[2022-09-02 01:15:31,792] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:15:31,792] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:15:32,187] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:15:32,192] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:15:32,192] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:15:32,543] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:15:32,549] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:15:32,549] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:15:33,067] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:15:33,076] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:15:33,076] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:15:33,720] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:15:38,092] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:15:38,093] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.371875282837814, 'mem_free': 13630574592, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:15:38,093] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:15:38,093] - [basetestcase:467] INFO - Time to execute basesetup : 59.52130579948425
[2022-09-02 01:15:38,145] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:15:38,145] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:15:38,197] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:15:38,198] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:15:38,252] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:15:38,252] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:15:38,307] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:15:38,307] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:15:38,359] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:15:38,425] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:15:38,426] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:15:38,427] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:15:43,435] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:15:43,438] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:15:43,438] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:15:43,806] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:15:44,832] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:15:45,005] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:15:47,506] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:15:47,520] - [newtuq:85] INFO - {'expiry': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2022-09-02 01:15:48,202] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:15:48,202] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:15:48,203] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:16:18,218] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:16:18,247] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:16:18,273] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:16:18,346] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 70.535354ms
[2022-09-02 01:16:18,346] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '65adeae0-49ed-4e50-b1ad-b3bdbf6b84ca', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '70.535354ms', 'executionTime': '70.477041ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:16:18,346] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:16:18,373] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:16:18,399] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:16:19,127] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 726.32128ms
[2022-09-02 01:16:19,127] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:16:19,182] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:16:19,214] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:16:19,221] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.758074ms
[2022-09-02 01:16:19,439] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:16:19,476] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:16:19,491] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:16:19,491] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:16:19,513] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 01:16:19,591] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:16:20,400] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee0f7fdf8e18b2464d9056815d28a85471job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:16:20,427] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee0f7fdf8e18b2464d9056815d28a85471job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:16:20,479] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 50.409402ms
[2022-09-02 01:16:20,480] - [base_gsi:282] INFO - BUILD INDEX on default(employee0f7fdf8e18b2464d9056815d28a85471job_title) USING GSI
[2022-09-02 01:16:21,509] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee0f7fdf8e18b2464d9056815d28a85471job_title) USING GSI
[2022-09-02 01:16:21,535] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee0f7fdf8e18b2464d9056815d28a85471job_title%29+USING+GSI
[2022-09-02 01:16:21,558] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 20.414857ms
[2022-09-02 01:16:22,587] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee0f7fdf8e18b2464d9056815d28a85471job_title'
[2022-09-02 01:16:22,613] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee0f7fdf8e18b2464d9056815d28a85471job_title%27
[2022-09-02 01:16:22,622] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.421254ms
[2022-09-02 01:16:23,652] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee0f7fdf8e18b2464d9056815d28a85471job_title'
[2022-09-02 01:16:23,679] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee0f7fdf8e18b2464d9056815d28a85471job_title%27
[2022-09-02 01:16:23,688] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.157797ms
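Editor's note: the block above is the deferred-build pattern this test exercises: CREATE INDEX ... WITH deferred build, BUILD INDEX, then repeated SELECTs against system:indexes until the index is online. A minimal sketch of the same flow against the query service REST API, assuming a standard install where the service listens on port 8093 (the cluster_run ports in this log differ) and using the index name from the log:

```python
import time
import requests

# Sketch of the deferred-build flow logged above; statements mirror the
# RUN QUERY lines. Port 8093 is an assumption for a standard install.
QUERY_URL = "http://127.0.0.1:8093/query/service"
AUTH = ("Administrator", "asdasd")
IDX = "employee0f7fdf8e18b2464d9056815d28a85471job_title"

def run(statement):
    resp = requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# 1. create with the build deferred, 2. kick off the build,
# 3. poll system:indexes until the index reports online.
run('CREATE INDEX `' + IDX + '` ON default(job_title) '
    'WHERE job_title IS NOT NULL USING GSI WITH {"defer_build": true}')
run('BUILD INDEX ON default(`' + IDX + '`) USING GSI')

while True:
    rows = run("SELECT state FROM system:indexes WHERE name = '" + IDX + "'")["results"]
    if rows and rows[0].get("state") == "online":
        break
    time.sleep(1)
```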
[2022-09-02 01:16:24,152] - [basetestcase:2772] INFO - update 0.0 to default documents...
[2022-09-02 01:16:24,321] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:16:25,158] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:16:25,346] - [data_helper:309] INFO - dict:{'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}
[2022-09-02 01:16:25,346] - [data_helper:310] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:16:25,519] - [cluster_helper:379] INFO - Setting flush param on server {'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}, exp_pager_stime to 10 on default
[2022-09-02 01:16:25,519] - [mc_bin_client:669] INFO - setting param: exp_pager_stime 10
[2022-09-02 01:16:25,520] - [cluster_helper:393] INFO - Setting flush param on server {'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}, exp_pager_stime to 10, result: (1233776889, 0, b'')
[2022-09-02 01:16:25,691] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:16:25,719] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:16:25,746] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 01:16:25,750] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.303652ms
[2022-09-02 01:16:25,750] - [task:3245] INFO - {'requestID': 'eabd869f-4595-489b-ad3b-187b79247e46', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee0f7fdf8e18b2464d9056815d28a85471job_title', 'index_id': 'e71360e0458c589a', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.303652ms', 'executionTime': '2.231372ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 01:16:25,750] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:16:25,751] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:16:25,751] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:16:25,751] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:16:25,752] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 01:16:25,752] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:16:25,752] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:16:25,752] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:16:25,753] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:16:26,752] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:16:26,782] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:16:26,808] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 01:16:26,936] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 119.959139ms
[2022-09-02 01:16:26,937] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:16:26,937] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:16:27,754] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:16:27,754] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
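Editor's note: the two query blocks above show the verification pattern for this test: EXPLAIN the statement and confirm the plan's IndexScan3 operator references the freshly built secondary index, then run the statement itself with scan_consistency=request_plus (visible in the query params line). A minimal sketch of that check, reusing the assumed query endpoint from the previous sketch and the plan shape printed at task:3245:

```python
import requests

# Sketch of the EXPLAIN-then-query verification above. Endpoint/port are
# assumptions for a standard install, not this cluster_run layout.
QUERY_URL = "http://127.0.0.1:8093/query/service"
AUTH = ("Administrator", "asdasd")
IDX = "employee0f7fdf8e18b2464d9056815d28a85471job_title"
STATEMENT = 'SELECT * FROM default WHERE job_title = "Sales"'

plan = requests.post(QUERY_URL, data={"statement": "EXPLAIN " + STATEMENT},
                     auth=AUTH).json()["results"][0]["plan"]
scans = [op for op in plan["~children"] if op["#operator"].startswith("IndexScan")]
assert scans and scans[0]["index"] == IDX, "plan is not using the expected index"

rows = requests.post(QUERY_URL,
                     data={"statement": STATEMENT,
                           "scan_consistency": "request_plus"},
                     auth=AUTH).json()["results"]
print(len(rows), 'documents matched job_title = "Sales"')
```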
[2022-09-02 01:16:28,784] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee0f7fdf8e18b2464d9056815d28a85471job_title'
[2022-09-02 01:16:28,811] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee0f7fdf8e18b2464d9056815d28a85471job_title%27
[2022-09-02 01:16:28,819] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.071964ms
[2022-09-02 01:16:28,845] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee0f7fdf8e18b2464d9056815d28a85471job_title ON default USING GSI
[2022-09-02 01:16:28,871] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee0f7fdf8e18b2464d9056815d28a85471job_title+ON+default+USING+GSI
[2022-09-02 01:16:28,920] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 47.268434ms
[2022-09-02 01:16:28,957] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee0f7fdf8e18b2464d9056815d28a85471job_title'
[2022-09-02 01:16:28,982] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee0f7fdf8e18b2464d9056815d28a85471job_title%27
[2022-09-02 01:16:28,990] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.171105ms
[2022-09-02 01:16:28,990] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '742b1a54-a474-46a8-be98-8b9c2adfe99e', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '6.171105ms', 'executionTime': '6.045863ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:16:29,096] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:16:29,099] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:29,100] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:29,538] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:29,599] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:16:29,599] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:16:29,676] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:16:29,733] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:16:29,735] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:29,736] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:30,175] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:30,234] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:16:30,235] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:16:30,312] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:16:30,370] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:16:30,371] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:16:30,371] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:16:30,400] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:16:30,426] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:16:30,435] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.229946ms
[2022-09-02 01:16:30,462] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:16:30,487] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:16:30,533] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 44.041115ms
[2022-09-02 01:16:30,603] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:16:30,603] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 25.85868022554394, 'mem_free': 13481332736, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:16:30,603] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:16:30,608] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:30,608] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:31,035] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:31,040] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:31,041] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:31,687] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:31,698] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:31,699] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:32,483] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:32,491] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:32,491] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:33,262] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:37,987] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #4 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:16:38,130] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:16:39,001] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:16:39,031] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:16:39,031] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:16:39,088] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:16:39,145] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:16:39,205] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:16:39,206] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:16:39,306] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:16:39,307] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:16:39,333] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:16:39,472] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:16:39,472] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:16:39,500] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:16:39,529] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:16:39,529] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:16:39,557] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:16:39,584] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:16:39,584] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:16:39,611] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:16:39,636] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:16:39,637] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:16:39,664] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:16:39,664] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #4 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:16:39,665] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:16:39,665] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 4 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_4
ok

----------------------------------------------------------------------
Ran 1 test in 121.148s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_5

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,create_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'create_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 5, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_5'}
[2022-09-02 01:16:39,769] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:39,770] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:40,223] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:40,256] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:16:40,335] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:16:40,336] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #5 test_multi_create_query_explain_drop_index==============
[2022-09-02 01:16:40,336] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:16:40,796] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:16:40,823] - [task:164] INFO -  {'uptime': '644', 'memoryTotal': 15466930176, 'memoryFree': 13585154048, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:16:40,848] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:16:40,848] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:16:40,849] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:16:40,880] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:16:40,915] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:16:40,915] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:16:40,944] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:16:40,945] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:16:40,945] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:16:40,945] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:16:40,993] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:16:40,996] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:40,996] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:41,406] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:41,407] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:16:41,498] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:16:41,499] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:16:41,530] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:41,557] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:41,586] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:16:41,716] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:16:41,749] - [task:164] INFO -  {'uptime': '640', 'memoryTotal': 15466930176, 'memoryFree': 13585969152, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:16:41,782] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:16:41,811] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:16:41,811] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:16:41,866] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:16:41,869] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:41,869] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:42,345] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:42,346] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:16:42,446] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:16:42,448] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:16:42,480] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:42,507] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:42,536] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:16:42,650] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:16:42,677] - [task:164] INFO -  {'uptime': '641', 'memoryTotal': 15466930176, 'memoryFree': 13585850368, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:16:42,704] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:16:42,731] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:16:42,731] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:16:42,790] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:16:42,795] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:42,795] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:43,228] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:43,229] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:16:43,322] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:16:43,323] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:16:43,353] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:43,379] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:43,408] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:16:43,524] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:16:43,551] - [task:164] INFO -  {'uptime': '641', 'memoryTotal': 15466930176, 'memoryFree': 13586210816, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:16:43,578] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:16:43,607] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:16:43,607] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:16:43,658] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:16:43,661] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:43,661] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:44,093] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:44,094] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:16:44,184] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:16:44,186] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:16:44,215] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:44,241] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:16:44,271] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:16:44,359] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:16:44,749] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:16:49,754] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:16:49,843] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:16:49,848] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:16:49,848] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:16:50,276] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:16:50,277] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:16:50,372] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:16:50,374] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:16:50,374] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:16:51,477] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:16:51,536] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:16:51,537] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:17:30,926] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:17:31,290] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:17:31,695] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:17:31,699] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #5 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:17:31,756] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:17:31,756] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:17:32,200] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:17:32,205] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:17:32,205] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:17:32,618] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:17:32,624] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:17:32,624] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:17:33,307] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:17:33,314] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:17:33,315] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:17:34,075] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:17:39,035] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:17:39,036] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.700671873974978, 'mem_free': 13589082112, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:17:39,036] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:17:39,036] - [basetestcase:467] INFO - Time to execute basesetup : 59.26920437812805
[2022-09-02 01:17:39,087] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:17:39,087] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:17:39,140] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:17:39,140] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:17:39,193] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:17:39,193] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:17:39,246] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:17:39,246] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:17:39,297] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:17:39,359] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:17:39,360] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:17:39,360] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:17:44,369] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:17:44,372] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:17:44,373] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:17:44,792] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:17:45,788] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:17:46,035] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:17:48,450] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:17:48,466] - [newtuq:85] INFO - {'remaining': {'start': 0, 'end': 1}, 'create': {'start': 1, 'end': 2}}
[2022-09-02 01:17:49,545] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:17:49,545] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:17:49,545] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:18:19,576] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:18:19,605] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:18:19,632] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:18:19,699] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 64.761132ms
[2022-09-02 01:18:19,699] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '8c70476b-4298-43b3-9397-be0bd5d25932', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '64.761132ms', 'executionTime': '64.702797ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:18:19,700] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:18:19,727] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:18:19,753] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:18:20,497] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 741.618421ms
[2022-09-02 01:18:20,497] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:18:20,557] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:18:20,587] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:18:20,595] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.161001ms
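Every RUN QUERY line in this log goes through the query service's REST API as a form-encoded statement parameter. A minimal sketch of the create-primary-index-then-verify sequence above, assuming the standard query endpoint on port 8093 (this cluster_run layout may map the port differently):

    import requests

    QUERY_URL = "http://127.0.0.1:8093/query/service"  # assumed standard port
    AUTH = ("Administrator", "asdasd")

    def run_n1ql(statement):
        # POST the statement the same way the test's rest client does.
        resp = requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH)
        resp.raise_for_status()
        return resp.json()

    run_n1ql("CREATE PRIMARY INDEX ON default")
    check = run_n1ql("SELECT * FROM system:indexes WHERE name = '#primary'")
    assert check["status"] == "success" and check["metrics"]["resultCount"] == 1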
[2022-09-02 01:18:20,796] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:18:20,831] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:18:20,851] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:18:20,852] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:18:20,881] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
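The two settings above are posted straight to the indexer admin port rather than through ns_server. A minimal sketch reusing the URL, verb and payloads shown in the request lines (basic auth with the cluster credentials is an assumption, consistent with the rest of this log):

    import json
    import requests

    # Push the two indexer settings shown above to the indexer admin endpoint.
    settings_url = "http://127.0.0.1:9102/settings"
    for payload in (
        {"queryport.client.waitForScheduledIndex": False},
        {"indexer.allowScheduleCreateRebal": True},
    ):
        resp = requests.post(settings_url, data=json.dumps(payload),
                             auth=("Administrator", "asdasd"))
        resp.raise_for_status()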
[2022-09-02 01:18:20,941] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:18:21,755] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee3fec40b3474748acbc365106708f3f0fjob_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:18:21,781] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee3fec40b3474748acbc365106708f3f0fjob_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:18:21,830] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 47.068673ms
[2022-09-02 01:18:21,831] - [base_gsi:282] INFO - BUILD INDEX on default(employee3fec40b3474748acbc365106708f3f0fjob_title) USING GSI
[2022-09-02 01:18:22,862] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee3fec40b3474748acbc365106708f3f0fjob_title) USING GSI
[2022-09-02 01:18:22,888] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee3fec40b3474748acbc365106708f3f0fjob_title%29+USING+GSI
[2022-09-02 01:18:22,910] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 19.510255ms
[2022-09-02 01:18:23,942] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee3fec40b3474748acbc365106708f3f0fjob_title'
[2022-09-02 01:18:23,971] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee3fec40b3474748acbc365106708f3f0fjob_title%27
[2022-09-02 01:18:23,980] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.748502ms
[2022-09-02 01:18:25,011] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee3fec40b3474748acbc365106708f3f0fjob_title'
[2022-09-02 01:18:25,037] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee3fec40b3474748acbc365106708f3f0fjob_title%27
[2022-09-02 01:18:25,047] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.748963ms
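The secondary index is created deferred, built explicitly, and then system:indexes is polled until it reports online, which is what the repeated SELECTs above are doing. A minimal sketch of that sequence; run_n1ql is the helper from the earlier sketch, and the short index name here is illustrative (the run uses a generated one):

    import time

    idx = "employee_job_title_idx"  # illustrative; the test generates a long unique name
    run_n1ql("CREATE INDEX `%s` ON default(job_title) "
             'WHERE job_title IS NOT NULL USING GSI WITH {"defer_build": true}' % idx)
    run_n1ql("BUILD INDEX ON default(`%s`) USING GSI" % idx)
    while True:
        rows = run_n1ql("SELECT * FROM system:indexes WHERE name = '%s'" % idx)["results"]
        if rows and rows[0]["indexes"]["state"] == "online":
            break  # system:indexes wraps each row under the "indexes" key
        time.sleep(1)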
[2022-09-02 01:18:25,537] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:18:25,713] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:18:29,359] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:18:30,054] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:18:30,086] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:18:30,112] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 01:18:30,117] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 3.544858ms
[2022-09-02 01:18:30,117] - [task:3245] INFO - {'requestID': 'eee646ef-bad8-4340-a689-240068485639', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee3fec40b3474748acbc365106708f3f0fjob_title', 'index_id': '1fae1f352a0406eb', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '3.544858ms', 'executionTime': '3.47897ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 01:18:30,118] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:18:30,118] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
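Verification here is a structural check on the EXPLAIN output: the plan must route through the freshly built secondary index rather than the primary one. A minimal sketch of that check against the plan JSON shown above (run_n1ql and idx as in the earlier sketches):

    # Walk the EXPLAIN plan's operator tree and confirm an IndexScan3 on the
    # expected index appears somewhere in it.
    def plan_uses_index(explain_result, index_name):
        def walk(node):
            if isinstance(node, dict):
                if node.get("#operator") == "IndexScan3" and node.get("index") == index_name:
                    return True
                return any(walk(v) for v in node.values())
            if isinstance(node, list):
                return any(walk(v) for v in node)
            return False
        return walk(explain_result["results"][0]["plan"])

    explain = run_n1ql('EXPLAIN SELECT * FROM default WHERE job_title = "Sales"')
    assert plan_uses_index(explain, idx)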
[2022-09-02 01:18:30,118] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:18:30,119] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:18:30,119] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 01:18:30,119] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:18:30,120] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:18:30,120] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:18:30,120] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:18:31,119] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:18:31,148] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:18:31,173] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 01:18:31,336] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 149.576951ms
[2022-09-02 01:18:31,337] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:18:31,338] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:18:32,959] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:18:32,960] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
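The functional run of the same statement goes out with scan_consistency=request_plus (visible in the query params above), so the index scan waits for every mutation made before the query was issued; the expected result is then recomputed client-side and compared. A minimal, self-contained sketch of issuing the query with that consistency level (port 8093 again assumed):

    import requests

    resp = requests.post(
        "http://127.0.0.1:8093/query/service",  # assumed standard query port
        data={
            "statement": 'SELECT * FROM default WHERE job_title = "Sales"',
            "scan_consistency": "request_plus",  # wait for all prior mutations
        },
        auth=("Administrator", "asdasd"),
    )
    rows = resp.json()["results"]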
[2022-09-02 01:18:33,990] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee3fec40b3474748acbc365106708f3f0fjob_title'
[2022-09-02 01:18:34,016] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee3fec40b3474748acbc365106708f3f0fjob_title%27
[2022-09-02 01:18:34,023] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.595471ms
[2022-09-02 01:18:34,049] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee3fec40b3474748acbc365106708f3f0fjob_title ON default USING GSI
[2022-09-02 01:18:34,074] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee3fec40b3474748acbc365106708f3f0fjob_title+ON+default+USING+GSI
[2022-09-02 01:18:34,124] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 48.735912ms
[2022-09-02 01:18:34,157] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee3fec40b3474748acbc365106708f3f0fjob_title'
[2022-09-02 01:18:34,182] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee3fec40b3474748acbc365106708f3f0fjob_title%27
[2022-09-02 01:18:34,190] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.56573ms
[2022-09-02 01:18:34,190] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '0dc475e9-09ad-470c-8ce6-0542796b7117', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.56573ms', 'executionTime': '5.506263ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
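Cleanup drops the secondary index and re-queries system:indexes; the ERROR line above is only the helper reporting an empty result set, which is exactly what a successful drop should produce. A minimal sketch (run_n1ql and idx as before):

    # Drop the index and confirm system:indexes no longer lists it.
    run_n1ql("DROP INDEX `%s` ON default USING GSI" % idx)
    gone = run_n1ql("SELECT * FROM system:indexes WHERE name = '%s'" % idx)
    assert gone["metrics"]["resultCount"] == 0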
[2022-09-02 01:18:34,298] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:18:34,303] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:34,303] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:34,829] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:34,885] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:18:34,885] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:18:34,970] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:18:35,024] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:18:35,027] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:35,028] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:35,574] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:35,636] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:18:35,637] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:18:35,732] - [remote_util:3399] INFO - command executed successfully with Administrator
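The teardown health check greps the indexer and projector logs for panics; the log directory comes from the /diag/eval lookup above. The test runs this over SSH, but since everything in this cluster_run setup lives on 127.0.0.1, a rough local sketch of the same check looks like this:

    import subprocess

    # Count "panic" occurrences in the indexer and projector logs, mirroring
    # the zgrep | wc -l commands above; any non-zero count fails the check.
    log_dir = "/opt/build/ns_server/logs/n_0"
    for component in ("indexer", "projector"):
        cmd = 'zgrep "panic" "%s"/%s.log* | wc -l' % (log_dir, component)
        out = subprocess.check_output(cmd, shell=True).decode().strip()
        assert int(out or "0") == 0, "found panics in %s logs" % component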
[2022-09-02 01:18:35,790] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:18:35,791] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:18:35,791] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:18:35,817] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:18:35,843] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:18:35,850] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.139603ms
[2022-09-02 01:18:35,878] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:18:35,904] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:18:35,946] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 39.815113ms
[2022-09-02 01:18:36,022] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:18:36,022] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 22.79024744812186, 'mem_free': 13467607040, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:18:36,023] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:18:36,027] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:36,027] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:36,575] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:36,580] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:36,580] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:37,093] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:37,098] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:37,098] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:37,963] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:37,972] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:37,972] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:38,867] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:44,841] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #5 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:18:44,982] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:18:45,818] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:18:45,846] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:18:45,847] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:18:45,901] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:18:45,954] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:18:46,006] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:18:46,007] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:18:46,086] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:18:46,088] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:18:46,137] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:18:46,269] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:18:46,269] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:18:46,296] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:18:46,321] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:18:46,322] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:18:46,348] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:18:46,374] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:18:46,375] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:18:46,402] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:18:46,428] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:18:46,428] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:18:46,454] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:18:46,455] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #5 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:18:46,455] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:18:46,455] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 5 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_5
ok

----------------------------------------------------------------------
Ran 1 test in 126.740s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_6

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,create_ops_per=.5,delete_ops_per=.2,update_ops_per=.2,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'create_ops_per': '.5', 'delete_ops_per': '.2', 'update_ops_per': '.2', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 6, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_6'}
[2022-09-02 01:18:46,556] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:46,556] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:47,045] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:47,078] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:18:47,159] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:18:47,159] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #6 test_multi_create_query_explain_drop_index==============
[2022-09-02 01:18:47,160] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:18:47,582] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:18:47,608] - [task:164] INFO -  {'uptime': '771', 'memoryTotal': 15466930176, 'memoryFree': 13439229952, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:18:47,634] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:18:47,634] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:18:47,635] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:18:47,665] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:18:47,700] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:18:47,700] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:18:47,728] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:18:47,730] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:18:47,730] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:18:47,730] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:18:47,779] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:18:47,782] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:47,782] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:48,276] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:48,277] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:18:48,377] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:18:48,378] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:18:48,407] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:48,433] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:48,461] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:18:48,582] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:18:48,608] - [task:164] INFO -  {'uptime': '771', 'memoryTotal': 15466930176, 'memoryFree': 13538357248, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:18:48,633] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:18:48,661] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:18:48,662] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:18:48,712] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:18:48,715] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:48,716] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:49,243] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:49,244] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:18:49,347] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:18:49,348] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:18:49,378] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:49,404] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:49,433] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:18:49,561] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:18:49,589] - [task:164] INFO -  {'uptime': '766', 'memoryTotal': 15466930176, 'memoryFree': 13538992128, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:18:49,615] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:18:49,643] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:18:49,645] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:18:49,697] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:18:49,702] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:49,702] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:50,210] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:50,211] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:18:50,312] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:18:50,313] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:18:50,343] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:50,370] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:50,401] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:18:50,519] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:18:50,545] - [task:164] INFO -  {'uptime': '766', 'memoryTotal': 15466930176, 'memoryFree': 13539172352, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:18:50,571] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:18:50,599] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:18:50,599] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:18:50,653] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:18:50,656] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:50,656] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:51,196] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:51,198] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:18:51,306] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:18:51,307] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:18:51,338] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:51,363] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:18:51,393] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:18:51,482] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:18:51,857] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:18:56,862] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:18:56,955] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:18:56,959] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:18:56,959] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:18:57,517] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:18:57,518] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:18:57,628] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:18:57,629] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:18:57,630] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:18:58,604] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:18:58,663] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:18:58,664] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:19:30,994] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:19:31,378] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:19:31,764] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:19:31,768] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #6 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:19:31,831] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:19:31,831] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:19:32,354] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:19:32,360] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:19:32,360] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:19:32,957] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:19:32,969] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:19:32,970] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:19:33,850] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:19:33,858] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:19:33,858] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:19:34,740] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:19:40,114] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:19:40,115] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.770863020791771, 'mem_free': 13543673856, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:19:40,115] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:19:40,115] - [basetestcase:467] INFO - Time to execute basesetup : 53.561805963516235
[2022-09-02 01:19:40,165] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:19:40,166] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:19:40,217] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:19:40,218] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:19:40,270] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:19:40,270] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:19:40,324] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:19:40,325] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:19:40,376] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:19:40,437] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:19:40,437] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:19:40,438] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:19:45,451] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:19:45,455] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:19:45,455] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:19:45,968] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:19:47,060] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:19:47,231] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:19:49,518] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:19:49,535] - [newtuq:85] INFO - {'update': {'start': 0, 'end': 0}, 'delete': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}, 'create': {'start': 1, 'end': 2}}
[2022-09-02 01:19:50,941] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:19:50,942] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:19:50,942] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:20:20,963] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:20:20,991] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:20:21,018] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:20:21,086] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.96568ms
[2022-09-02 01:20:21,086] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'a29ef83e-0395-4285-99d1-29b18b4a02a2', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '65.96568ms', 'executionTime': '65.903584ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:20:21,086] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:20:21,113] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:20:21,140] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:20:21,825] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 682.804751ms
[2022-09-02 01:20:21,825] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:20:21,877] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:20:21,923] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:20:21,931] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.719251ms
[2022-09-02 01:20:22,131] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:20:22,164] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:20:22,180] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:20:22,181] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:20:22,199] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 01:20:22,292] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:20:23,097] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeea6b9ad7d7c17454085e4e6a24387ae67job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:20:23,124] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeea6b9ad7d7c17454085e4e6a24387ae67job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:20:23,181] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 55.361373ms
[2022-09-02 01:20:23,181] - [base_gsi:282] INFO - BUILD INDEX on default(employeea6b9ad7d7c17454085e4e6a24387ae67job_title) USING GSI
[2022-09-02 01:20:24,211] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employeea6b9ad7d7c17454085e4e6a24387ae67job_title) USING GSI
[2022-09-02 01:20:24,238] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employeea6b9ad7d7c17454085e4e6a24387ae67job_title%29+USING+GSI
[2022-09-02 01:20:24,262] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 21.6691ms
[2022-09-02 01:20:25,293] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea6b9ad7d7c17454085e4e6a24387ae67job_title'
[2022-09-02 01:20:25,320] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea6b9ad7d7c17454085e4e6a24387ae67job_title%27
[2022-09-02 01:20:25,330] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.684806ms
[2022-09-02 01:20:26,361] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea6b9ad7d7c17454085e4e6a24387ae67job_title'
[2022-09-02 01:20:26,391] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea6b9ad7d7c17454085e4e6a24387ae67job_title%27
[2022-09-02 01:20:26,399] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.748225ms
[2022-09-02 01:20:26,888] - [basetestcase:2772] INFO - update 0.0 to default documents...
[2022-09-02 01:20:27,060] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:20:28,180] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:20:28,807] - [basetestcase:2772] INFO - delete 0.0 to default documents...
[2022-09-02 01:20:28,981] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:20:30,298] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:20:30,711] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:20:31,115] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:20:34,618] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:20:35,421] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:20:35,453] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:20:35,478] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 01:20:35,482] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.840121ms
[2022-09-02 01:20:35,482] - [task:3245] INFO - {'requestID': '86ebf7cf-9d95-4f4d-b720-4ae900e9c3bb', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeea6b9ad7d7c17454085e4e6a24387ae67job_title', 'index_id': '2b14095372e2a9d2', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.840121ms', 'executionTime': '2.776617ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 01:20:35,483] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:20:35,483] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:20:35,483] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:20:35,484] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:20:35,484] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 01:20:35,484] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:20:35,485] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:20:35,485] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:20:35,485] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 01:20:36,484] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:20:36,514] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 01:20:36,539] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 01:20:36,743] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 189.454132ms
[2022-09-02 01:20:36,743] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:20:36,744] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:20:38,346] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:20:38,346] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 01:20:39,377] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea6b9ad7d7c17454085e4e6a24387ae67job_title'
[2022-09-02 01:20:39,402] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea6b9ad7d7c17454085e4e6a24387ae67job_title%27
[2022-09-02 01:20:39,409] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.555841ms
[2022-09-02 01:20:39,436] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeea6b9ad7d7c17454085e4e6a24387ae67job_title ON default USING GSI
[2022-09-02 01:20:39,461] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeea6b9ad7d7c17454085e4e6a24387ae67job_title+ON+default+USING+GSI
[2022-09-02 01:20:39,507] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 43.95276ms
[2022-09-02 01:20:39,539] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeea6b9ad7d7c17454085e4e6a24387ae67job_title'
[2022-09-02 01:20:39,565] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeea6b9ad7d7c17454085e4e6a24387ae67job_title%27
[2022-09-02 01:20:39,572] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.950449ms
[2022-09-02 01:20:39,572] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '1326fe05-4f51-45d8-92b1-8b75b004a0fd', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.950449ms', 'executionTime': '5.881646ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:20:39,690] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:20:39,695] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:39,695] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:40,343] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:40,400] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:20:40,400] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:20:40,507] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:20:40,566] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:20:40,569] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:40,569] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:41,228] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:41,283] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:20:41,283] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:20:41,423] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:20:41,497] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:20:41,497] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:20:41,498] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:20:41,531] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:20:41,568] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:20:41,576] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.416972ms
[2022-09-02 01:20:41,604] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:20:41,630] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:20:41,685] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 53.300258ms
[2022-09-02 01:20:41,763] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:20:41,763] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 33.49064045418881, 'mem_free': 13333721088, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:20:41,764] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:20:41,768] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:41,768] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:42,427] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:42,432] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:42,432] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:43,524] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:43,533] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:43,533] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:44,680] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:44,688] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:44,689] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:45,806] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:52,271] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #6 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:20:52,419] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:20:52,962] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:20:52,991] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:20:52,992] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:20:53,055] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:20:53,113] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:20:53,166] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:20:53,167] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:20:53,247] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:20:53,248] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:20:53,276] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:20:53,420] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:20:53,420] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:20:53,451] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:20:53,478] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:20:53,478] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:20:53,505] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:20:53,540] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:20:53,541] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:20:53,573] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:20:53,599] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:20:53,599] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:20:53,626] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:20:53,626] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #6 test_multi_create_query_explain_drop_index ==============
[2022-09-02 01:20:53,626] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:20:53,627] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_6
ok

----------------------------------------------------------------------
Ran 1 test in 127.126s

OK
test_multi_create_drop_index (gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_7

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index,groups=simple,dataset=default,doc-per-day=1,cbq_version=sherlock,skip_build_tuq=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple', 'dataset': 'default', 'doc-per-day': '1', 'cbq_version': 'sherlock', 'skip_build_tuq': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 7, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_7'}
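
The dict above is the flattened form of the -p and -t key=value pairs from the ./testrunner command line, plus a few bookkeeping fields (case_number, logs_folder and so on). A rough, illustrative sketch of that flattening follows; this is a hypothetical helper, not testrunner's actual parser.

# Illustrative only: turn "k1=v1,k2=v2" option strings into a params dict
# like the "Test Input params" block above.
def parse_kv(option_string):
    params = {}
    for pair in option_string.split(","):
        key, _, value = pair.partition("=")
        params[key] = value
    return params

params = parse_kv("makefile=True,gsi_type=plasma")
params.update(parse_kv("groups=simple,dataset=default,doc-per-day=1"))
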
[2022-09-02 01:20:53,763] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:53,763] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:54,402] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:54,436] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:20:54,521] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:20:54,522] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #7 test_multi_create_drop_index==============
[2022-09-02 01:20:54,522] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:20:54,791] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:20:54,819] - [task:164] INFO -  {'uptime': '898', 'memoryTotal': 15466930176, 'memoryFree': 13467324416, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:20:54,846] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:20:54,846] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:20:54,846] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:20:54,886] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:20:54,921] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:20:54,921] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:20:54,951] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:20:54,952] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:20:54,952] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:20:54,953] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:20:55,005] - [on_prem_rest_client:1159] INFO - --> status:True
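
The setupServices 400 above is expected on a node that is already provisioned, and the helper deliberately treats it as a non-failure. A condensed sketch of this per-node init sequence over the REST endpoints shown in the log (quotas, services, web credentials), with error handling simplified:

# Sketch of the node init above: set quotas, declare services (tolerating
# "cannot change node services after cluster is provisioned"), then set the
# REST credentials/port via settings/web.
import requests

AUTH = ("Administrator", "asdasd")
BASE = "http://127.0.0.1:9000"

requests.post(BASE + "/pools/default", data={"indexMemoryQuota": 256}, auth=AUTH)
requests.post(BASE + "/pools/default", data={"memoryQuota": 7650}, auth=AUTH)

r = requests.post(BASE + "/node/controller/setupServices",
                  data={"hostname": "127.0.0.1:9000", "user": "Administrator",
                        "password": "asdasd", "services": "kv,index,n1ql"},
                  auth=AUTH)
if r.status_code == 400 and b"provisioned" in r.content:
    pass  # already provisioned; the log treats this as benign

requests.post(BASE + "/settings/web",
              data={"port": 9000, "username": "Administrator", "password": "asdasd"},
              auth=AUTH)
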
[2022-09-02 01:20:55,008] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:55,008] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:55,597] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:55,598] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:20:55,715] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:20:55,716] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:20:55,747] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:55,774] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:55,802] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
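
The curl run over SSH above is simply a POST to /diag/eval, done remotely so that allow_nonlocal_eval can be flipped before any non-local diag/eval calls are made; the libcurl "no version information available" message is stderr noise from the bundled curl, and the subsequent b'ok' shows the call succeeded. A sketch of the same three calls over plain HTTP:

# Sketch of the node-0 settings above: allow non-local diag/eval, read the
# cluster compat version, and select the plasma storage mode for GSI.
import requests

AUTH = ("Administrator", "asdasd")
BASE = "http://127.0.0.1:9000"

requests.post(BASE + "/diag/eval",
              data="ns_config:set(allow_nonlocal_eval, true).", auth=AUTH)
compat = requests.post(BASE + "/diag/eval",
                       data="cluster_compat_mode:get_compat_version().",
                       auth=AUTH).text            # "[7,2]" in this run
requests.post(BASE + "/settings/indexes", data={"storageMode": "plasma"}, auth=AUTH)
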
[2022-09-02 01:20:55,923] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:20:55,953] - [task:164] INFO -  {'uptime': '896', 'memoryTotal': 15466930176, 'memoryFree': 13462507520, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:20:55,979] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:20:56,010] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:20:56,010] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:20:56,061] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:20:56,064] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:56,065] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:56,638] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:56,640] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:20:56,755] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:20:56,756] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:20:56,785] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:56,811] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:56,840] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:20:56,953] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:20:56,979] - [task:164] INFO -  {'uptime': '897', 'memoryTotal': 15466930176, 'memoryFree': 13466570752, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:20:57,007] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:20:57,034] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:20:57,034] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:20:57,085] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:20:57,090] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:57,090] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:57,662] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:57,664] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:20:57,778] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:20:57,779] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:20:57,809] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:57,839] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:57,868] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:20:57,988] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:20:58,017] - [task:164] INFO -  {'uptime': '897', 'memoryTotal': 15466930176, 'memoryFree': 13484314624, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:20:58,043] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:20:58,071] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:20:58,071] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:20:58,122] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:20:58,125] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:20:58,125] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:20:58,701] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:20:58,703] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:20:58,818] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:20:58,819] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:20:58,850] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:58,878] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:20:58,907] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:20:58,997] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:20:59,391] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:21:04,393] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:21:04,481] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:21:04,486] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:21:04,486] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:21:05,089] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:21:05,090] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:21:05,205] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:21:05,206] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:21:05,206] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:21:06,110] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:21:06,166] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:21:06,166] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:21:30,961] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:21:31,272] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:21:31,758] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
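
Bucket creation is a single POST followed by a readiness wait (roughly 25 seconds here before the direct clients connect). A sketch of that step; the readiness check below is one plausible way to wait and is an assumption, since the helper's actual wait goes through memcached set ops.

# Sketch: create the "default" bucket with the parameters logged above, then
# poll the bucket's node status until healthy (assumed readiness check).
import time
import requests

AUTH = ("Administrator", "asdasd")
BASE = "http://127.0.0.1:9000"

requests.post(BASE + "/pools/default/buckets", auth=AUTH, data={
    "name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
    "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
    "flushEnabled": 1, "evictionPolicy": "valueOnly",
    "compressionMode": "passive", "storageBackend": "couchstore",
})

while True:
    info = requests.get(BASE + "/pools/default/buckets/default", auth=AUTH).json()
    if info.get("nodes") and all(n.get("status") == "healthy" for n in info["nodes"]):
        break
    time.sleep(2)
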
[2022-09-02 01:21:31,761] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #7 test_multi_create_drop_index ==============
[2022-09-02 01:21:31,963] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:21:31,963] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:21:32,554] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:21:32,559] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:21:32,559] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:21:33,452] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:21:33,461] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:21:33,461] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:21:34,499] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:21:34,508] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:21:34,508] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:21:35,603] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:21:41,654] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:21:41,655] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 7.408903637594386, 'mem_free': 13485780992, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:21:41,655] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:21:41,656] - [basetestcase:467] INFO - Time to execute basesetup : 47.89542603492737
[2022-09-02 01:21:41,709] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:21:41,709] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:21:41,763] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:21:41,764] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:21:41,817] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:21:41,817] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:21:41,871] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:21:41,872] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:21:41,924] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:21:41,986] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:21:41,986] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:21:41,986] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:21:46,996] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:21:47,000] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:21:47,000] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:21:47,579] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:21:48,645] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:21:48,815] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:21:51,035] - [basetestcase:2785] INFO - LOAD IS FINISHED
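
The loader writes 2016 employee-style documents through a direct memcached client on port 12000. Purely for illustration, documents of the same shape could be written with N1QL UPSERTs; the field names below are inferred from the indexes created later in this test, and the endpoint assumptions are the same as in the earlier sketches.

# Illustrative load of employee-style docs (job_title, join_yr inferred from
# the index definitions below); the real loader uses a direct memcached
# client rather than N1QL.
import json
import requests

QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption, as before
AUTH = ("Administrator", "asdasd")

def run_query(statement):
    return requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH).json()

for i in range(2016):
    doc = {"name": "employee-%d" % i, "job_title": "Engineer",
           "join_yr": 2010 + (i % 5)}
    run_query("UPSERT INTO default (KEY, VALUE) VALUES ('emp_%d', %s)"
              % (i, json.dumps(doc)))
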
[2022-09-02 01:21:51,113] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:21:51,113] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:21:51,113] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:22:21,142] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:22:21,171] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:22:21,199] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:22:21,266] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 64.839829ms
[2022-09-02 01:22:21,266] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'ef8a11eb-3d0a-493a-a2eb-12cf916ceb90', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '64.839829ms', 'executionTime': '64.766901ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:22:21,266] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:22:21,295] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:22:21,325] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:22:22,025] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 698.763791ms
[2022-09-02 01:22:22,026] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:22:22,101] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:22:22,133] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:22:22,142] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.949362ms
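
The primary index here is not deferred, so the helper simply polls system:indexes until the #primary entry reports online. A compact sketch of that create-and-wait loop, using the same assumed query endpoint as the earlier sketches:

# Sketch: create the primary index, then poll system:indexes until its state
# is "online" (the "Check if index is online" step above).
import time
import requests

QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption
AUTH = ("Administrator", "asdasd")

def run_query(statement):
    return requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH).json()

run_query("CREATE PRIMARY INDEX ON default")
while True:
    rows = run_query("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]
    if rows and rows[0]["indexes"].get("state") == "online":
        break
    time.sleep(1)
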
[2022-09-02 01:22:22,425] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:22:22,515] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:22:22,531] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:22:22,532] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:22:22,543] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 01:22:22,543] - [base_gsi:326] INFO - []
[2022-09-02 01:22:23,319] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeecad8896f8a4843cfbcd00612090066eajob_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:22:23,346] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeecad8896f8a4843cfbcd00612090066eajob_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:22:23,415] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 67.184787ms
[2022-09-02 01:22:23,457] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeecad8896f8a4843cfbcd00612090066eajoin_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:22:23,485] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeecad8896f8a4843cfbcd00612090066eajoin_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:22:23,530] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 42.850737ms
[2022-09-02 01:22:23,531] - [base_gsi:282] INFO - BUILD INDEX on default(employeecad8896f8a4843cfbcd00612090066eajob_title,employeecad8896f8a4843cfbcd00612090066eajoin_yr) USING GSI
[2022-09-02 01:22:24,560] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employeecad8896f8a4843cfbcd00612090066eajob_title,employeecad8896f8a4843cfbcd00612090066eajoin_yr) USING GSI
[2022-09-02 01:22:24,587] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employeecad8896f8a4843cfbcd00612090066eajob_title%2Cemployeecad8896f8a4843cfbcd00612090066eajoin_yr%29+USING+GSI
[2022-09-02 01:22:24,637] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 48.569925ms
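
The two CREATE INDEX statements above are deferred (WITH {'defer_build': True}) so that a single BUILD INDEX can build both together rather than running two independent builds. A sketch of the same flow, with short hypothetical index names standing in for the generated employee...job_title / ...join_yr names:

# Sketch of the deferred-build flow above (hypothetical index names).
import requests

QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption
AUTH = ("Administrator", "asdasd")

def run_query(statement):
    return requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH).json()

for stmt in (
    "CREATE INDEX `emp_job_title` ON default(job_title) "
    "WHERE job_title IS NOT NULL USING GSI WITH {'defer_build': true}",
    "CREATE INDEX `emp_join_yr` ON default(join_yr) "
    "WHERE join_yr > 2010 AND join_yr < 2014 USING GSI WITH {'defer_build': true}",
    "BUILD INDEX ON default(`emp_job_title`, `emp_join_yr`) USING GSI",
):
    run_query(stmt)
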
[2022-09-02 01:22:25,670] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajob_title'
[2022-09-02 01:22:25,697] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajob_title%27
[2022-09-02 01:22:25,708] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.740487ms
[2022-09-02 01:22:26,739] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajob_title'
[2022-09-02 01:22:26,765] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajob_title%27
[2022-09-02 01:22:26,773] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.091595ms
[2022-09-02 01:22:26,799] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajoin_yr'
[2022-09-02 01:22:26,825] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajoin_yr%27
[2022-09-02 01:22:26,828] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.524215ms
[2022-09-02 01:22:27,858] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajob_title'
[2022-09-02 01:22:27,884] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajob_title%27
[2022-09-02 01:22:27,893] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.004706ms
[2022-09-02 01:22:27,919] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeecad8896f8a4843cfbcd00612090066eajob_title ON default USING GSI
[2022-09-02 01:22:27,946] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeecad8896f8a4843cfbcd00612090066eajob_title+ON+default+USING+GSI
[2022-09-02 01:22:27,988] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 40.312554ms
[2022-09-02 01:22:28,023] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajoin_yr'
[2022-09-02 01:22:28,051] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajoin_yr%27
[2022-09-02 01:22:28,059] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.853168ms
[2022-09-02 01:22:28,087] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeecad8896f8a4843cfbcd00612090066eajoin_yr ON default USING GSI
[2022-09-02 01:22:28,113] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeecad8896f8a4843cfbcd00612090066eajoin_yr+ON+default+USING+GSI
[2022-09-02 01:22:28,153] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 37.831877ms
[2022-09-02 01:22:28,190] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajob_title'
[2022-09-02 01:22:28,218] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajob_title%27
[2022-09-02 01:22:28,226] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.840855ms
[2022-09-02 01:22:28,226] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '1d9da277-eb70-4f29-ac12-c7a15a279a5e', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.840855ms', 'executionTime': '5.773616ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:22:28,256] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecad8896f8a4843cfbcd00612090066eajoin_yr'
[2022-09-02 01:22:28,283] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecad8896f8a4843cfbcd00612090066eajoin_yr%27
[2022-09-02 01:22:28,286] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.489043ms
[2022-09-02 01:22:28,286] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'bdc09db2-d278-4323-9f9e-8d4268129609', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '1.489043ms', 'executionTime': '1.425798ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
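
The two ERROR lines above are expected at this point: the indexes were just dropped, so the system:indexes lookup legitimately returns an empty result set and the helper logs that as a failed index-list fetch. A sketch of the drop-and-verify step (hypothetical short index names again):

# Sketch: drop each secondary index and confirm system:indexes no longer
# lists it; an empty result set is the success condition here.
import requests

QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption
AUTH = ("Administrator", "asdasd")

def run_query(statement):
    return requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH).json()

for name in ("emp_job_title", "emp_join_yr"):        # hypothetical names
    run_query("DROP INDEX %s ON default USING GSI" % name)
    rows = run_query("SELECT * FROM system:indexes WHERE name = '%s'" % name)["results"]
    assert rows == [], "index %s is still present" % name
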
[2022-09-02 01:22:28,388] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:22:28,391] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:28,392] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:28,976] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:29,031] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:22:29,032] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:22:29,129] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:22:29,185] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:22:29,188] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:29,188] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:29,779] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:29,836] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:22:29,837] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:22:29,935] - [remote_util:3399] INFO - command executed successfully with Administrator
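
Teardown also greps the indexer and projector logs for panics, using the log directory returned by the diag/eval filename:absname(...) call above. A sketch of that check:

# Sketch: count "panic" occurrences in the indexer and projector logs under
# the ns_server log directory reported by diag/eval above.
import subprocess

LOG_DIR = "/opt/build/ns_server/logs/n_0"    # value returned by diag/eval above

for component in ("indexer", "projector"):
    cmd = 'zgrep "panic" "%s"/%s.log* | wc -l' % (LOG_DIR, component)
    count = int(subprocess.check_output(cmd, shell=True).decode().strip())
    assert count == 0, "found %d panic line(s) in %s logs" % (count, component)
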
[2022-09-02 01:22:29,991] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:22:29,991] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:22:29,991] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:22:30,018] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:22:30,045] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:22:30,052] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.220985ms
[2022-09-02 01:22:30,078] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:22:30,104] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:22:30,157] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 50.809927ms
[2022-09-02 01:22:30,220] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:22:30,221] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 21.27518171692686, 'mem_free': 13363523584, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:22:30,221] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:22:30,226] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:30,226] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:30,845] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:30,850] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:30,850] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:31,465] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:31,469] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:31,469] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:32,214] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:32,222] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:32,223] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:33,300] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:40,549] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #7 test_multi_create_drop_index ==============
[2022-09-02 01:22:40,705] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:22:41,934] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:22:41,962] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:22:41,963] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:22:42,023] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:22:42,078] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:22:42,131] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:22:42,132] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:22:42,212] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:22:42,213] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:22:42,239] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:22:42,372] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:22:42,372] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:22:42,400] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:22:42,426] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:22:42,427] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:22:42,453] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:22:42,479] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:22:42,479] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:22:42,506] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:22:42,532] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:22:42,532] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:22:42,558] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:22:42,559] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #7 test_multi_create_drop_index ==============
[2022-09-02 01:22:42,559] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:22:42,559] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_7
ok

----------------------------------------------------------------------
Ran 1 test in 108.851s

OK
test_multi_create_drop_index (gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_8

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index,groups=composite,dataset=default,doc-per-day=1,cbq_version=sherlock,skip_build_tuq=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'composite', 'dataset': 'default', 'doc-per-day': '1', 'cbq_version': 'sherlock', 'skip_build_tuq': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 8, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_8'}
[2022-09-02 01:22:42,639] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:42,639] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:43,283] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:43,318] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:22:43,400] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:22:43,400] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #8 test_multi_create_drop_index==============
[2022-09-02 01:22:43,401] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:22:43,666] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:22:43,694] - [task:164] INFO -  {'uptime': '1007', 'memoryTotal': 15466930176, 'memoryFree': 13486809088, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:22:43,722] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:22:43,722] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:22:43,722] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:22:43,753] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:22:43,797] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:22:43,797] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:22:43,830] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:22:43,831] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:22:43,831] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:22:43,831] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:22:43,879] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:22:43,882] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:43,882] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:44,502] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:44,503] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:22:44,628] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:22:44,629] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:22:44,662] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:44,690] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:44,720] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:22:44,853] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:22:44,882] - [task:164] INFO -  {'uptime': '1007', 'memoryTotal': 15466930176, 'memoryFree': 13486837760, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:22:44,910] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:22:44,940] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:22:44,940] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:22:44,996] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:22:44,999] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:45,000] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:45,589] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:45,590] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:22:45,706] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:22:45,707] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:22:45,737] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:45,767] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:45,798] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:22:45,922] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:22:45,948] - [task:164] INFO -  {'uptime': '1002', 'memoryTotal': 15466930176, 'memoryFree': 13485899776, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:22:45,974] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:22:46,001] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:22:46,001] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:22:46,052] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:22:46,058] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:46,058] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:46,644] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:46,645] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:22:46,761] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:22:46,762] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:22:46,792] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:46,821] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:46,849] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:22:46,969] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:22:46,995] - [task:164] INFO -  {'uptime': '1002', 'memoryTotal': 15466930176, 'memoryFree': 13486616576, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:22:47,022] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:22:47,049] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:22:47,049] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:22:47,099] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:22:47,103] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:47,103] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:47,684] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:47,685] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:22:47,801] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:22:47,802] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:22:47,836] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:47,862] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:22:47,891] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:22:47,978] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:22:48,355] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:22:53,356] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:22:53,448] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:22:53,452] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:22:53,452] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:22:54,112] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:22:54,113] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:22:54,240] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:22:54,242] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:22:54,242] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:22:55,109] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:22:55,166] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:22:55,167] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:23:31,385] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:23:31,699] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:23:32,083] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:23:32,087] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #8 test_multi_create_drop_index ==============
[2022-09-02 01:23:32,139] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:23:32,139] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:23:32,739] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:23:32,747] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:23:32,747] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:23:33,616] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:23:33,629] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:23:33,629] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:23:34,687] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:23:34,699] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:23:34,700] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:23:35,738] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:23:42,060] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:23:42,060] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.897208176394837, 'mem_free': 13482119168, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:23:42,060] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:23:42,060] - [basetestcase:467] INFO - Time to execute basesetup : 59.42408776283264
[2022-09-02 01:23:42,112] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:23:42,113] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:23:42,167] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:23:42,167] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:23:42,221] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:23:42,221] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:23:42,275] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:23:42,275] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:23:42,327] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:23:42,388] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:23:42,389] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:23:42,389] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:23:47,400] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:23:47,404] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:23:47,405] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:23:47,994] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:23:49,038] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:23:49,211] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:23:51,419] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:23:51,509] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:23:51,510] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:23:51,510] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:24:21,533] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:24:21,564] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:24:21,589] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:24:21,657] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.430181ms
[2022-09-02 01:24:21,657] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'e43dfe68-cd1f-4a08-bcf9-55289abb545e', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '65.430181ms', 'executionTime': '65.36046ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:24:21,657] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:24:21,683] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:24:21,709] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:24:22,451] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 739.968913ms
[2022-09-02 01:24:22,451] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:24:22,550] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:24:22,611] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:24:22,621] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.72953ms
[2022-09-02 01:24:22,831] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:24:22,871] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:24:22,895] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:24:22,897] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:24:22,909] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
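Editor's note: the two indexer settings above are plain JSON POSTs to the indexer admin endpoint on port 9102, exactly as the on_prem_rest_client lines show. A minimal requests-based sketch of the same calls is below; basic auth with the cluster credentials is assumed here, since the log does not print the auth header for these requests.

    import requests

    for payload in (
        {"queryport.client.waitForScheduledIndex": False},
        {"indexer.allowScheduleCreateRebal": True},
    ):
        # Same endpoint and verb as the /settings calls logged above.
        r = requests.post(
            "http://127.0.0.1:9102/settings",
            json=payload,
            auth=("Administrator", "asdasd"),
        )
        r.raise_for_status()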
[2022-09-02 01:24:22,910] - [base_gsi:326] INFO - []
[2022-09-02 01:24:23,789] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr` ON default(join_yr,job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:24:23,816] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr%60+ON+default%28join_yr%2Cjob_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:24:23,876] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 57.855982ms
[2022-09-02 01:24:23,877] - [base_gsi:282] INFO - BUILD INDEX on default(employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr) USING GSI
[2022-09-02 01:24:24,907] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr) USING GSI
[2022-09-02 01:24:24,933] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr%29+USING+GSI
[2022-09-02 01:24:24,959] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 23.532045ms
[2022-09-02 01:24:25,990] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr'
[2022-09-02 01:24:26,017] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr%27
[2022-09-02 01:24:26,026] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.263752ms
[2022-09-02 01:24:27,057] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr'
[2022-09-02 01:24:27,084] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr%27
[2022-09-02 01:24:27,092] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.099135ms
[2022-09-02 01:24:28,123] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr'
[2022-09-02 01:24:28,149] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr%27
[2022-09-02 01:24:28,157] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.480804ms
[2022-09-02 01:24:28,184] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr ON default USING GSI
[2022-09-02 01:24:28,210] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr+ON+default+USING+GSI
[2022-09-02 01:24:28,257] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 43.700392ms
[2022-09-02 01:24:28,292] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr'
[2022-09-02 01:24:28,322] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee30917312478c4f8cb1b57d6c97b1c6e2job_title_join_yr%27
[2022-09-02 01:24:28,329] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.836675ms
[2022-09-02 01:24:28,329] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '1a67a619-488e-4a51-a3d2-f9906e24cc1f', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.836675ms', 'executionTime': '5.774057ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
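Editor's note: the block above exercises the full deferred-index lifecycle: CREATE INDEX ... WITH {'defer_build': True}, BUILD INDEX, repeated polling of system:indexes, then DROP INDEX, with a final system:indexes check confirming the index is gone. A minimal sketch of that flow against the query service REST API follows; the /query/service URL and its port are assumptions (the default query port is 8093, and cluster_run deployments like this one remap ports), and the index name used here is purely illustrative.

    import time
    import requests

    QUERY = "http://127.0.0.1:8093/query/service"  # assumed query-service endpoint
    AUTH = ("Administrator", "asdasd")
    IDX = "job_title_join_yr_demo"                 # illustrative index name

    def run(stmt):
        # Submit one N1QL statement and return the decoded JSON response.
        r = requests.post(QUERY, data={"statement": stmt}, auth=AUTH)
        r.raise_for_status()
        return r.json()

    run(f"CREATE INDEX `{IDX}` ON default(join_yr, job_title) "
        "WHERE job_title IS NOT NULL USING GSI WITH {'defer_build': true}")
    run(f"BUILD INDEX ON default(`{IDX}`) USING GSI")

    # Poll system:indexes until the deferred index reports state 'online'.
    while True:
        res = run(f"SELECT state FROM system:indexes WHERE name = '{IDX}'")
        if res["results"] and res["results"][0].get("state") == "online":
            break
        time.sleep(1)

    run(f"DROP INDEX `{IDX}` ON default USING GSI")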
[2022-09-02 01:24:28,439] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:24:28,443] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:28,443] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:29,050] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:29,107] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:24:29,107] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:24:29,209] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:24:29,270] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:24:29,273] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:29,273] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:29,899] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:29,958] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:24:29,958] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:24:30,064] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:24:30,120] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:24:30,120] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:24:30,121] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:24:30,147] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:24:30,173] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:24:30,180] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.656641ms
[2022-09-02 01:24:30,207] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:24:30,233] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:24:30,297] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 61.013084ms
[2022-09-02 01:24:30,361] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:24:30,362] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 19.75666290556474, 'mem_free': 13321883648, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:24:30,362] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:24:30,367] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:30,367] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:31,000] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:31,005] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:31,005] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:32,065] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:32,078] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:32,078] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:33,211] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:33,220] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:33,220] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:34,361] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:40,765] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #8 test_multi_create_drop_index ==============
[2022-09-02 01:24:40,924] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:24:41,899] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:24:41,928] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:24:41,928] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:24:41,982] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:24:42,036] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:24:42,102] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:24:42,103] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:24:42,184] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:24:42,185] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:24:42,210] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:24:42,341] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:24:42,342] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:24:42,370] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:24:42,397] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:24:42,397] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:24:42,425] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:24:42,451] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:24:42,451] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:24:42,477] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:24:42,502] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:24:42,503] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:24:42,529] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:24:42,529] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #8 test_multi_create_drop_index ==============
[2022-09-02 01:24:42,530] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:24:42,530] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_8
ok

----------------------------------------------------------------------
Ran 1 test in 119.947s

OK
test_remove_bucket_and_query (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_9

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 9, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_9'}
[2022-09-02 01:24:42,616] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:42,616] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:43,254] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:43,288] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:24:43,366] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:24:43,367] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #9 test_remove_bucket_and_query==============
[2022-09-02 01:24:43,367] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:24:43,643] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:24:43,669] - [task:164] INFO -  {'uptime': '1127', 'memoryTotal': 15466930176, 'memoryFree': 13470298112, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:24:43,698] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:24:43,698] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:24:43,699] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:24:43,731] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:24:43,771] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:24:43,771] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:24:43,800] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:24:43,800] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:24:43,801] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:24:43,801] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:24:43,849] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:24:43,852] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:43,852] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:44,492] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:44,494] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:24:44,622] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:24:44,623] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:24:44,653] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:44,682] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:44,713] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
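Editor's note: the node n_0 initialization just logged is a short sequence of ns_server REST calls: set the index and cluster memory quotas, attempt setupServices (which returns 400 here because the node is already provisioned, and the harness treats that as non-fatal), set the admin credentials via settings/web, and set the GSI storage mode. A requests-based sketch of that sequence, using only the endpoints and values shown in the log above, might look like this; running it against an already-initialized node will reproduce the same 400 on setupServices.

    import requests

    BASE = "http://127.0.0.1:9000"        # node n_0 admin port in this cluster_run
    AUTH = ("Administrator", "asdasd")

    requests.post(f"{BASE}/pools/default", auth=AUTH, data={"indexMemoryQuota": 256})
    requests.post(f"{BASE}/pools/default", auth=AUTH, data={"memoryQuota": 7650})
    # Expected to fail with 400 once the node is provisioned, as in the log.
    requests.post(f"{BASE}/node/controller/setupServices", auth=AUTH,
                  data={"hostname": "127.0.0.1:9000", "user": "Administrator",
                        "password": "asdasd", "services": "kv,index,n1ql"})
    requests.post(f"{BASE}/settings/web", auth=AUTH,
                  data={"port": 9000, "username": "Administrator", "password": "asdasd"})
    requests.post(f"{BASE}/settings/indexes", auth=AUTH, data={"storageMode": "plasma"})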
[2022-09-02 01:24:44,844] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:24:44,873] - [task:164] INFO -  {'uptime': '1127', 'memoryTotal': 15466930176, 'memoryFree': 13469929472, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:24:44,899] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:24:44,927] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:24:44,927] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:24:44,979] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:24:44,982] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:44,982] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:45,590] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:45,591] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:24:45,708] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:24:45,710] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:24:45,743] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:45,771] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:45,801] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:24:45,917] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:24:45,944] - [task:164] INFO -  {'uptime': '1123', 'memoryTotal': 15466930176, 'memoryFree': 13469483008, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:24:45,969] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:24:45,996] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:24:45,997] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:24:46,050] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:24:46,056] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:46,056] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:46,644] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:46,645] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:24:46,765] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:24:46,766] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:24:46,795] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:46,821] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:46,850] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:24:46,980] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:24:47,007] - [task:164] INFO -  {'uptime': '1123', 'memoryTotal': 15466930176, 'memoryFree': 13470187520, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:24:47,033] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:24:47,062] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:24:47,062] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:24:47,114] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:24:47,118] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:47,118] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:47,711] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:47,713] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:24:47,834] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:24:47,835] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:24:47,866] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:47,892] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:24:47,920] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:24:48,010] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:24:48,407] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:24:53,412] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:24:53,497] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:24:53,502] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:24:53,502] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:24:54,123] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:24:54,124] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:24:54,238] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:24:54,239] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:24:54,239] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:24:55,131] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:24:55,188] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:24:55,189] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:25:35,769] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:25:36,079] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:25:36,573] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:25:36,577] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #9 test_remove_bucket_and_query ==============
[2022-09-02 01:25:36,635] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:25:36,635] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:25:37,273] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:25:37,278] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:25:37,278] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:25:38,150] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:25:38,162] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:25:38,162] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:25:39,230] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:25:39,238] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:25:39,238] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:25:40,329] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:25:46,922] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:25:46,922] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 12.51928629050121, 'mem_free': 13465952256, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:25:46,922] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:25:46,923] - [basetestcase:467] INFO - Time to execute basesetup : 64.30945825576782
[2022-09-02 01:25:46,975] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:25:46,975] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:25:47,030] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:25:47,030] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:25:47,083] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:25:47,084] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:25:47,146] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:25:47,146] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:25:47,199] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:25:47,259] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:25:47,260] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:25:47,260] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:25:52,272] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:25:52,276] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:25:52,276] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:25:52,900] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:25:54,071] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:25:54,235] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:25:56,422] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:25:56,503] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:25:56,504] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:25:56,504] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:26:26,533] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:26:26,563] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:26:26,588] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:26:26,653] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 62.878472ms
[2022-09-02 01:26:26,653] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'a3e3ee9c-89ee-4e85-8275-fd5405815214', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '62.878472ms', 'executionTime': '62.817982ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:26:26,654] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:26:26,679] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:26:26,705] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:26:27,450] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 743.313801ms
[2022-09-02 01:26:27,451] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:26:27,517] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:26:27,563] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:26:27,571] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.169688ms
[2022-09-02 01:26:27,788] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:26:27,825] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:26:27,843] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:26:27,844] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:26:27,867] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 01:26:27,932] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:26:28,746] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:26:28,773] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:26:28,841] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 64.984325ms
[2022-09-02 01:26:28,842] - [base_gsi:282] INFO - BUILD INDEX on default(`employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr`) USING GSI
[2022-09-02 01:26:29,873] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(`employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr`) USING GSI
[2022-09-02 01:26:29,900] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28%60employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr%60%29+USING+GSI
[2022-09-02 01:26:29,920] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 18.450279ms
[2022-09-02 01:26:29,959] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr'
[2022-09-02 01:26:29,990] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr%27
[2022-09-02 01:26:29,998] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.801246ms
[2022-09-02 01:26:31,029] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr'
[2022-09-02 01:26:31,055] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr%27
[2022-09-02 01:26:31,065] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.771902ms
[2022-09-02 01:26:32,094] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr'
[2022-09-02 01:26:32,121] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr%27
[2022-09-02 01:26:32,129] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.800176ms
[2022-09-02 01:26:33,160] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:26:33,187] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 01:26:33,192] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 3.776797ms
[2022-09-02 01:26:33,193] - [base_gsi:504] INFO - {'requestID': '77a82c89-1f5b-46cd-a1b1-05e5556e4f46', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr', 'index_id': '354c9d9f95bdbb87', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '3.776797ms', 'executionTime': '3.717964ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
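Editor's note: the EXPLAIN result above is what the test inspects to confirm the statement is served by the secondary index (the IndexScan3 operator naming employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr) rather than a primary scan. A small self-contained sketch of that check follows, walking the returned plan JSON for an IndexScan operator with the expected index name; the query-service URL is the same assumption as in the earlier sketch.

    import requests

    QUERY = "http://127.0.0.1:8093/query/service"  # assumed query-service endpoint
    AUTH = ("Administrator", "asdasd")

    def uses_index(plan, index_name):
        # Recursively search the plan tree for an IndexScan* operator on index_name.
        if isinstance(plan, dict):
            if str(plan.get("#operator", "")).startswith("IndexScan") \
                    and plan.get("index") == index_name:
                return True
            return any(uses_index(v, index_name) for v in plan.values())
        if isinstance(plan, list):
            return any(uses_index(v, index_name) for v in plan)
        return False

    resp = requests.post(QUERY, auth=AUTH, data={
        "statement": "EXPLAIN SELECT * FROM default "
                     "WHERE join_yr > 2010 AND join_yr < 2014 ORDER BY _id"
    }).json()
    assert uses_index(resp["results"][0]["plan"],
                      "employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr")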
[2022-09-02 01:26:33,193] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:26:33,194] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:26:33,194] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:26:33,194] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:26:33,195] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:26:33,196] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:26:33,196] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:26:33,196] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:26:33,258] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:26:33,296] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:26:33,322] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 01:26:33,492] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 156.765101ms
[2022-09-02 01:26:33,493] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:26:33,493] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:26:35,720] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:26:35,747] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr'
[2022-09-02 01:26:35,779] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeafc5cbb94cf947c6a2ecc4f3a5a6b347join_yr%27
[2022-09-02 01:26:35,792] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 9.919995ms
[2022-09-02 01:26:35,792] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '259973d6-15ee-4e9f-820a-eb5161544d14', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '9.919995ms', 'executionTime': '9.824564ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:26:35,903] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:26:35,906] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:35,906] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:36,610] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:36,667] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:26:36,667] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:26:36,776] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:26:36,838] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:26:36,841] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:36,841] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:37,538] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:37,594] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:26:37,594] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:26:37,708] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:26:37,767] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:26:37,767] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:26:37,767] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:26:37,795] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:26:37,825] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:26:37,830] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 3.340073ms
[2022-09-02 01:26:37,830] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'f3752b30-604f-4d5d-b5c9-e1a79b4119b0', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '3.340073ms', 'executionTime': '3.269028ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:26:37,884] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:26:37,884] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 9.270186062085235, 'mem_free': 13349990400, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:26:37,884] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:26:37,888] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:37,888] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:38,565] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:38,570] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:38,571] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:39,263] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:39,270] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:39,270] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:40,430] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:40,443] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:40,443] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:41,620] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:48,840] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #9 test_remove_bucket_and_query ==============
[2022-09-02 01:26:48,980] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:26:49,035] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:26:49,090] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:26:49,144] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:26:49,144] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:26:49,224] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:26:49,225] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:26:49,255] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:26:49,388] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:26:49,388] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:26:49,416] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:26:49,443] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:26:49,443] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:26:49,470] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:26:49,497] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:26:49,497] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:26:49,525] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:26:49,551] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:26:49,552] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:26:49,580] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:26:49,580] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #9 test_remove_bucket_and_query ==============
[2022-09-02 01:26:49,580] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:26:49,581] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_9
ok

----------------------------------------------------------------------
Ran 1 test in 127.025s

OK
test_change_bucket_properties (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_10

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_change_bucket_properties,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 10, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_10'}
[2022-09-02 01:26:49,672] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:49,673] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:50,378] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:50,411] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:26:50,497] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:26:50,498] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #10 test_change_bucket_properties==============
[2022-09-02 01:26:50,498] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:26:50,700] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:26:50,732] - [task:164] INFO -  {'uptime': '1250', 'memoryTotal': 15466930176, 'memoryFree': 13341188096, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:26:50,761] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:26:50,762] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:26:50,762] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:26:50,802] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:26:50,838] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:26:50,838] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:26:50,868] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:26:50,869] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:26:50,869] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:26:50,870] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:26:50,919] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:26:50,922] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:50,922] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:51,556] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:51,557] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:26:51,678] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:26:51,679] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:26:51,709] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:51,735] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
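For reference, a minimal sketch of the /diag/eval interaction performed above, assuming Python's requests library and the Administrator/asdasd credentials shown in this log (the harness itself drives this through curl over SSH):

    import requests

    BASE = "http://127.0.0.1:9000"            # node n_0 from this run
    AUTH = ("Administrator", "asdasd")

    # Allow /diag/eval from non-local hosts (what the curl -d 'ns_config:set(...)' call does).
    requests.post(BASE + "/diag/eval", auth=AUTH,
                  data="ns_config:set(allow_nonlocal_eval, true).")

    # Read the cluster compat version the same way the rest client logs it ([7,2] here).
    r = requests.post(BASE + "/diag/eval", auth=AUTH,
                      data="cluster_compat_mode:get_compat_version().")
    print(r.text)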
[2022-09-02 01:26:51,765] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:26:51,888] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:26:51,914] - [task:164] INFO -  {'uptime': '1253', 'memoryTotal': 15466930176, 'memoryFree': 13474897920, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:26:51,940] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:26:51,967] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:26:51,967] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:26:52,018] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:26:52,024] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:52,024] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:52,715] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:52,717] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:26:52,852] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:26:52,853] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:26:52,884] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:52,910] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:52,940] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:26:53,062] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:26:53,092] - [task:164] INFO -  {'uptime': '1253', 'memoryTotal': 15466930176, 'memoryFree': 13474762752, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:26:53,119] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:26:53,149] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:26:53,149] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:26:53,202] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:26:53,205] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:53,205] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:53,885] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:53,886] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:26:54,021] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:26:54,022] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:26:54,055] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:54,083] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:54,115] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:26:54,238] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:26:54,268] - [task:164] INFO -  {'uptime': '1248', 'memoryTotal': 15466930176, 'memoryFree': 13473857536, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:26:54,296] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:26:54,329] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:26:54,330] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:26:54,385] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:26:54,388] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:26:54,388] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:26:55,088] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:26:55,089] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:26:55,220] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:26:55,221] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:26:55,252] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:55,279] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:26:55,309] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:26:55,405] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:26:55,811] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:27:00,815] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:27:00,908] - [basetestcase:262] INFO - done initializing cluster
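A hedged sketch of the built-in user setup logged above; the /settings/rbac/users/local/... endpoint shape matches the DELETE call that appears later in this log, while the password and role payload here are illustrative rather than copied from the harness:

    import requests

    # Create the local 'cbadminbucket' user and grant it the 'admin' role (payload illustrative).
    requests.put("http://127.0.0.1:9000/settings/rbac/users/local/cbadminbucket",
                 auth=("Administrator", "asdasd"),
                 data={"password": "password", "roles": "admin"})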
[2022-09-02 01:27:00,913] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:27:00,913] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:27:01,611] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:27:01,612] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:27:01,742] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:27:01,743] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:27:01,744] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:27:02,527] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:27:02,586] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:27:02,587] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:27:31,312] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:27:31,617] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:27:31,878] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
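A minimal sketch of the bucket-creation request logged above, with the parameters copied from the log line; the requests usage is illustrative and not the on_prem_rest_client helper itself:

    import requests

    params = {
        "name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
        "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
        "flushEnabled": 1, "evictionPolicy": "valueOnly",
        "compressionMode": "passive", "storageBackend": "couchstore",
    }
    r = requests.post("http://127.0.0.1:9000/pools/default/buckets",
                      auth=("Administrator", "asdasd"), data=params)
    r.raise_for_status()  # the harness then polls until memcached accepts set ops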
[2022-09-02 01:27:31,881] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #10 test_change_bucket_properties ==============
[2022-09-02 01:27:31,935] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:27:31,935] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:27:32,566] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:27:32,572] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:27:32,572] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:27:33,506] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:27:33,519] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:27:33,520] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:27:34,677] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:27:34,688] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:27:34,688] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:27:35,792] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:27:42,505] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:27:42,506] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 26.35623096784374, 'mem_free': 13468893184, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:27:42,506] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:27:42,507] - [basetestcase:467] INFO - Time to execute basesetup : 52.836928367614746
[2022-09-02 01:27:42,559] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:27:42,559] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:27:42,617] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:27:42,618] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:27:42,671] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:27:42,672] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:27:42,726] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:27:42,726] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:27:42,782] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:27:42,845] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:27:42,845] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:27:42,845] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:27:47,857] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:27:47,860] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:27:47,860] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:27:48,524] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:27:49,700] - [basetestcase:2772] INFO - create 2016.0 documents in bucket default...
[2022-09-02 01:27:49,886] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:27:52,975] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:27:53,057] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:27:53,057] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:27:53,057] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:28:23,084] - [tuq_helper:755] INFO - Check if index exists in default on server 127.0.0.1
[2022-09-02 01:28:23,114] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:28:23,144] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:28:23,211] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.408161ms
[2022-09-02 01:28:23,212] - [tuq_helper:919] ERROR - Failed to get index list.  List output: {'requestID': '8a13789f-524c-4c2d-9e0d-5d94638596a0', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '65.408161ms', 'executionTime': '65.349223ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:28:23,212] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:28:23,239] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:28:23,266] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:28:23,981] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 713.252469ms
[2022-09-02 01:28:23,982] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:28:24,073] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:28:24,110] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:28:24,118] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.593884ms
[2022-09-02 01:28:24,341] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:28:24,398] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:28:24,411] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:28:24,411] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:28:24,421] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
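The two indexer settings above go straight to the indexer admin port; a small sketch, assuming requests and reusing the endpoint and payloads exactly as logged:

    import requests

    INDEXER = "http://127.0.0.1:9102/settings"
    AUTH = ("Administrator", "asdasd")

    # Don't have queryport clients wait for scheduled index creation.
    requests.post(INDEXER, auth=AUTH, json={"queryport.client.waitForScheduledIndex": False})
    # Allow scheduled index creation during rebalance.
    requests.post(INDEXER, auth=AUTH, json={"indexer.allowScheduleCreateRebal": True})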
[2022-09-02 01:28:24,481] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:28:25,293] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee08f1929084e144f5b223286e29d31f19join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:28:25,319] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee08f1929084e144f5b223286e29d31f19join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:28:25,365] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 44.700359ms
[2022-09-02 01:28:25,366] - [base_gsi:282] INFO - BUILD INDEX on default(`employee08f1929084e144f5b223286e29d31f19join_yr`) USING GSI
[2022-09-02 01:28:26,395] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(`employee08f1929084e144f5b223286e29d31f19join_yr`) USING GSI
[2022-09-02 01:28:26,422] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28%60employee08f1929084e144f5b223286e29d31f19join_yr%60%29+USING+GSI
[2022-09-02 01:28:26,447] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 18.913348ms
[2022-09-02 01:28:26,490] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee08f1929084e144f5b223286e29d31f19join_yr'
[2022-09-02 01:28:26,519] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee08f1929084e144f5b223286e29d31f19join_yr%27
[2022-09-02 01:28:26,529] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.990064ms
[2022-09-02 01:28:27,559] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee08f1929084e144f5b223286e29d31f19join_yr'
[2022-09-02 01:28:27,584] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee08f1929084e144f5b223286e29d31f19join_yr%27
[2022-09-02 01:28:27,594] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.223569ms
[2022-09-02 01:28:28,624] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee08f1929084e144f5b223286e29d31f19join_yr'
[2022-09-02 01:28:28,649] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee08f1929084e144f5b223286e29d31f19join_yr%27
[2022-09-02 01:28:28,656] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.898502ms
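The deferred-index flow driven above (CREATE ... WITH defer_build, BUILD INDEX, then polling system:indexes) can be reproduced against the query service; a sketch with the statements taken from the log, assuming the standard 8093 query port since this cluster_run's remapped n1ql port is not shown here:

    import requests

    QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption: standard query port
    AUTH = ("Administrator", "asdasd")
    IDX = "employee08f1929084e144f5b223286e29d31f19join_yr"

    def run(stmt):
        return requests.post(QUERY_URL, auth=AUTH, data={"statement": stmt}).json()

    run("CREATE INDEX `" + IDX + "` ON default(join_yr) "
        "WHERE join_yr > 2010 AND join_yr < 2014 "
        'USING GSI WITH {"defer_build": true}')
    run("BUILD INDEX ON default(`" + IDX + "`) USING GSI")
    # Poll system:indexes until the index reports state 'online' before running scan queries.
    run("SELECT * FROM system:indexes WHERE name = '" + IDX + "'")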
[2022-09-02 01:28:29,687] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:28:29,712] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 01:28:29,716] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.895572ms
[2022-09-02 01:28:29,716] - [base_gsi:504] INFO - {'requestID': 'f2f9d9d2-7ddf-452d-b741-80d894b6336c', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee08f1929084e144f5b223286e29d31f19join_yr', 'index_id': 'cb4c9c74f67250e9', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '2.895572ms', 'executionTime': '2.82372ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
[2022-09-02 01:28:29,717] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:28:29,717] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:28:29,718] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:28:29,718] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:28:29,718] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:28:29,718] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:28:29,719] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:28:29,719] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:28:29,786] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:28:29,824] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:28:29,850] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 01:28:30,042] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 178.454272ms
[2022-09-02 01:28:30,043] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:28:30,043] - [tuq_helper:411] INFO -  Analyzing Expected Result
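The EXPLAIN output above is only useful if the plan actually scans the new secondary index; a small, hypothetical helper (not the base_gsi code) showing how that check can be made against the plan JSON printed above:

    def plan_uses_index(explain_result, index_name):
        """Walk the EXPLAIN plan and report whether an IndexScan3 uses the given index."""
        def walk(node):
            if isinstance(node, dict):
                if node.get("#operator") == "IndexScan3" and node.get("index") == index_name:
                    return True
                return any(walk(v) for v in node.values())
            if isinstance(node, list):
                return any(walk(v) for v in node)
            return False
        return walk(explain_result["results"][0]["plan"])

    # e.g. plan_uses_index(explain_json, "employee08f1929084e144f5b223286e29d31f19join_yr") -> True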
[2022-09-02 01:28:31,247] - [on_prem_rest_client:3084] INFO - http://127.0.0.1:9000/pools/default/buckets/default with param: 
[2022-09-02 01:28:31,307] - [on_prem_rest_client:3092] INFO - bucket default updated
[2022-09-02 01:28:31,337] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:28:31,364] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 01:28:31,367] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.891199ms
[2022-09-02 01:28:31,368] - [base_gsi:504] INFO - {'requestID': 'b83410cb-63b3-4348-887a-a05a0d6b5269', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee08f1929084e144f5b223286e29d31f19join_yr', 'index_id': 'cb4c9c74f67250e9', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '1.891199ms', 'executionTime': '1.816624ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
[2022-09-02 01:28:31,368] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:28:31,368] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:28:31,368] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:28:31,369] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:28:31,369] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:28:31,369] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:28:31,369] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:28:31,369] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:28:31,423] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:28:31,463] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:28:31,488] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 01:28:31,586] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 89.145235ms
[2022-09-02 01:28:31,587] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:28:31,587] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 01:28:32,795] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee08f1929084e144f5b223286e29d31f19join_yr ON default USING GSI
[2022-09-02 01:28:32,822] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee08f1929084e144f5b223286e29d31f19join_yr+ON+default+USING+GSI
[2022-09-02 01:28:32,868] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 44.712262ms
[2022-09-02 01:28:32,903] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee08f1929084e144f5b223286e29d31f19join_yr'
[2022-09-02 01:28:32,931] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee08f1929084e144f5b223286e29d31f19join_yr%27
[2022-09-02 01:28:32,939] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.029203ms
[2022-09-02 01:28:32,939] - [tuq_helper:919] ERROR - Failed to get index list.  List output: {'requestID': 'afa5596e-8b55-4a57-bd36-6f9c45542ef7', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '6.029203ms', 'executionTime': '5.958047ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:28:33,046] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:28:33,049] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:33,049] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:33,713] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:33,769] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:28:33,770] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:28:33,886] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:28:33,951] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:28:33,956] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:33,956] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:34,665] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:34,722] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:28:34,722] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:28:34,839] - [remote_util:3399] INFO - command executed successfully with Administrator
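The post-test panic scan above runs the same zgrep on each relevant service log; a sketch of doing it directly, assuming the log directory value returned by /diag/eval in the lines above:

    import subprocess

    log_dir = "/opt/build/ns_server/logs/n_0"   # reported by /diag/eval above
    for pattern in ("indexer.log*", "projector.log*"):
        out = subprocess.run('zgrep "panic" "%s"/%s | wc -l' % (log_dir, pattern),
                             shell=True, capture_output=True, text=True)
        print(pattern, out.stdout.strip())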
[2022-09-02 01:28:34,893] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:28:34,893] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:28:34,893] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:28:34,920] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:28:34,945] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:28:34,952] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.742467ms
[2022-09-02 01:28:34,978] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:28:35,003] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:28:35,055] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 49.40942ms
[2022-09-02 01:28:35,116] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:28:35,116] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 19.22338726317741, 'mem_free': 13294428160, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:28:35,116] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:28:35,120] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:35,120] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:35,841] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:35,845] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:35,846] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:36,562] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:36,567] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:36,567] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:37,643] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:37,652] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:37,652] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:38,895] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:46,495] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #10 test_change_bucket_properties ==============
[2022-09-02 01:28:46,646] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:28:47,040] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:28:47,069] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:28:47,069] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:28:47,123] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:28:47,180] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:28:47,235] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:28:47,236] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:28:47,316] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:28:47,317] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:28:47,345] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:28:47,482] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:28:47,483] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:28:47,510] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:28:47,537] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:28:47,537] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:28:47,570] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:28:47,598] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:28:47,598] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:28:47,627] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:28:47,654] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:28:47,655] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:28:47,682] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:28:47,683] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #10 test_change_bucket_properties ==============
[2022-09-02 01:28:47,683] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:28:47,684] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_10
ok

----------------------------------------------------------------------
Ran 1 test in 118.068s

OK
test_delete_create_bucket_and_query (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_11

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 11, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_11'}
[2022-09-02 01:28:47,782] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:47,782] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:48,485] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:48,518] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:28:48,601] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:28:48,602] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #11 test_delete_create_bucket_and_query==============
[2022-09-02 01:28:48,602] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:28:48,807] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:28:48,834] - [task:164] INFO -  {'uptime': '1372', 'memoryTotal': 15466930176, 'memoryFree': 13459361792, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:28:48,860] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:28:48,860] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:28:48,860] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:28:48,888] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:28:48,931] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:28:48,932] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:28:48,960] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:28:48,961] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services; this is not considered a failure for the test case
[2022-09-02 01:28:48,961] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:28:48,961] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:28:49,010] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:28:49,013] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:49,013] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:49,681] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:49,682] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:28:49,813] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:28:49,814] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:28:49,847] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:49,873] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:49,903] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:28:50,024] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:28:50,051] - [task:164] INFO -  {'uptime': '1368', 'memoryTotal': 15466930176, 'memoryFree': 13460496384, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:28:50,077] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:28:50,104] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:28:50,104] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:28:50,156] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:28:50,159] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:50,159] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:50,841] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:50,843] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:28:50,969] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:28:50,970] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:28:50,998] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:51,024] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:51,052] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:28:51,166] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:28:51,193] - [task:164] INFO -  {'uptime': '1369', 'memoryTotal': 15466930176, 'memoryFree': 13459910656, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:28:51,219] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:28:51,246] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:28:51,246] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:28:51,299] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:28:51,304] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:51,305] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:51,965] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:51,966] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:28:52,105] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:28:52,106] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:28:52,136] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:52,163] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:52,191] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:28:52,309] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:28:52,337] - [task:164] INFO -  {'uptime': '1369', 'memoryTotal': 15466930176, 'memoryFree': 13460656128, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:28:52,363] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:28:52,390] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:28:52,390] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:28:52,443] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:28:52,446] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:52,446] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:53,130] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:53,131] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:28:53,256] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:28:53,257] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:28:53,287] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:53,312] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:28:53,341] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:28:53,429] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:28:53,835] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:28:58,838] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:28:58,922] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:28:58,926] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:28:58,927] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:28:59,594] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:28:59,595] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:28:59,726] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:28:59,728] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:28:59,728] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:29:00,595] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:29:00,650] - [on_prem_rest_client:3047] INFO - 0.05 seconds to create bucket default
[2022-09-02 01:29:00,650] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:29:31,101] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:29:31,429] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:29:31,694] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:29:31,697] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #11 test_delete_create_bucket_and_query ==============
[2022-09-02 01:29:31,753] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:29:31,753] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:29:32,440] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:29:32,445] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:29:32,445] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:29:33,461] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:29:33,469] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:29:33,470] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:29:34,645] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:29:34,654] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:29:34,654] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:29:35,896] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:29:42,818] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:29:42,818] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 27.40519449579437, 'mem_free': 13453987840, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:29:42,818] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:29:42,819] - [basetestcase:467] INFO - Time to execute basesetup : 55.04244685173035
[2022-09-02 01:29:42,871] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:29:42,871] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:29:42,926] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:29:42,926] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:29:42,981] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:29:42,981] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:29:43,041] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:29:43,041] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:29:43,096] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:29:43,159] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:29:43,159] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:29:43,160] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:29:48,168] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:29:48,172] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:29:48,172] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:29:48,836] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:29:50,028] - [basetestcase:2772] INFO - create 2016.0 documents in bucket default...
[2022-09-02 01:29:50,202] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:29:53,295] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:29:53,372] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:29:53,373] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:29:53,373] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:30:23,398] - [tuq_helper:755] INFO - Check if index exists in default on server 127.0.0.1
[2022-09-02 01:30:23,428] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:30:23,453] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:30:23,518] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 62.791141ms
[2022-09-02 01:30:23,518] - [tuq_helper:919] ERROR - Failed to get index list.  List output: {'requestID': '9aa60e37-26cf-4d73-a437-26b06a1a39e0', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '62.791141ms', 'executionTime': '62.718822ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:30:23,519] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:30:23,544] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:30:23,569] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:30:24,275] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 704.373698ms
[2022-09-02 01:30:24,275] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:30:24,324] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:30:24,391] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:30:24,399] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.373686ms
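A minimal sketch of the primary-index bootstrap being logged here, assuming the query service's REST endpoint (/query/service on the default port 8093; ports differ under cluster_run) and the credentials used in this run. The statements are copied from the log; the run() helper is reused by the later sketches.

import requests

QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption: default query service port
AUTH = ("Administrator", "asdasd")                   # credentials used throughout this run

def run(statement, **params):
    """POST a N1QL statement to the query service and return the decoded JSON body."""
    payload = {"statement": statement}
    payload.update(params)
    resp = requests.post(QUERY_URL, data=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()

# Check whether '#primary' already exists, create it if not, then re-check.
if not run("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]:
    run("CREATE PRIMARY INDEX ON default")
run("SELECT * FROM system:indexes WHERE name = '#primary'")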
[2022-09-02 01:30:24,597] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:30:24,634] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:30:24,666] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:30:24,669] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:30:24,687] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
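A hedged sketch of the two indexer settings changes above, using the URL and JSON bodies exactly as logged (indexer admin port 9102 on n_0).

import requests

SETTINGS_URL = "http://127.0.0.1:9102/settings"
AUTH = ("Administrator", "asdasd")

for body in ({"queryport.client.waitForScheduledIndex": False},
             {"indexer.allowScheduleCreateRebal": True}):
    # The test client POSTs each setting as a small JSON document.
    requests.post(SETTINGS_URL, json=body, auth=AUTH).raise_for_status()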
[2022-09-02 01:30:24,756] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:30:25,556] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 01:30:25,581] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 01:30:25,655] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 72.471847ms
[2022-09-02 01:30:25,656] - [base_gsi:282] INFO - BUILD INDEX on default(`employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr`) USING GSI
[2022-09-02 01:30:26,686] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(`employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr`) USING GSI
[2022-09-02 01:30:26,710] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28%60employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%60%29+USING+GSI
[2022-09-02 01:30:26,737] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 24.144221ms
[2022-09-02 01:30:26,779] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr'
[2022-09-02 01:30:26,808] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%27
[2022-09-02 01:30:26,817] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.518847ms
[2022-09-02 01:30:27,846] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr'
[2022-09-02 01:30:27,871] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%27
[2022-09-02 01:30:27,879] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.620979ms
[2022-09-02 01:30:28,908] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr'
[2022-09-02 01:30:28,933] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%27
[2022-09-02 01:30:28,942] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.743911ms
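A sketch of the deferred-build pattern exercised above, reusing the run() helper from the earlier sketch: create the index with defer_build, issue BUILD INDEX, then poll system:indexes until it reports "online". The polling interval and timeout are illustrative assumptions.

import time

IDX = "employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr"

run('CREATE INDEX `%s` ON default(join_yr) '
    'WHERE join_yr > 2010 AND join_yr < 2014 '
    'USING GSI WITH {"defer_build": true}' % IDX)
run('BUILD INDEX ON default(`%s`) USING GSI' % IDX)

for _ in range(30):                                   # poll for up to ~30s
    rows = run("SELECT state FROM system:indexes WHERE name = '%s'" % IDX)["results"]
    if rows and rows[0].get("state") == "online":
        break
    time.sleep(1)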
[2022-09-02 01:30:29,972] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:30:29,997] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 01:30:30,001] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.562821ms
[2022-09-02 01:30:30,001] - [base_gsi:504] INFO - {'requestID': 'cf1df4b7-8178-462b-8e8f-39732c5e1468', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr', 'index_id': '3cbac969b03d9c0a', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '2.562821ms', 'executionTime': '2.499748ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
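A sketch of how the EXPLAIN output above can be checked programmatically: walk the nested plan (operators carry either '~children' or a single '~child') and assert that the chosen scan is an IndexScan3 on the expected index. run() and IDX come from the earlier sketches.

def find_operators(op, name, found=None):
    # Collect every operator of the given '#operator' type from an EXPLAIN plan.
    if found is None:
        found = []
    if op.get("#operator") == name:
        found.append(op)
    for child in op.get("~children", []):
        find_operators(child, name, found)
    if "~child" in op:
        find_operators(op["~child"], name, found)
    return found

explain = run("EXPLAIN SELECT * FROM default "
              "WHERE join_yr > 2010 AND join_yr < 2014 ORDER BY _id")
plan = explain["results"][0]["plan"]
scans = find_operators(plan, "IndexScan3")
assert scans and scans[0]["index"] == IDX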
[2022-09-02 01:30:30,002] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 01:30:30,002] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:30:30,002] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 01:30:30,002] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 01:30:30,003] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:30:30,003] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:30:30,003] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:30:30,003] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 01:30:30,062] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 01:30:30,101] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 01:30:30,125] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 01:30:30,325] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 186.389955ms
[2022-09-02 01:30:30,325] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 01:30:30,326] - [tuq_helper:411] INFO -  Analyzing Expected Result
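Note that the verification query above is sent with scan_consistency=request_plus, so the index scan waits for all mutations made before the request; a minimal sketch with the run() helper:

rows = run("SELECT * FROM default WHERE join_yr > 2010 AND join_yr < 2014 ORDER BY _id",
           scan_consistency="request_plus")["results"]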
[2022-09-02 01:30:31,908] - [basetestcase:847] INFO - sleep for 2 secs.  ...
[2022-09-02 01:30:35,028] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:30:35,083] - [on_prem_rest_client:3047] INFO - 0.05 seconds to create bucket default
[2022-09-02 01:30:35,083] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:31:30,976] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:31:31,280] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:31:31,852] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
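A hedged sketch of the bucket (re)creation step above, POSTing the same parameters the log shows to the ns_server buckets endpoint.

import requests

AUTH = ("Administrator", "asdasd")
params = {
    "name": "default",
    "ramQuotaMB": 7650,
    "replicaNumber": 1,
    "bucketType": "membase",
    "replicaIndex": 1,
    "threadsNumber": 3,
    "flushEnabled": 1,
    "evictionPolicy": "valueOnly",
    "compressionMode": "passive",
    "storageBackend": "couchstore",
}
requests.post("http://127.0.0.1:9000/pools/default/buckets",
              data=params, auth=AUTH).raise_for_status()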
[2022-09-02 01:31:31,856] - [basetestcase:847] INFO - sleep for 2 secs.  ...
[2022-09-02 01:31:33,887] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:31:33,887] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:33,914] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:31:33,944] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:31:33,944] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:33,971] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:31:33,999] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:31:34,000] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:34,027] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:31:34,054] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:31:34,055] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:34,083] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:31:34,135] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:31:34,165] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr'
[2022-09-02 01:31:34,191] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%27
[2022-09-02 01:31:34,232] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 39.711603ms
[2022-09-02 01:31:34,232] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '6153a936-5d76-4ee2-a19b-e859d6581acd', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '39.711603ms', 'executionTime': '39.640804ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:31:34,258] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr'
[2022-09-02 01:31:34,284] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee8b8a7e62ec8c4ae19dac4e825ef38719join_yr%27
[2022-09-02 01:31:34,286] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.192975ms
[2022-09-02 01:31:34,287] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '83d976f6-d522-4716-99fa-b0ba569d01cf', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '1.192975ms', 'executionTime': '1.130273ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:31:34,341] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:31:34,473] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:31:34,475] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:34,475] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:35,179] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:35,236] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:31:35,236] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:31:35,351] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:31:35,406] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:31:35,409] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:35,409] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:36,182] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:36,248] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:31:36,248] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:31:36,365] - [remote_util:3399] INFO - command executed successfully with Administrator
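A sketch of the post-test health check above: count Go panics in the indexer and projector logs. The log directory is the one returned by the /diag/eval call ("/opt/build/ns_server/logs/n_0"); here the command runs locally via subprocess rather than over SSH as the test does.

import subprocess

LOG_DIR = "/opt/build/ns_server/logs/n_0"

for component in ("indexer", "projector"):
    cmd = 'zgrep "panic" "%s"/%s.log* | wc -l' % (LOG_DIR, component)
    count = int(subprocess.check_output(cmd, shell=True).strip())
    assert count == 0, "%s log reported %d panic(s)" % (component, count)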
[2022-09-02 01:31:36,425] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:31:36,425] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:31:36,426] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:31:36,454] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:31:36,481] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:31:36,488] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.225867ms
[2022-09-02 01:31:36,488] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'f8c370aa-07ba-45d4-a3b0-a224fafb28e1', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.225867ms', 'executionTime': '5.153195ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:31:36,489] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:31:36,517] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:31:36,545] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:31:36,548] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.055881ms
[2022-09-02 01:31:36,548] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '01546a95-b810-42e8-95e9-4d775ee6562d', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '1.055881ms', 'executionTime': '988.121µs', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:31:36,605] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:31:36,605] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 7.49784058566857, 'mem_free': 13428662272, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:31:36,606] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:31:36,609] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:36,609] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:37,359] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:37,364] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:37,364] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:38,203] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:38,210] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:38,210] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:39,410] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:39,422] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:39,422] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:40,638] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:47,913] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #11 test_delete_create_bucket_and_query ==============
[2022-09-02 01:31:48,053] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:31:48,363] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:31:48,391] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:31:48,391] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:31:48,449] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:31:48,503] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:31:48,561] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:31:48,562] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:31:48,644] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:31:48,645] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:31:48,671] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:48,802] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:31:48,802] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:48,829] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:31:48,855] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:31:48,855] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:48,882] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:31:48,908] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:31:48,908] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:48,934] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:31:48,961] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:31:48,961] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:48,987] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:31:48,988] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #11 test_delete_create_bucket_and_query ==============
[2022-09-02 01:31:48,988] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:31:48,989] - [basetestcase:778] INFO - closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 181.265s

OK
suite_tearDown (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 3 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_01-05-52/test_11

*** Tests executed count: 11

Run after suite setup for gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query
[2022-09-02 01:31:49,048] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:49,048] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:49,794] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:49,827] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:31:49,910] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 01:31:49,910] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #11 suite_tearDown==============
[2022-09-02 01:31:49,911] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 01:31:50,074] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 01:31:50,102] - [task:164] INFO -  {'uptime': '1553', 'memoryTotal': 15466930176, 'memoryFree': 13428957184, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:31:50,129] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 01:31:50,130] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 01:31:50,130] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 01:31:50,169] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 01:31:50,203] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 01:31:50,203] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 01:31:50,232] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 01:31:50,233] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 01:31:50,233] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 01:31:50,234] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 01:31:50,285] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:31:50,287] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:50,288] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:50,977] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:50,978] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:31:51,112] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:31:51,113] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:31:51,143] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:51,170] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:51,199] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
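A hedged sketch of the per-node initialisation sequence logged above for n_0 (127.0.0.1:9000): set the index and cluster memory quotas, declare the services, and set the web credentials. URLs and parameters mirror the log; the setupServices call returns 400 on an already-provisioned node, which the test treats as benign.

import requests

BASE = "http://127.0.0.1:9000"
AUTH = ("Administrator", "asdasd")

requests.post(BASE + "/pools/default", data={"indexMemoryQuota": 256}, auth=AUTH)
requests.post(BASE + "/pools/default", data={"memoryQuota": 7650}, auth=AUTH)
# May return 400 ("cannot change node services after cluster is provisioned").
requests.post(BASE + "/node/controller/setupServices",
              data={"hostname": "127.0.0.1:9000", "user": "Administrator",
                    "password": "asdasd", "services": "kv,index,n1ql"}, auth=AUTH)
requests.post(BASE + "/settings/web",
              data={"port": 9000, "username": "Administrator",
                    "password": "asdasd"}, auth=AUTH)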
[2022-09-02 01:31:51,316] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 01:31:51,344] - [task:164] INFO -  {'uptime': '1554', 'memoryTotal': 15466930176, 'memoryFree': 13428854784, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:31:51,370] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:31:51,399] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 01:31:51,400] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 01:31:51,453] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:31:51,456] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:51,456] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:52,148] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:52,149] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:31:52,290] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:31:52,291] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:31:52,321] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:52,348] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:52,379] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:31:52,504] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 01:31:52,534] - [task:164] INFO -  {'uptime': '1549', 'memoryTotal': 15466930176, 'memoryFree': 13428867072, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:31:52,564] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:31:52,594] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 01:31:52,594] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 01:31:52,647] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:31:52,650] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:52,650] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:53,349] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:53,351] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:31:53,488] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:31:53,489] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:31:53,521] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:53,547] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:53,576] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:31:53,693] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 01:31:53,720] - [task:164] INFO -  {'uptime': '1550', 'memoryTotal': 15466930176, 'memoryFree': 13428727808, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 01:31:53,746] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 01:31:53,774] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 01:31:53,775] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 01:31:53,826] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 01:31:53,829] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:31:53,829] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:31:54,503] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:31:54,505] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:31:54,637] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:31:54,638] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:31:54,668] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:54,694] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 01:31:54,723] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 01:31:54,812] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 01:31:55,210] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:32:00,216] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 01:32:00,306] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 01:32:00,311] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:32:00,312] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:32:01,073] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:32:01,074] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 01:32:01,212] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 01:32:01,213] - [remote_util:5249] INFO - b'ok'
[2022-09-02 01:32:01,214] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 01:32:01,954] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 01:32:02,014] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 01:32:02,014] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 01:32:31,264] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:32:31,606] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:32:32,038] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 01:32:32,043] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #11 suite_tearDown ==============
[2022-09-02 01:32:32,110] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:32:32,110] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:32:32,856] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:32:32,860] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:32:32,861] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:32:34,044] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:32:34,056] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:32:34,057] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:32:35,298] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:32:35,306] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:32:35,306] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:32:36,521] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:32:43,799] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:32:43,800] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 27.2007952892636, 'mem_free': 13425504256, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:32:43,800] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:32:43,800] - [basetestcase:467] INFO - Time to execute basesetup : 54.755321741104126
[2022-09-02 01:32:43,858] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:32:43,858] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:32:43,916] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:32:43,916] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:32:43,975] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:32:43,975] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:32:44,030] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 01:32:44,031] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 01:32:44,084] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:32:44,143] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 01:32:44,143] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 01:32:44,143] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 01:32:49,152] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 01:32:49,156] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:32:49,157] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:32:49,868] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:32:50,966] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 01:32:51,143] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 01:32:54,208] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 01:32:54,282] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:32:54,282] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 01:32:54,282] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 01:33:24,297] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 01:33:24,325] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:33:24,351] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:33:24,415] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 62.099274ms
[2022-09-02 01:33:24,415] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'd271a4b3-348c-48f0-87fc-29dafeb6a90f', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '62.099274ms', 'executionTime': '62.027395ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 01:33:24,415] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 01:33:24,441] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 01:33:24,465] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 01:33:25,134] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 666.593488ms
[2022-09-02 01:33:25,134] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 01:33:25,197] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:33:25,235] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:33:25,244] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.66738ms
[2022-09-02 01:33:25,468] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:33:25,507] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 01:33:25,527] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 01:33:25,528] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 01:33:25,546] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 01:33:25,608] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:33:25,735] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:33:25,738] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:33:25,739] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:33:26,507] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:33:26,562] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:33:26,562] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 01:33:26,683] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:33:26,739] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:33:26,742] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:33:26,743] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:33:27,492] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:33:27,546] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 01:33:27,546] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 01:33:27,666] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 01:33:27,722] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 01:33:27,722] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 01:33:27,722] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:33:27,747] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 01:33:27,773] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 01:33:27,782] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.65368ms
[2022-09-02 01:33:27,810] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 01:33:27,837] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 01:33:27,888] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 42.143548ms
[2022-09-02 01:33:27,968] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 01:33:27,968] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 18.12089890862895, 'mem_free': 13226754048, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 01:33:27,968] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 01:33:27,973] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:33:27,973] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:33:28,721] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:33:28,726] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:33:28,726] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:33:30,065] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:33:30,078] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:33:30,078] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:33:31,423] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:33:31,435] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 01:33:31,435] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 01:33:32,760] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 01:33:40,507] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #11 suite_tearDown ==============
[2022-09-02 01:33:40,659] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 01:33:41,038] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 01:33:41,068] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 01:33:41,068] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 01:33:41,129] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:33:41,184] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:33:41,238] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 01:33:41,239] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 01:33:41,322] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 01:33:41,323] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 01:33:41,349] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:33:41,477] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 01:33:41,478] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:33:41,505] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 01:33:41,534] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 01:33:41,534] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:33:41,561] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 01:33:41,586] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 01:33:41,586] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:33:41,613] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 01:33:41,638] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 01:33:41,638] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 01:33:41,665] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 01:33:41,665] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #11 suite_tearDown ==============
[2022-09-02 01:33:41,666] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 01:33:41,666] - [basetestcase:778] INFO - closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 112.699s

OK
Cluster instance shutdown with force
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index', ' pass')
('gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_change_bucket_properties', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query', ' pass')
*** TestRunner ***
scripts/start_cluster_and_run_tests.sh: line 91:  6057 Terminated              COUCHBASE_NUM_VBUCKETS=64 python3 ./cluster_run --nodes=$servers_count &> $wd/cluster_run.log  (wd: /opt/build/ns_server)

Testing Failed: Required test failed

FAIL	github.com/couchbase/indexing/secondary/tests/functionaltests	80.171s
FAIL	github.com/couchbase/indexing/secondary/tests/largedatatests	0.207s
panic: Error while initialising cluster: AddNodeAndRebalance: Error during rebalance, err: Rebalance failed
panic: Error in ChangeIndexerSettings: Post "http://:2/internal/settings": dial tcp :2: connect: connection refused
Version: versions-01.09.2022-22.27.cfg
Build Log: make-01.09.2022-22.27.log
Server Log: logs-01.09.2022-22.27.tar.gz

Finished