Building

Started building at 2022/09/02 06:21:51
Using pegged server, 1948 build
Calculating base
Updating mirror
Basing run on 7.2.0-1948 3df08568d
Updating tree for run 02.09.2022-06.21
query is at c3f3821, changes since last good build: 
 c3f3821ea MB-53533 Ensure initialisation is complete ahead of PrepareTopologyChange processing.
 e74608abe MB-53541 Independent exchange notifier for merge actions.
 6a1b677ce MB-53565 Do not consider primary index for index scan inside correlated subquery
 89f2c4277 MB-53565 Proper formalization for ExpressionTerm and SubqueryTerm under ANSI join
 3c0a17984 MB-53565 Do not use join predicates as index span if under hash join
 f636e8c63 MB-53506 Additional fixes for unit tests
 b5c561c25 MB-53506 Fix some unit test issues
 6de332853 MB-53230. Change load_factor to moving avg
 8fcba5ab8 MB-53410 Revise fix.
 5ce5fd088 MB-53528 add SkipMetering to index connection
 8047faf2e MB-53526 register unbillable admin units with regulator
 cae373174 MB-53410 Reduce redundant value creation in UNNEST.
 be0672272 MB-52176:If use_cbo setting is true then run update statistics after CREATE INDEX for non deferred indexes and after BUILD INDEX for deferred indexes
 d0e4f965a updating go mod for building in module aware mode
 f3f2f3580 MB-53514 Revise fix.
 08344c66a MB-53514 Report KV RUs incurred for sequential scans
 120933d52 Revert "MB-53514 Report KV RUs incurred for sequential scans"
 f70f23c4f MB-53514 Report KV RUs incurred for sequential scans
 63fbf9734 MB-53406 Reduce atomics in lockless pool.
 c68ea6550 MB-53506 Prevent multiple mutations on a single key in UPSERT.
 407a2027f MB-53477 Alter threshold for spilling group-by without a quota.
 b7552a36e MB-53472:Add command line option -query_context or -qc to specify the query_context when connecting to the couchbase cluster to launch cbq shell
 1613c282d MB-53506 Improve sequential scan integration with IUD operations.
 c754bf3c9 MB-53176:Perform memory accounting when cached values are freed in the ExpressionScan operator
 e178f11e6 MB-33748 Go fmt
 174adf156 MB-33748 Making time in catalogs ISO-8601 format
 73e10feeb MB-53230. Use minimum value
 1c61edc90 MB-51928 Additional fixes for join filter
 839c44620 MB-44219 credentials not always available to planner
 1c9c1c6a1 MB-53439 MB-50997 scope UDF definitions stored in _system scope, open system:functions
 456304363 MB-53446 Honor USE NL hint for kv range scan
 d1fc29ec6 MB-53444 Avoid invalid numbers when calculating group cost
 f2d314563 MB-44219: In serverless mode, check if user has permissions on bucket when executing a scoped function
 b72782619 MB-53434. return false only on error
 519566933 go mod tidy (update to bleve@v2.3.4)
 c327709fe MB-52098: go mod tidy
 ab8a702f2 MB-53427 cbq vim-mode: enhance commands
 a85b778e2 MB-53420 consider empty non nil parameter list for inline UDF caching
 568270f72 MB-53416. Set Conditional for NVL/NVL2/Decode/IfMissing/IfNull
 6502af299 MB-53391 Fix value tracking for RAW projections
 d1e41aa49 MB-29604 Fix test cases.
 0b269607c MB-44305 Spill sort and group by to disk.
 e8b2e1f49 MB-53394 Don't escape HTML in errors and warnings.
 8d2e43a17 MB-29604 Return warning on division by zero.
 6a6d16cb5 MB-53372 properly formalize select in UDF with variables as non correlated, avoid caching for UDF select
 0faaadf0c MB-53377 Proper set up of join keys when converting from ANSI join to lookup join
 f47e3d29b MB-53371 Use 64-bit value for 'Uses'
 86c6a347a MB-53353 Adjust unbounded folllowing window range
 039b1d27c GOCBC-1325 Add Transaction logger
 c4f13bfb9 MB-52952 - [FTS] Improving the precision of stats used for WUs metering
 4d2617f4a MB-44219 In serverless mode if the user is not an admin user and tries to access a collection, scope or bucket that does not exist or they do not have permissions on:  If the user has permissions on the bucket in the keyspace they are trying to access, display the specific error message. If not, display the generic access denied message.
 758bc9411 MB-52176 Perform automatic UPDATE STATISTICS for CREATE PRIMARY INDEX
 38fa22e85 MB-53254 Ignore transaction binary documents in sequential scans
 5dd58d359 MB-53291 Administrator is throttled
 5fcf49bd5 MB-53298 Do not return error for MISSING value in join processing
 8b1553f13 MB-53248 sub doc api to account for rus
 34267f151 MB-52971 Handing CheckResultReject
 0d0d028ec MB-53260 Fix last_scan_time for sequential scans.
 70e3a059c -= passing TlsCAFile to regulator init (unused/deprecated)
 c32b4fb19 "MB-53230 load factor of query node"
 28049ab09 MB-51961 Fix typo
 47cd8a06a Build failure Revert "MB-53230 load factor of query node"
 46c894124 Revert "MB-44305 Spill sort and group by to disk."
 9f11a3e8b MB-51961 Allow primary index to be used on inner of nested-loop join
 95839d370 MB-44305 Spill sort and group by to disk.
 4d1078dd1 MB-53230 load factor of query node
 422c73fe7 MB-53235 leave item quota management to producer for serialized operators
gometa is at 05cb6b2, changes since last good build: none
ns_server is at 90035b0, changes since last good build: 
 90035b0e7 MB-52195 Tag "system" collections to not be metered
 dd4f8b6de MB-52142: Fix default for bucket throttle setting
 f5a520077 MB-50537: Don't allow strict n2n encryption if external unencrypted dist
 823c23c4d MB-53478: Fix saving anonymous functions to disk
 badb66c8f MB-52142: Add throttle limits to bucket config
 f09932bb4 MB-52044 Fix eaccess crash
 f2b6fed41 MB-52142: Add throttle limits to internal settings
 2efa104ee MB-52350 Allow setting per-bucket storage limits
 7e4c5811d Prevent memcached bucket creation on serverless
 28db38600 MB-52350 Fix default values for storage limits
 8fff21898 MB-51516: Don't clamp in_docs estimate to current checkpoint items
 c2a03f4bf MB-52226: Introduce pause/resume APIs that are stubbed out
 9cffcd45e MB-23768: Don't allow user backup in mixed clusters
 f3e9ba2f0 MB-23768: Fix validator:json_array()
 9e90c3142 MB-23768: Add menelaus_users:store_users and use it for restore
 1f3a00032 MB-23768: Add replicated_dets:change_multiple/2
 0c909be9c MB-23768: [rbac] Make sure we compress docs only once when ...
 a804ff225 MB-23768: Add PUT /settings/rbac/backup
 85a2b99a7 MB-23768: Move security_roles_access and ldap_access checks...
 807549ea8 MB-23768: Fix validator:apply_multi_params
 c85963d04 MB-23768: Add GET /settings/rbac/backup
 253346b7b MB-23768: Call menelaus_roles:validate_roles when validating...
 7b4f6c40e MB-23768: Remove unnecessary has_permission(Permission, Req) check
 8301f1565 MB-23768: Remove permission param in verify_ldap_access
 b08f974b7 MB-53326 Push CCCP payload on all kv nodes
 19898a852 MB-52350 Fix unused variable
 666af3431 MB-52350 Add storage limits to bucket config
 8ae0cbb85 MB-52350 Add storage limits to internal settings
 6b01ad610 MB-53423 Adjust bucket maximums for _system scope
 44aa2ee1f MB-53288: New query node-quota parameter
 207277058 MB-53352 Report the running config profile
 65faa5fe7 MB-51738 Use this_node() in ns_memcached
 3ab091f2e MB-51738 Define this_node() to handle distribution crash
 f9693814a MB-53192: Add upgrade for memory alerts
 763b16746 MB-53193: Reenable autofailover popup alerts
 97fce2439 MB-47905: Pass client cert path to services
 e9572c7c9 Update regulator frequently_changed_key prefix to /regulator/report
 55725498f MB-53323: consider keep nodes when placing buckets in rebalance
 492ae395e Add isServerless to /pools result
couchstore is at 803ec5f, changes since last good build: 
 803ec5f Refactor: mcbp::datatype moved to cb::mcbp::datatype
forestdb is at acba458, changes since last good build: none
kv_engine is at 1d85f5a, changes since last good build: 
 1d85f5a55 MB-53127: Document write should clear read usage
 d69988fa9 MB-52311: [1/n] Pause / Resume Bucket: opcodes
 225d4a7ea Refactor bucket delete to add extra extra unit tests
 c14da3f90 MB-53510: Refactor bucket creation
 8a3da42c2 MB-53543: Disable BackfillSmallBuffer test
 cb7a5b432 MB-53304: Enforce holding of stateLock in VBucket::queueItem [2/3]
 e74ba64e4 MB-53304: Enforce holding of stateLock in VBucket::queueItem [1/3]
 0ceebade0 Remove ServerCallbackIface
 19d61765d MB-52553: Don't special-case persistence cursor in CM::addStats
 5836d9a70 MB-50984: Remove max_checkpoints hard limit on the single vbucket
 47572e321 MB-50984: Default checkpoint_destruction_tasks=2
 00201d310 MB-53523: Only check snap start vs last snap end if active VB
 c90a937a4 Reformat test_reader_thread_starvation_warmup
 dec1c6c2c Merge "Merge branch 'neo' into 'master'"
 af74c95fe Refactor: CheckpointManager::registerCursorBySeqno()
 a933cf568 MB-53448: DCP_ADD_STREAM_FLAG_TO_LATEST should use highSeqno of collections(s) in filter
 08438bfb8 MB-53259: Update DCP Consumer byffer-size at dynamic Bucket Quota change
 e61d09645 [Refactor] deselect bucket before trying to delete
 afac71aab MB-53055: Fix Checkpoint::isEmptyByExpel() semantic
 ae8baf2dc Remove unused code from kvstore_test
 6f5ba689c [Refactor] Move bufferevent related code to subclass
 2dd1745c6 MB-53498: Delay bucket type update
 d50b99685 Merge branch 'neo' into 'master'
 79de292f2 Merge branch 'neo' into 'master'
 34bc1c7d2 Merge "Merge branch 'neo' into 'master'"
 074db327f Enable KVStoreTest GetBySeqno for non-couchstore
 16bd96ae6 MB-53284: Use magma memory optimized writes in BucketQuotaChangeTest
 854eced08 Merge branch 'neo' into 'master'
 50f5747b7 Merge "Merge branch 'neo' into 'master'"
 c231af910 Cleanup: remove 'polling' durability timeout mode
 f5930b3ea Tidy: Checkpoint::queueDirty use structured binding in for loop
 a65ca2ba1 Merge branch 'neo' into 'master'
 16f186be2 Only regenerate serverless/configuration.json if exe changed
 194900077 MB-53052: Remove KVBucket::itemFreqDecayerIsSnoozed()
 4b4ad639d Refactor: Create factory method for Connection objects
 f3ac46848 MB-35297: Fix RangeScan sampling stats NotFound path
 7d3f297f7 MB-46738: Rename dcp_conn_buffer_ratio into dcp_consumer_buffer_ratio
 9bc866891 [Refactor] Remove the history field of sloppy gauge
 a7b78c756 MB-53055: Add highestExpelledSeqno to Checkpoint ostream
 a58bd636c MB-53055: Add highest_expelled_seqno to Checkpoint stats
 7e4587d1e Remove duplicate method in DurabilityEPBucketTest
 12136509b Add labels to Montonic<> members of Checkpoint
 795dd8dc0 MB-53055: Fix exception message in CM::registerCursorBySeqno
 200aa87ae Add "filter" capabilities to delete bucket
 6fcfed646 SetClusterConfig should create config-only bucket
 6990718c8 MB-52953: Remove refs to old replication-throttle params and stats
 0fdcf8882 MB-52953: Remove unused EPStats::replicationThrottleThreshold
 bc4592d8b MB-52953: Use mutation_mem_threshold in ReplicationThrottleEP::hasSomeMemory
 b1ed0feb2 MB-52953: Turn mutation_mem_threshold into mutation_mem_ratio
 45dd2db60 MB-53429: Hold vbState lock during pageOut
 ba18b10ca MB-53438: Acquire the vbState lock during disk backfill
 348287953 MB-53141: Return all if sampling range-scan requests samples > keys
 9da38ff86 MB-35297: Improve logging for RangeScan create/cancel
 79aa3dd72 MB-53100: Add extra seqno log information after we register a cursor
 415b3ec74 MB-53198: Do not abort warmup for shard if scan cancelled
 dc09bb535 Cleanup: Move mcbp::datatype to cb::mcbp::datatype
 a77fca118 MB-35297: Meter RangeScan create
 36d090abe MB-35297: Throttle RangeScan create/continue
 40321cf27 SetClusterConfig should handle all bucket states
 ac0c0486d Merge commit 'couchbase/neo~7' into trunk
 c615d15f2 Merge "Merge commit 'couchbase/neo~10' into trunk"
 a96e4a5e9 MB-52806: Disconnect DCP connections when they loose privilege
 6b7d68b4e MB-52158: Check for privilege in RangeScan continue/cancel
 b887f1f17 Merge commit 'couchbase/neo~10' into trunk
 2cfe963a7 Modernize config parsing [2/2]
 4a6018627 MB-53359: Add uniqe error code for config-bucket
 e0e5d5c98 MB-35297: Add EventDrivenTimeoutTask
 8bfdba483 Cleanup: move mcbp::subdoc under cb::mcbp::subdoc
 cf97e6792 Cleanup: Move mcbp::cas under cb::mcbp::cas
 3834eb115 MB-43127: Log succcess status from dumpCallback
 bcb730456 MB-52172 Refactor source file generation cmake target
 d847f8a55 MB-35297: Meter RangeScan key/values
 a7a610b48 Refactor: Rename CreateBucketCommandContext
 af47290a6 Refactor out wait code to separate method
 3c30a1142 Include all bucket states in "bucket_details "
 5d272f547 MB-53379: Allow Collection enabled clients to select COB
 0042495b9 MB-52975: Fold backfill create and scan into one invocation of run
 53f915d1d MB-35297: runtime must not be zero when backfill completes
 a811f317b MB-53359: Don't try to fetch bucket metrics from config-only bucket
 881774c5e MB-53354: Extend CheckpointMemoryTrackingTest suite for non-SSO case
 7d7389df7 Modernize parse_config [1/2]
 72e650860 Set the correct hostname for dcp metering test
 8325ff14b Remove support for DT_CONFIGFILE
 92c8f4fa8 Remove config_parse from server-api
 f85f41bad MB-35297: RangeScan document 'flags' should match GetMeta byte order
 3eccd2aa6 MB-53157: RangeScanCreate uuid should be a string
 67d4759c0 MB-52953: Add ReplicationThrottleEP::engine member
 c310b2f4a Don't use the term whitelist
 407905037 MB-53197: Add support for ClusterConfigOnly bucket
 f61b2e1c6 MB-53294: Introduce storage_bytes metering metric
 be1577087 MB-52953: Remove unused UseActiveVBMemThreshold
 ecbd40992 MB-35297: Add missing recvResponse / sendCommand from RangeScanTest/CreateInvalid
 e3bbe2ace MB-52953: Use only mutation_mem_threshold in VB::hasMemoryForStoredValue
 533286852 MB-53294: Refactor engine Prometheus metrics
 03056b2d2 MB-53294: Rename Cardinality -> MetricGroup
 8937d6e5a MB-52953: Default replication_throttle_threshold=93
 6579346af MB-52956: Update lastReadSeqno at the end of an OSO backfill
 3af167ac7 MB-52953: Move VBucket::mutationMemThreshold to KVBucket
 8c5af9915 MB-52854: Fix and re-enable the DcpConsumerBufferAckTest suite
 100a5b2af MB-52957: Avoid scan when collection high seqno < start
 cd6df9b81 Make wasFirst in ActiveStream snapshot functions const
 7bc7ee427 Sanity check that snap start > previous snap end
 8f324c470 MB-53184: Extend range-scan computed exclusive-end upto the input
 3f6fb6ba2 MB-46738: Remove Vbid arg from the buffer-ack DCP api
 cdc3c2f29 MB-52842: Fix intermittent failure in 'disk>RAM delete paged-out'
 1588cb007 Merge "Merge branch 'neo' into 'master'"
 552d9e2c7 MB-46738: Remove unused dcp_conn_buffer_size_max
 e44ee005e MB-46738: Remove unused dcp_conn_buffer_size
 769d20940 MB-52264: Add desiredMaxSize stat
 6809d7eae MB-46738: Ensure Consumer buffer size always ratio of bucket quota
 b05ebef25 Merge branch 'neo' into 'master'
 ab1ab27f8 Merge "Merge commit 'ea65052e' into 'couchbase/master'"
 a6e70fdae Merge commit 'ea65052e' into 'couchbase/master'
 503ae084b MB-46738: Make DcpFlowControlManager::engine const
 e5766a51e MB-46738: Make dcp_conn_buffer_ratio dynamic
 979159649 MB-53205: Hold VBucket stateLock while calling fetchValidValue
 89602bce3 Humpty-Dumpty: Failover exploration tool
 bb17d9439 MB-53197: [Refactor] create BucketManager::setClusterConfig
 a81e37998 Upgrade go version to 1.19 for tls_test
 256c78709 Merge "MB-52383: Merge branch 'cheshire-cat' into neo" into neo
 09bbfce5c Merge "MB-47851: Merge branch 'cheshire-cat' into neo" into neo
 e99ce1c4a Merge "MB-47267: Merge branch 'cheshire-cat' into neo" into neo
 112e09c36 Merge "MB-51373: Merge branch 'cheshire-cat' into neo" into neo
 281df3be1 Merge "Merge branch 'cheshire-cat' into neo" into neo
 46014c72f MB-52383: Merge branch 'cheshire-cat' into neo
 ecc2f6bb7 Change the logic for Unmetered privilege
 c73eaf5f5 MB-53100: Add streamName arg to MockActiveStream ctor
 ba7850f07 MB-47851: Merge branch 'cheshire-cat' into neo
 5edb02327 MB-47267: Merge branch 'cheshire-cat' into neo
 8db209a68 MB-51373: Merge branch 'cheshire-cat' into neo
 eb865cbb0 Merge branch 'cheshire-cat' into neo
 f656b5152 Merge "Merge branch 'cheshire-cat' into neo" into neo
 453eb9a98 MB-53282: Reset open_time in early return in close_and_rotate_file
 2a83a2a63 MB-52383: Merge branch 'mad-hatter' into cheshire-cat
 9c684fb52 Merge branch 'mad-hatter' into cheshire-cat
 852883091 Merge branch 'mad-hatter' into cheshire-cat
 0173173cb Revert "MB-52813: Don't call Seek for every call of ::scan"
 f1c3ddc67 Merge branch 'cheshire-cat' into neo
 349c2640c Set GOVERSION to 1.18 to remove warning from cmake
 abfb02f80 MB-46738: FCManager API takes DcpConsumer&
 4ab7dbaa3 MB-52264: Wait for memory to reduce before setting new quota
 d5d7b65d0 [serverless] Split Get metering test to individual tests
 fda7ec6b8 Remove old comment in PagingVisitor
 5d9bdbb44 MB-52633: Swap PagingVisitor freq counter histogram to flat array
 f494fa983 MB-51373: Merge branch 'mad-hatter' into cheshire-cat
 6d32e009a MB-52669: Specify GOVERSION without patch revision
 df808528f Merge "Merge branch 'neo'"
 eeb5cbad7 Merge branch 'neo'
 18a4cd691 MB-52793: Merge branch 'mad-hatter' into cheshire-cat
 b4c2fe22b Merge branch 'mad-hatter' into cheshire-cat
 c80c6f58c MB-51373: Inspect and correct Item objects created by KVStore
 ea65052eb MB-53046: [BP] Timeout SeqnoPersistenceRequests when no data is flushed
 5f6d5dc65 MB-47267 / MB-52383: Make backfill during warmup a PauseResume task
 4e51c38a8 MB-47851: Cancel any requests blocked on warmup if warmup stopped.
 2c6e95c8e MB-47267: Make ObjectRegistry getAllocSize atomic
 3d73de526 MB-52902: Populate kvstore rev if no vbstate found
 ad47f53b7 MB-51373: Inspect and correct Item objects created by KVStore
 8855aebe5 MB-52793: Ensure StoredValue::del updates datatype
 35086bc80 Merge remote-tracking branch 'couchbase/alice' into mad-hatter
 0df2087be MB-43055: [BP] Ensure ItemPager available is not left set to false
 6dfd920a8 MB-43453: mcctl: Use passwd from env or stdin
 b7d5bd362 MB-40531: [BP] Prefer paging from replicas if possible
Switching indexing to unstable
indexing is at 19ea81d, changes since last good build: none
Switching plasma to unstable
plasma is at cfa6534, changes since last good build: 
fatal: Invalid revision range 0141641db3ee3de853547c46ed58c647fc7c43a1..HEAD

Switching nitro to unstable
nitro is at 966c610, changes since last good build: none
Switching gometa to master
gometa is at 05cb6b2, changes since last good build: none
Switching testrunner to master
Submodule 'gauntlet' (https://github.com/pavithra-mahamani/gauntlet) registered for path 'gauntlet'
Submodule 'java_sdk_client' (https://github.com/couchbaselabs/java_sdk_client) registered for path 'java_sdk_client'
Submodule 'lib/capellaAPI' (https://github.com/couchbaselabs/CapellaRESTAPIs) registered for path 'lib/capellaAPI'
Submodule path 'gauntlet': checked out '4e2424851a59c6f4b4edfdb7e36fa6a0874d6300'
Submodule path 'java_sdk_client': checked out '961d8eb79ec29bad962b87425eca59fc43c6fe07'
Submodule path 'lib/capellaAPI': checked out '879091aa331e3d72f913b8192f563715d9e8597a'
testrunner is at f2361d1, changes since last good build: none
Pulling in uncommitted change 179474 at refs/changes/74/179474/1
Total 74 (delta 60), reused 70 (delta 60)
[unstable 3575497f] MB100 : Add function to satisfy datastore.Context interface
 Author: Sai Krishna Teja Kommaraju 
 Date: Thu Sep 1 22:25:22 2022 +0530
 4 files changed, 16 insertions(+)
Building community edition
Building cmakefiles and deps [CE]
Building main product [CE]
Build CE finished
BUILD_ENTERPRISE empty. Building enterprise edition
Building Enterprise Edition
Building cmakefiles and deps [EE]
Building main product [EE]
Build EE finished

Testing

Started testing at 2022/09/02 07:08:59
Testing mode: sanity,unit,functional,integration
Using storage type: plasma
Setting ulimit to 200000
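The ulimit step raises the open-file limit for the test processes. A minimal Go sketch of an equivalent in-process call (illustrative only, not the harness's actual code; the 200000 value is taken from the line above):

    package main

    import (
        "fmt"
        "syscall"
    )

    func main() {
        // Raise the soft and hard open-file limits to 200000, mirroring `ulimit -n 200000`.
        lim := syscall.Rlimit{Cur: 200000, Max: 200000}
        if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
            fmt.Println("setrlimit failed:", err)
            return
        }
        var got syscall.Rlimit
        _ = syscall.Getrlimit(syscall.RLIMIT_NOFILE, &got)
        fmt.Printf("open-file limit now %d (max %d)\n", got.Cur, got.Max)
    }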

Simple Test

Sep 02 07:14:00 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Sep 02 07:17:51 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Sep 02 07:18:36 do_warmup_100k (memcapable.WarmUpMemcachedTest) ... ok
Sep 02 07:20:02 test_view_ops (view.createdeleteview.CreateDeleteViewTests) ... ok
Sep 02 07:20:55 b" 'stop_on_failure': 'True'}"
Sep 02 07:20:55 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops,nodes_in=3,replicas=1,items=50000,get-logs-cluster-run=True,doc_ops=create;update;delete'
Sep 02 07:20:55 b"{'nodes_in': '3', 'replicas': '1', 'items': '50000', 'get-logs-cluster-run': 'True', 'doc_ops': 'create;update;delete', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_1'}"
Sep 02 07:20:55 b'-->result: '
Sep 02 07:20:55 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 0'
Sep 02 07:20:55 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops,nodes_in=3,bucket_type=ephemeral,replicas=1,items=50000,get-logs-cluster-run=True,doc_ops=create;update;delete'
Sep 02 07:20:55 b"{'nodes_in': '3', 'bucket_type': 'ephemeral', 'replicas': '1', 'items': '50000', 'get-logs-cluster-run': 'True', 'doc_ops': 'create;update;delete', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 2, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_2'}"
Sep 02 07:20:55 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:20:55 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t memcapable.WarmUpMemcachedTest.do_warmup_100k,get-logs-cluster-run=True'
Sep 02 07:20:55 b"{'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 3, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_3'}"
Sep 02 07:20:55 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:20:55 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 02 07:20:55 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.createdeleteview.CreateDeleteViewTests.test_view_ops,ddoc_ops=create,test_with_view=True,num_ddocs=1,num_views_per_ddoc=10,items=1000,skip_cleanup=False,get-logs-cluster-run=True'
Sep 02 07:20:55 b"{'ddoc_ops': 'create', 'test_with_view': 'True', 'num_ddocs': '1', 'num_views_per_ddoc': '10', 'items': '1000', 'skip_cleanup': 'False', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 4, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_4'}"
Sep 02 07:20:55 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:20:55 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 02 07:20:55 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 02 07:31:19 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in,num_nodes_to_add=1,skip_rebalance=true,docs-per-day=1,timeout=1200,get-logs-cluster-run=True'
Sep 02 07:31:19 test_employee_dataset_startkey_endkey_queries_rebalance_in (view.viewquerytests.ViewQueryTests) ... ok
Sep 02 07:32:02 test_simple_dataset_stale_queries_data_modification (view.viewquerytests.ViewQueryTests) ... ok
Sep 02 07:35:47 load_with_ops (xdcr.uniXDCR.unidirectional) ... ok
Sep 02 07:39:41 load_with_failover (xdcr.uniXDCR.unidirectional) ... ok
Sep 02 07:42:23 suite_tearDown (xdcr.uniXDCR.unidirectional) ... ok
Sep 02 07:42:23 b"{'num_nodes_to_add': '1', 'skip_rebalance': 'true', 'docs-per-day': '1', 'timeout': '1200', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 5, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_5'}"
Sep 02 07:42:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:42:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 1 , fail 0'
Sep 02 07:42:23 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification,num-docs=1000,skip_rebalance=true,timeout=1200,get-logs-cluster-run=True'
Sep 02 07:42:23 b"{'num-docs': '1000', 'skip_rebalance': 'true', 'timeout': '1200', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 6, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_6'}"
Sep 02 07:42:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:42:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Sep 02 07:42:23 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_ops,replicas=1,items=10000,value_size=128,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,get-logs-cluster-run=True'
Sep 02 07:42:23 b"{'replicas': '1', 'items': '10000', 'value_size': '128', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 7, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_7'}"
Sep 02 07:42:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:42:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Sep 02 07:42:23 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 1 , fail 0'
Sep 02 07:42:23 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_failover,replicas=1,items=10000,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,failover=source,get-logs-cluster-run=True'
Sep 02 07:42:23 b"{'replicas': '1', 'items': '10000', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'failover': 'source', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 8, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_07-09-21/test_8'}"
Sep 02 07:42:23 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 2 , fail 0'
Sep 02 07:42:23 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Sep 02 07:42:23 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Sep 02 07:42:23 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 2 , fail 0'
Sep 02 07:42:23 b'Run after suite setup for xdcr.uniXDCR.unidirectional.load_with_failover'
Sep 02 07:42:23 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Sep 02 07:42:23 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Sep 02 07:42:23 b"('memcapable.WarmUpMemcachedTest.do_warmup_100k', ' pass')"
Sep 02 07:42:23 b"('view.createdeleteview.CreateDeleteViewTests.test_view_ops', ' pass')"
Sep 02 07:42:23 b"('view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in', ' pass')"
Sep 02 07:42:23 b"('view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification', ' pass')"
Sep 02 07:42:23 b"('xdcr.uniXDCR.unidirectional.load_with_ops', ' pass')"
Sep 02 07:42:23 b"('xdcr.uniXDCR.unidirectional.load_with_failover', ' pass')"

Unit tests

=== RUN   TestMerger
--- PASS: TestMerger (0.02s)
=== RUN   TestInsert
--- PASS: TestInsert (0.00s)
=== RUN   TestInsertPerf
16000 items took 15.626999ms -> 1.0238690102942991e+06 items/s conflicts 6
--- PASS: TestInsertPerf (0.02s)
=== RUN   TestGetPerf
16000 items took 7.327468ms -> 2.183564636515642e+06 items/s
--- PASS: TestGetPerf (0.01s)
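The items/s figures above are simply items divided by elapsed time; a quick check of the TestInsertPerf and TestGetPerf numbers (values copied from the lines above):

    package main

    import "fmt"

    func main() {
        // TestInsertPerf: 16000 items in 15.626999ms
        fmt.Printf("%.0f items/s\n", 16000/0.015626999) // ~1.0239e+06
        // TestGetPerf: 16000 items in 7.327468ms
        fmt.Printf("%.0f items/s\n", 16000/0.007327468) // ~2.1836e+06
    }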
=== RUN   TestGetRangeSplitItems
{
"node_count":             1000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3450,
"memory_used":            45520,
"node_allocs":            1000,
"node_frees":             0,
"level_node_distribution":{
"level0": 747,
"level1": 181,
"level2": 56,
"level3": 13,
"level4": 2,
"level5": 1,
"level6": 0,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Split range keys [105 161 346 379 434 523 713]
No of items in each range [105 56 185 33 55 89 190 287]
--- PASS: TestGetRangeSplitItems (0.00s)
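The skiplist stats above are consistent with a geometric level distribution: next_pointers_per_node of about 1.345 is close to 1/(1-p) = 4/3 for a promotion probability p of roughly 1/4, and the per-level counts track node_count*(1-p)*p^k. A small sketch comparing expected and observed counts (p = 1/4 is inferred from these stats, not taken from the nitro source):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const p = 0.25     // assumed promotion probability, inferred from the stats above
        const nodes = 1000 // node_count from TestGetRangeSplitItems
        observed := []int{747, 181, 56, 13, 2, 1}
        fmt.Printf("expected next_pointers_per_node: %.4f (observed 1.3450)\n", 1/(1-p))
        for lvl, obs := range observed {
            exp := nodes * (1 - p) * math.Pow(p, float64(lvl))
            fmt.Printf("level%d: expected %.0f, observed %d\n", lvl, exp, obs)
        }
    }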
=== RUN   TestBuilder
{
"node_count":             50000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3368,
"memory_used":            2269408,
"node_allocs":            50000,
"node_frees":             0,
"level_node_distribution":{
"level0": 37380,
"level1": 9466,
"level2": 2370,
"level3": 578,
"level4": 152,
"level5": 40,
"level6": 9,
"level7": 4,
"level8": 1,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Took 9.27672ms to build 50000 items, 5.389836e+06 items/sec
Took 1.205922ms to iterate 50000 items
--- PASS: TestBuilder (0.01s)
=== RUN   TestNodeDCAS
--- PASS: TestNodeDCAS (0.00s)
PASS
ok  	github.com/couchbase/nitro/skiplist	0.070s
=== RUN   TestZstdSimple
--- PASS: TestZstdSimple (0.00s)
=== RUN   TestZstdCompressBound
--- PASS: TestZstdCompressBound (3.09s)
=== RUN   TestZstdErrors
--- PASS: TestZstdErrors (0.00s)
=== RUN   TestZstdCompressLevels
--- PASS: TestZstdCompressLevels (0.73s)
=== RUN   TestZstdEmptySrc
--- PASS: TestZstdEmptySrc (0.00s)
=== RUN   TestZstdLargeSrc
--- PASS: TestZstdLargeSrc (0.00s)
PASS
ok  	github.com/couchbase/plasma/zstd	3.824s
=== RUN   TestAutoTunerWriteUsageStats
--- PASS: TestAutoTunerWriteUsageStats (10.05s)
=== RUN   TestAutoTunerReadUsageStats
--- PASS: TestAutoTunerReadUsageStats (7.56s)
=== RUN   TestAutoTunerCleanerUsageStats
--- PASS: TestAutoTunerCleanerUsageStats (8.60s)
=== RUN   TestAutoTunerDiskStats
--- PASS: TestAutoTunerDiskStats (2.50s)
=== RUN   TestAutoTunerTargetFragRatio
--- PASS: TestAutoTunerTargetFragRatio (0.00s)
=== RUN   TestAutoTunerExcessUsedSpace
--- PASS: TestAutoTunerExcessUsedSpace (0.00s)
=== RUN   TestAutoTunerUsedSpaceRatio
--- PASS: TestAutoTunerUsedSpaceRatio (0.00s)
=== RUN   TestAutoTunerAdjustFragRatio
--- PASS: TestAutoTunerAdjustFragRatio (0.00s)
=== RUN   TestAutoTuneFlushBufferAdjustMemQuotaSingleShard
--- PASS: TestAutoTuneFlushBufferAdjustMemQuotaSingleShard (18.28s)
=== RUN   TestAutoTuneFlushBufferAdjustMemQuotaManyShards
--- PASS: TestAutoTuneFlushBufferAdjustMemQuotaManyShards (11.20s)
=== RUN   TestAutoTuneFlushBufferRebalanceIdleShards
--- PASS: TestAutoTuneFlushBufferRebalanceIdleShards (9.64s)
=== RUN   TestAutoTuneFlushBufferGetUsedMemory
--- PASS: TestAutoTuneFlushBufferGetUsedMemory (17.71s)
=== RUN   TestBloom
--- PASS: TestBloom (4.60s)
=== RUN   TestBloomDisableEnable
--- PASS: TestBloomDisableEnable (3.67s)
=== RUN   TestBloomDisable
--- PASS: TestBloomDisable (0.04s)
=== RUN   TestBloomFreeDuringLookup
--- PASS: TestBloomFreeDuringLookup (0.03s)
=== RUN   TestBloomRecoveryFreeDuringLookup
--- PASS: TestBloomRecoveryFreeDuringLookup (0.07s)
=== RUN   TestBloomRecoverySwapInLookup
--- PASS: TestBloomRecoverySwapInLookup (0.09s)
=== RUN   TestBloomRecoverySwapOutLookup
--- PASS: TestBloomRecoverySwapOutLookup (0.08s)
=== RUN   TestBloomRecoveryInserts
--- PASS: TestBloomRecoveryInserts (0.09s)
=== RUN   TestBloomRecovery
--- PASS: TestBloomRecovery (0.12s)
=== RUN   TestBloomStats
--- PASS: TestBloomStats (3.64s)
=== RUN   TestBloomStatsRecovery
--- PASS: TestBloomStatsRecovery (0.82s)
=== RUN   TestBloomFilterSimple
--- PASS: TestBloomFilterSimple (0.00s)
=== RUN   TestBloomFilterConcurrent
--- PASS: TestBloomFilterConcurrent (21.99s)
=== RUN   TestBitArrayConcurrent
--- PASS: TestBitArrayConcurrent (0.99s)
=== RUN   TestBloomCapacity
--- PASS: TestBloomCapacity (0.00s)
=== RUN   TestBloomNumHashFuncs
--- PASS: TestBloomNumHashFuncs (0.00s)
=== RUN   TestBloomTestAndAdd
--- PASS: TestBloomTestAndAdd (0.23s)
=== RUN   TestBloomReset
--- PASS: TestBloomReset (0.00s)
=== RUN   TestLFSCopier
--- PASS: TestLFSCopier (0.00s)
=== RUN   TestLFSCopierNumBytes
--- PASS: TestLFSCopierNumBytes (0.01s)
=== RUN   TestSBCopyConcurrent
--- PASS: TestSBCopyConcurrent (0.21s)
=== RUN   TestSBCopyCorrupt
--- PASS: TestSBCopyCorrupt (0.03s)
=== RUN   TestLSSCopyHeadTailSingleSegment
--- PASS: TestLSSCopyHeadTailSingleSegment (0.02s)
=== RUN   TestLSSCopyFullSegments
--- PASS: TestLSSCopyFullSegments (0.64s)
=== RUN   TestLSSCopyPartialSegments
--- PASS: TestLSSCopyPartialSegments (0.07s)
=== RUN   TestLSSCopyHolePunching
--- PASS: TestLSSCopyHolePunching (0.59s)
=== RUN   TestLSSCopyConcurrent
--- PASS: TestLSSCopyConcurrent (0.77s)
=== RUN   TestShardCopySimple
--- PASS: TestShardCopySimple (0.25s)
=== RUN   TestShardCopyMetadataCorrupted
--- PASS: TestShardCopyMetadataCorrupted (0.05s)
=== RUN   TestShardCopyLSSMetadataCorrupted
--- PASS: TestShardCopyLSSMetadataCorrupted (0.08s)
=== RUN   TestShardCopyBeforeRecovery
--- PASS: TestShardCopyBeforeRecovery (0.00s)
=== RUN   TestShardCopySkipLog
--- PASS: TestShardCopySkipLog (0.65s)
=== RUN   TestShardCopyAddDestroyInstance
--- PASS: TestShardCopyAddDestroyInstance (1.62s)
=== RUN   TestShardCopyRestoreManyShards
--- PASS: TestShardCopyRestoreManyShards (5.68s)
=== RUN   TestShardCopyRestoreConcurrentLogCleaning
--- PASS: TestShardCopyRestoreConcurrentLogCleaning (21.73s)
=== RUN   TestShardCopyRestorePartialRollback
--- PASS: TestShardCopyRestorePartialRollback (12.08s)
=== RUN   TestInvalidMVCCRollback
--- PASS: TestInvalidMVCCRollback (0.23s)
=== RUN   TestShardCopyRestoreConcurrentPurges
--- PASS: TestShardCopyRestoreConcurrentPurges (12.98s)
=== RUN   TestShardCopyDuplicateIndex
--- PASS: TestShardCopyDuplicateIndex (0.11s)
=== RUN   TestTenantCopy
--- PASS: TestTenantCopy (3.70s)
=== RUN   TestDiag
--- PASS: TestDiag (0.47s)
=== RUN   TestDumpLog
--- PASS: TestDumpLog (0.07s)
=== RUN   TestExtrasN1
=== RUN   TestExtrasN2
=== RUN   TestExtrasN3
=== RUN   TestGMRecovery
--- PASS: TestGMRecovery (8.39s)
=== RUN   TestIteratorSimple
--- PASS: TestIteratorSimple (4.77s)
=== RUN   TestIteratorSeek
--- PASS: TestIteratorSeek (5.85s)
=== RUN   TestPlasmaIteratorSeekFirst
--- PASS: TestPlasmaIteratorSeekFirst (0.52s)
=== RUN   TestPlasmaIteratorSwapin
--- PASS: TestPlasmaIteratorSwapin (5.16s)
=== RUN   TestIteratorSetEnd
--- PASS: TestIteratorSetEnd (0.74s)
=== RUN   TestIterHiItm
--- PASS: TestIterHiItm (1.83s)
=== RUN   TestIterDeleteSplitMerge
--- PASS: TestIterDeleteSplitMerge (0.03s)
=== RUN   TestKeySamplingSingle
--- PASS: TestKeySamplingSingle (0.10s)
=== RUN   TestKeySamplingAll
--- PASS: TestKeySamplingAll (0.12s)
=== RUN   TestKeySamplingEmpty
--- PASS: TestKeySamplingEmpty (0.03s)
=== RUN   TestKeySamplingExceed
--- PASS: TestKeySamplingExceed (0.10s)
=== RUN   TestLogOperation
--- PASS: TestLogOperation (59.96s)
=== RUN   TestLogLargeSize
--- PASS: TestLogLargeSize (0.19s)
=== RUN   TestLogTrim
--- PASS: TestLogTrim (59.61s)
=== RUN   TestLogSuperblockCorruption
--- PASS: TestLogSuperblockCorruption (58.59s)
=== RUN   TestLogTrimHolePunch
--- PASS: TestLogTrimHolePunch (49.68s)
=== RUN   TestLogMissingAndTruncatedSegments
--- PASS: TestLogMissingAndTruncatedSegments (0.07s)
=== RUN   TestLogReadBeyondMaxFileIndex
--- PASS: TestLogReadBeyondMaxFileIndex (2.56s)
=== RUN   TestLogReadEOFWithMMap
--- PASS: TestLogReadEOFWithMMap (0.00s)
=== RUN   TestShardLSSCleaning
--- PASS: TestShardLSSCleaning (0.23s)
=== RUN   TestShardLSSCleaningDeleteInstance
--- PASS: TestShardLSSCleaningDeleteInstance (0.21s)
=== RUN   TestShardLSSCleaningCorruptInstance
--- PASS: TestShardLSSCleaningCorruptInstance (0.19s)
=== RUN   TestPlasmaLSSCleaner
--- PASS: TestPlasmaLSSCleaner (218.61s)
=== RUN   TestLSSBasic
--- PASS: TestLSSBasic (0.08s)
=== RUN   TestLSSConcurrent
--- PASS: TestLSSConcurrent (0.90s)
=== RUN   TestLSSCleaner
--- PASS: TestLSSCleaner (12.79s)
=== RUN   TestLSSSuperBlock
--- PASS: TestLSSSuperBlock (1.14s)
=== RUN   TestLSSLargeSinglePayload
--- PASS: TestLSSLargeSinglePayload (0.82s)
=== RUN   TestLSSUnstableEnvironment
--- PASS: TestLSSUnstableEnvironment (10.23s)
=== RUN   TestLSSSmallFlushBuffer
--- PASS: TestLSSSmallFlushBuffer (0.01s)
=== RUN   TestLSSTrimFlushBufferGC
--- PASS: TestLSSTrimFlushBufferGC (1.44s)
=== RUN   TestLSSTrimFlushBufferNoIO
--- PASS: TestLSSTrimFlushBufferNoIO (30.01s)
=== RUN   TestLSSTrimFlushBufferWithIO
--- PASS: TestLSSTrimFlushBufferWithIO (33.22s)
=== RUN   TestLSSExtendFlushBufferWithIO
--- PASS: TestLSSExtendFlushBufferWithIO (30.02s)
=== RUN   TestLSSCtxTrimFlushBuffer
--- PASS: TestLSSCtxTrimFlushBuffer (3.78s)
=== RUN   TestLSSNegativeGetFlushBufferMemory
--- PASS: TestLSSNegativeGetFlushBufferMemory (0.01s)
=== RUN   TestMem
Plasma: Adaptive memory quota tuning (decrementing): RSS:792526848, freePercent:89.83908290710059, currentQuota=1099511627776, newQuota=1073741824, netGrowth=0, percent=99
Plasma: Adaptive memory quota tuning (incrementing): RSS:792223744, freePercent: 89.83908290710059, currentQuota=0, newQuota=10995116277
--- PASS: TestMem (15.01s)
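The quota and RSS figures in the TestMem output are plain byte counts; a small illustrative helper to render them (values copied from the log line above):

    package main

    import "fmt"

    func human(b float64) string {
        units := []string{"B", "KiB", "MiB", "GiB", "TiB"}
        i := 0
        for b >= 1024 && i < len(units)-1 {
            b /= 1024
            i++
        }
        return fmt.Sprintf("%.1f %s", b, units[i])
    }

    func main() {
        fmt.Println(human(792526848))     // RSS ~755.8 MiB
        fmt.Println(human(1099511627776)) // currentQuota = 1.0 TiB
        fmt.Println(human(1073741824))    // newQuota = 1.0 GiB
    }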
=== RUN   TestCpu
--- PASS: TestCpu (14.73s)
=== RUN   TestTopTen20
--- PASS: TestTopTen20 (0.61s)
=== RUN   TestTopTen5
--- PASS: TestTopTen5 (0.16s)
=== RUN   TestMVCCSimple
--- PASS: TestMVCCSimple (0.18s)
=== RUN   TestMVCCLookup
--- PASS: TestMVCCLookup (0.12s)
=== RUN   TestMVCCIteratorRefresh
--- PASS: TestMVCCIteratorRefresh (4.84s)
=== RUN   TestMVCCIteratorRefreshEveryRow
--- PASS: TestMVCCIteratorRefreshEveryRow (0.77s)
=== RUN   TestMVCCGarbageCollection
--- PASS: TestMVCCGarbageCollection (0.09s)
=== RUN   TestMVCCRecoveryPoint
--- PASS: TestMVCCRecoveryPoint (1.78s)
=== RUN   TestMVCCRollbackMergeSibling
--- PASS: TestMVCCRollbackMergeSibling (0.07s)
=== RUN   TestMVCCRollbackCompact
--- PASS: TestMVCCRollbackCompact (0.05s)
=== RUN   TestMVCCRollbackSplit
--- PASS: TestMVCCRollbackSplit (0.07s)
=== RUN   TestMVCCRollbackItemsNotInSnapshot
--- PASS: TestMVCCRollbackItemsNotInSnapshot (0.14s)
=== RUN   TestMVCCRecoveryPointRollbackedSnapshot
--- PASS: TestMVCCRecoveryPointRollbackedSnapshot (0.86s)
=== RUN   TestMVCCRollbackBetweenRecoveryPoint
--- PASS: TestMVCCRollbackBetweenRecoveryPoint (0.86s)
=== RUN   TestMVCCRecoveryPointCrash
--- PASS: TestMVCCRecoveryPointCrash (0.09s)
=== RUN   TestMVCCIntervalGC
--- PASS: TestMVCCIntervalGC (0.22s)
=== RUN   TestMVCCItemsCount
--- PASS: TestMVCCItemsCount (0.32s)
=== RUN   TestLargeItems
--- PASS: TestLargeItems (105.74s)
=== RUN   TestTooLargeKey
--- PASS: TestTooLargeKey (3.26s)
=== RUN   TestMVCCItemUpdateSize
--- PASS: TestMVCCItemUpdateSize (0.22s)
=== RUN   TestEvictionStats
--- PASS: TestEvictionStats (0.41s)
=== RUN   TestReaderCacheStats
--- PASS: TestReaderCacheStats (1.11s)
=== RUN   TestInvalidSnapshot
--- PASS: TestInvalidSnapshot (0.88s)
=== RUN   TestEmptyKeyInsert
--- PASS: TestEmptyKeyInsert (0.03s)
=== RUN   TestMVCCRecoveryPointError
--- PASS: TestMVCCRecoveryPointError (0.03s)
=== RUN   TestMVCCReaderPurgeSequential
--- PASS: TestMVCCReaderPurgeSequential (0.21s)
=== RUN   TestMVCCReaderNoPurge
--- PASS: TestMVCCReaderNoPurge (0.20s)
=== RUN   TestMVCCReaderPurgeAfterUpdate
--- PASS: TestMVCCReaderPurgeAfterUpdate (0.19s)
=== RUN   TestMVCCReaderPurgeAfterRollback
--- PASS: TestMVCCReaderPurgeAfterRollback (0.25s)
=== RUN   TestMVCCReaderPurgeSimple
--- PASS: TestMVCCReaderPurgeSimple (0.04s)
=== RUN   TestMVCCReaderPurgeRandom
--- PASS: TestMVCCReaderPurgeRandom (0.18s)
=== RUN   TestMVCCReaderPurgePageFlag
--- PASS: TestMVCCReaderPurgePageFlag (0.12s)
=== RUN   TestMVCCPurgeRatioWithRollback
--- PASS: TestMVCCPurgeRatioWithRollback (15.84s)
=== RUN   TestComputeItemsCountMVCCWithRollbackI
--- PASS: TestComputeItemsCountMVCCWithRollbackI (0.11s)
=== RUN   TestComputeItemsCountMVCCWithRollbackII
--- PASS: TestComputeItemsCountMVCCWithRollbackII (0.07s)
=== RUN   TestComputeItemsCountMVCCWithRollbackIII
--- PASS: TestComputeItemsCountMVCCWithRollbackIII (0.05s)
=== RUN   TestComputeItemsCountMVCCWithRollbackIV
--- PASS: TestComputeItemsCountMVCCWithRollbackIV (0.12s)
=== RUN   TestMVCCPurgedRecordsWithCompactFullMarshalAndCascadedEmptyPagesMerge
--- PASS: TestMVCCPurgedRecordsWithCompactFullMarshalAndCascadedEmptyPagesMerge (1.73s)
=== RUN   TestMaxDeltaChainLenWithCascadedEmptyPagesMerge
--- PASS: TestMaxDeltaChainLenWithCascadedEmptyPagesMerge (1.42s)
=== RUN   TestAutoHoleCleaner
--- PASS: TestAutoHoleCleaner (34.65s)
=== RUN   TestAutoHoleCleaner5Indexes
--- PASS: TestAutoHoleCleaner5Indexes (197.77s)
=== RUN   TestIteratorReportedHoleRegionBoundary
--- PASS: TestIteratorReportedHoleRegionBoundary (0.16s)
=== RUN   TestFullRangeHoleScans
--- PASS: TestFullRangeHoleScans (0.32s)
=== RUN   TestOverlappingRangeHoleScans
--- PASS: TestOverlappingRangeHoleScans (0.34s)
=== RUN   TestMVCCIteratorSMRRefreshOnHoleScan
--- PASS: TestMVCCIteratorSMRRefreshOnHoleScan (7.57s)
=== RUN   TestAutoHoleCleanerWithRecovery
--- PASS: TestAutoHoleCleanerWithRecovery (2.96s)
=== RUN   TestPageMergeCorrectness2
--- PASS: TestPageMergeCorrectness2 (0.00s)
=== RUN   TestPageMergeCorrectness
--- PASS: TestPageMergeCorrectness (0.00s)
=== RUN   TestPageMarshalFull
--- PASS: TestPageMarshalFull (0.01s)
=== RUN   TestPageMergeMarshal
--- PASS: TestPageMergeMarshal (0.00s)
=== RUN   TestPageOperations
--- PASS: TestPageOperations (0.03s)
=== RUN   TestPageIterator
--- PASS: TestPageIterator (0.00s)
=== RUN   TestPageMarshal
--- PASS: TestPageMarshal (0.02s)
=== RUN   TestPageMergeCorrectness3
--- PASS: TestPageMergeCorrectness3 (0.00s)
=== RUN   TestPageHasDataRecords
--- PASS: TestPageHasDataRecords (0.00s)
=== RUN   TestPlasmaPageVisitor
--- PASS: TestPlasmaPageVisitor (4.59s)
=== RUN   TestPageRingVisitor
--- PASS: TestPageRingVisitor (4.33s)
=== RUN   TestPauseVisitorOnLowMemory
--- PASS: TestPauseVisitorOnLowMemory (1.10s)
=== RUN   TestCheckpointRecovery
--- PASS: TestCheckpointRecovery (7.96s)
=== RUN   TestPageCorruption
--- PASS: TestPageCorruption (0.81s)
=== RUN   TestCheckPointRecoveryFollowCleaning
--- PASS: TestCheckPointRecoveryFollowCleaning (0.08s)
=== RUN   TestFragmentationWithZeroItems
--- PASS: TestFragmentationWithZeroItems (1.15s)
=== RUN   TestEvictOnPersist
--- PASS: TestEvictOnPersist (0.15s)
=== RUN   TestPlasmaSimple
--- PASS: TestPlasmaSimple (13.30s)
=== RUN   TestPlasmaCompression
--- PASS: TestPlasmaCompression (0.05s)
=== RUN   TestPlasmaCompressionWrong
--- PASS: TestPlasmaCompressionWrong (0.03s)
=== RUN   TestPlasmaInMemCompression
--- PASS: TestPlasmaInMemCompression (0.02s)
=== RUN   TestPlasmaInMemCompressionZstd
--- PASS: TestPlasmaInMemCompressionZstd (0.02s)
=== RUN   TestPlasmaInMemCompressionWrong
--- PASS: TestPlasmaInMemCompressionWrong (0.02s)
=== RUN   TestSpoiledConfig
--- PASS: TestSpoiledConfig (0.04s)
=== RUN   TestPlasmaErrorFile
--- PASS: TestPlasmaErrorFile (0.04s)
=== RUN   TestPlasmaPersistor
--- PASS: TestPlasmaPersistor (9.81s)
=== RUN   TestPlasmaEvictionLSSDataSize
--- PASS: TestPlasmaEvictionLSSDataSize (0.04s)
=== RUN   TestPlasmaEviction
--- PASS: TestPlasmaEviction (29.41s)
=== RUN   TestConcurrDelOps
--- PASS: TestConcurrDelOps (72.05s)
=== RUN   TestPlasmaDataSize
--- PASS: TestPlasmaDataSize (0.04s)
=== RUN   TestLargeBasePage
--- PASS: TestLargeBasePage (60.63s)
=== RUN   TestLargeValue
--- PASS: TestLargeValue (101.81s)
=== RUN   TestPlasmaTooLargeKey
--- PASS: TestPlasmaTooLargeKey (3.20s)
=== RUN   TestEvictAfterMerge
--- PASS: TestEvictAfterMerge (0.11s)
=== RUN   TestEvictDirty
--- PASS: TestEvictDirty (0.16s)
=== RUN   TestEvictUnderQuota
--- PASS: TestEvictUnderQuota (60.12s)
=== RUN   TestEvictSetting
--- PASS: TestEvictSetting (1.19s)
=== RUN   TestBasePageAfterCompaction
--- PASS: TestBasePageAfterCompaction (0.12s)
=== RUN   TestSwapout
--- PASS: TestSwapout (0.03s)
=== RUN   TestSwapoutSplitBasePage
--- PASS: TestSwapoutSplitBasePage (0.03s)
=== RUN   TestCompactFullMarshal
--- PASS: TestCompactFullMarshal (0.05s)
=== RUN   TestPageStats
--- PASS: TestPageStats (2.11s)
=== RUN   TestPageStatsTinyIndex
--- PASS: TestPageStatsTinyIndex (0.15s)
=== RUN   TestPageStatsTinyIndexOnRecovery
--- PASS: TestPageStatsTinyIndexOnRecovery (0.08s)
=== RUN   TestPageStatsTinyIndexOnSplitAndMerge
--- PASS: TestPageStatsTinyIndexOnSplitAndMerge (0.07s)
=== RUN   TestPageCompress
--- PASS: TestPageCompress (0.05s)
=== RUN   TestPageCompressSwapin
--- PASS: TestPageCompressSwapin (0.05s)
=== RUN   TestPageCompressStats
--- PASS: TestPageCompressStats (0.66s)
=== RUN   TestPageDecompressStats
--- PASS: TestPageDecompressStats (0.04s)
=== RUN   TestSharedDedicatedDataSize
--- PASS: TestSharedDedicatedDataSize (3.59s)
=== RUN   TestLastRpSns
--- PASS: TestLastRpSns (0.04s)
=== RUN   TestPageCompressState
--- PASS: TestPageCompressState (0.06s)
=== RUN   TestPageCompressDuringBurst
--- PASS: TestPageCompressDuringBurst (0.06s)
=== RUN   TestPageDontDecompressDuringScan
--- PASS: TestPageDontDecompressDuringScan (0.11s)
=== RUN   TestPageDecompressAndCompressSwapin
--- PASS: TestPageDecompressAndCompressSwapin (2.06s)
=== RUN   TestPageCompressibleStat
--- PASS: TestPageCompressibleStat (0.53s)
=== RUN   TestPageCompressibleStatRecovery
--- PASS: TestPageCompressibleStatRecovery (0.19s)
=== RUN   TestPageCompressBeforeEvictPercent
--- PASS: TestPageCompressBeforeEvictPercent (0.72s)
=== RUN   TestPageCompressDecompressAfterDisable
--- PASS: TestPageCompressDecompressAfterDisable (0.72s)
=== RUN   TestWrittenDataSz
--- PASS: TestWrittenDataSz (3.34s)
=== RUN   TestWrittenDataSzAfterRecoveryCleaning
--- PASS: TestWrittenDataSzAfterRecoveryCleaning (3.81s)
=== RUN   TestWrittenHdrSz
--- PASS: TestWrittenHdrSz (3.17s)
=== RUN   TestPersistConfigUpgrade
--- PASS: TestPersistConfigUpgrade (0.00s)
=== RUN   TestLSSSegmentSize
--- PASS: TestLSSSegmentSize (0.23s)
=== RUN   TestPlasmaFlushBufferSzCfg
--- PASS: TestPlasmaFlushBufferSzCfg (0.11s)
=== RUN   TestCompactionCountwithCompactFullMarshal
--- PASS: TestCompactionCountwithCompactFullMarshal (0.12s)
=== RUN   TestCompactionCountwithCompactFullMarshalSMO
--- PASS: TestCompactionCountwithCompactFullMarshalSMO (0.03s)
=== RUN   TestPageHasDataRecordsOnCompactFullMarshal
--- PASS: TestPageHasDataRecordsOnCompactFullMarshal (0.08s)
=== RUN   TestPauseReaderOnLowMemory
--- PASS: TestPauseReaderOnLowMemory (1.05s)
=== RUN   TestRecoveryCleanerFragRatio
--- PASS: TestRecoveryCleanerFragRatio (219.60s)
=== RUN   TestRecoveryCleanerRelocation
--- PASS: TestRecoveryCleanerRelocation (219.67s)
=== RUN   TestRecoveryCleanerDataSize
--- PASS: TestRecoveryCleanerDataSize (224.62s)
=== RUN   TestRecoveryCleanerDeleteInstance
--- PASS: TestRecoveryCleanerDeleteInstance (440.55s)
=== RUN   TestRecoveryCleanerRecoveryPoint
--- PASS: TestRecoveryCleanerRecoveryPoint (27.48s)
=== RUN   TestRecoveryCleanerCorruptInstance
--- PASS: TestRecoveryCleanerCorruptInstance (0.18s)
=== RUN   TestRecoveryCleanerAhead
--- PASS: TestRecoveryCleanerAhead (4.25s)
=== RUN   TestRecoveryCleanerAheadAfterRecovery
--- PASS: TestRecoveryCleanerAheadAfterRecovery (2.22s)
=== RUN   TestCleaningUncommittedData
--- PASS: TestCleaningUncommittedData (0.03s)
=== RUN   TestPlasmaRecoverySimple
--- PASS: TestPlasmaRecoverySimple (0.05s)
=== RUN   TestPlasmaRecovery
--- PASS: TestPlasmaRecovery (24.61s)
=== RUN   TestShardRecoveryShared
--- PASS: TestShardRecoveryShared (10.79s)
=== RUN   TestShardRecoveryRecoveryLogAhead
--- PASS: TestShardRecoveryRecoveryLogAhead (32.62s)
=== RUN   TestShardRecoveryDataLogAhead
--- PASS: TestShardRecoveryDataLogAhead (21.99s)
=== RUN   TestShardRecoveryDestroyBlksInDataLog
--- PASS: TestShardRecoveryDestroyBlksInDataLog (9.73s)
=== RUN   TestShardRecoveryDestroyBlksInRecoveryLog
--- PASS: TestShardRecoveryDestroyBlksInRecoveryLog (10.31s)
=== RUN   TestShardRecoveryDestroyBlksInBothLog
--- PASS: TestShardRecoveryDestroyBlksInBothLog (9.71s)
=== RUN   TestShardRecoveryRecoveryLogCorruption
--- PASS: TestShardRecoveryRecoveryLogCorruption (9.41s)
=== RUN   TestShardRecoveryDataLogCorruption
--- PASS: TestShardRecoveryDataLogCorruption (10.56s)
=== RUN   TestShardRecoverySharedNoRP
--- PASS: TestShardRecoverySharedNoRP (10.53s)
=== RUN   TestShardRecoveryNotEnoughMem
--- PASS: TestShardRecoveryNotEnoughMem (33.39s)
=== RUN   TestShardRecoveryCleanup
--- PASS: TestShardRecoveryCleanup (0.42s)
=== RUN   TestShardRecoveryRebuildSharedLog
--- PASS: TestShardRecoveryRebuildSharedLog (1.22s)
=== RUN   TestShardRecoveryUpgradeWithCheckpoint
--- PASS: TestShardRecoveryUpgradeWithCheckpoint (0.46s)
=== RUN   TestShardRecoveryUpgradeWithLogReplay
--- PASS: TestShardRecoveryUpgradeWithLogReplay (0.40s)
=== RUN   TestShardRecoveryRebuildAfterError
--- PASS: TestShardRecoveryRebuildAfterError (1.17s)
=== RUN   TestShardRecoveryRebuildAfterConcurrentDelete
--- PASS: TestShardRecoveryRebuildAfterConcurrentDelete (1.69s)
=== RUN   TestShardRecoveryAfterDeleteInstance
--- PASS: TestShardRecoveryAfterDeleteInstance (0.14s)
=== RUN   TestShardRecoveryDestroyShard
--- PASS: TestShardRecoveryDestroyShard (0.24s)
=== RUN   TestHeaderRepair
--- PASS: TestHeaderRepair (0.06s)
=== RUN   TestCheckpointWithWriter
--- PASS: TestCheckpointWithWriter (3.60s)
=== RUN   TestPlasmaRecoveryWithRepairFullReplay
--- PASS: TestPlasmaRecoveryWithRepairFullReplay (31.16s)
=== RUN   TestPlasmaRecoveryWithInsertRepairCheckpoint
--- PASS: TestPlasmaRecoveryWithInsertRepairCheckpoint (25.97s)
=== RUN   TestPlasmaRecoveryWithDeleteRepairCheckpoint
--- PASS: TestPlasmaRecoveryWithDeleteRepairCheckpoint (14.37s)
=== RUN   TestShardRecoverySharedFullReplayOnError
--- PASS: TestShardRecoverySharedFullReplayOnError (12.27s)
=== RUN   TestShardRecoveryDedicatedFullReplayOnError
--- PASS: TestShardRecoveryDedicatedFullReplayOnError (12.27s)
=== RUN   TestShardRecoverySharedFullReplayOnErrorWithRepair
--- PASS: TestShardRecoverySharedFullReplayOnErrorWithRepair (14.62s)
=== RUN   TestGlobalWorkContextForRecovery
--- PASS: TestGlobalWorkContextForRecovery (0.33s)
=== RUN   TestSkipLogSimple
--- PASS: TestSkipLogSimple (0.00s)
=== RUN   TestSkipLogLoadStore
--- PASS: TestSkipLogLoadStore (0.00s)
=== RUN   TestShardMetadata
--- PASS: TestShardMetadata (0.04s)
=== RUN   TestPlasmaId
--- PASS: TestPlasmaId (0.03s)
=== RUN   TestShardPersistence
--- PASS: TestShardPersistence (0.21s)
=== RUN   TestShardDestroy
--- PASS: TestShardDestroy (0.05s)
=== RUN   TestShardClose
--- PASS: TestShardClose (5.04s)
=== RUN   TestShardMgrRecovery
--- PASS: TestShardMgrRecovery (0.09s)
=== RUN   TestShardDeadData
--- PASS: TestShardDeadData (0.23s)
=== RUN   TestShardConfigUpdate
--- PASS: TestShardConfigUpdate (0.05s)
=== RUN   TestShardSelection
--- PASS: TestShardSelection (0.09s)
=== RUN   TestShardWriteAmp
--- PASS: TestShardWriteAmp (10.14s)
=== RUN   TestShardStats
--- PASS: TestShardStats (0.17s)
=== RUN   TestShardMultipleWriters
--- PASS: TestShardMultipleWriters (0.15s)
=== RUN   TestShardDestroyMultiple
--- PASS: TestShardDestroyMultiple (0.13s)
=== RUN   TestShardBackupCorrupted
--- PASS: TestShardBackupCorrupted (0.10s)
=== RUN   TestShardBackupCorruptedShare
--- PASS: TestShardBackupCorruptedShare (0.08s)
=== RUN   TestShardCorruption
--- PASS: TestShardCorruption (0.06s)
=== RUN   TestShardCorruptionAddInstance
--- PASS: TestShardCorruptionAddInstance (0.12s)
=== RUN   TestShardCreateError
--- PASS: TestShardCreateError (0.23s)
=== RUN   TestShardNumInsts
--- PASS: TestShardNumInsts (1.34s)
=== RUN   TestShardInstanceGroup
--- PASS: TestShardInstanceGroup (0.07s)
=== RUN   TestShardLeak
--- PASS: TestShardLeak (1.74s)
=== RUN   TestShardMemLeak
--- PASS: TestShardMemLeak (0.74s)
=== RUN   TestShardFind
--- PASS: TestShardFind (0.22s)
=== RUN   TestShardFileOpenDescCount
--- PASS: TestShardFileOpenDescCount (58.09s)
=== RUN   TestSMRSimple
--- PASS: TestSMRSimple (1.11s)
=== RUN   TestSMRConcurrent
--- PASS: TestSMRConcurrent (47.72s)
=== RUN   TestSMRComplex
--- PASS: TestSMRComplex (109.39s)
=== RUN   TestDGMWithCASConflicts
--- PASS: TestDGMWithCASConflicts (31.69s)
=== RUN   TestMaxSMRPendingMem
--- PASS: TestMaxSMRPendingMem (0.02s)
=== RUN   TestStatsLogger
--- PASS: TestStatsLogger (20.32s)
=== RUN   TestStatsSamplePercentile
--- PASS: TestStatsSamplePercentile (0.02s)
=== RUN   TestPlasmaSwapper
--- PASS: TestPlasmaSwapper (21.54s)
=== RUN   TestPlasmaAutoSwapper
--- PASS: TestPlasmaAutoSwapper (84.97s)
=== RUN   TestSwapperAddInstance
--- PASS: TestSwapperAddInstance (4.08s)
=== RUN   TestSwapperRemoveInstance
--- PASS: TestSwapperRemoveInstance (4.18s)
=== RUN   TestSwapperJoinContext
--- PASS: TestSwapperJoinContext (4.54s)
=== RUN   TestSwapperSplitContext
--- PASS: TestSwapperSplitContext (4.54s)
=== RUN   TestSwapperGlobalClock
--- PASS: TestSwapperGlobalClock (29.55s)
=== RUN   TestSwapperConflict
--- PASS: TestSwapperConflict (2.78s)
=== RUN   TestSwapperRemoveInstanceWait
--- PASS: TestSwapperRemoveInstanceWait (3.43s)
=== RUN   TestSwapperStats
--- PASS: TestSwapperStats (0.95s)
=== RUN   TestSwapperSweepInterval
--- PASS: TestSwapperSweepInterval (0.45s)
=== RUN   TestSweepCompress
--- PASS: TestSweepCompress (0.06s)
=== RUN   TestTenantShardAssignment
--- PASS: TestTenantShardAssignment (2.92s)
=== RUN   TestTenantShardAssignmentServerless
--- PASS: TestTenantShardAssignmentServerless (13.49s)
=== RUN   TestTenantShardAssignmentDedicated
--- PASS: TestTenantShardAssignmentDedicated (1.55s)
=== RUN   TestTenantShardAssignmentDedicatedMainBackIndexes
--- PASS: TestTenantShardAssignmentDedicatedMainBackIndexes (0.10s)
=== RUN   TestTenantShardRecovery
--- PASS: TestTenantShardRecovery (2.91s)
=== RUN   TestTenantMemUsed
--- PASS: TestTenantMemUsed (2.63s)
=== RUN   TestTenantSwitchController
--- PASS: TestTenantSwitchController (0.10s)
=== RUN   TestTenantAssignMandatoryQuota
--- PASS: TestTenantAssignMandatoryQuota (0.42s)
=== RUN   TestTenantMutationQuota
--- PASS: TestTenantMutationQuota (0.04s)
=== RUN   TestTenantInitialBuildQuota
--- PASS: TestTenantInitialBuildQuota (0.04s)
=== RUN   TestTenantInitialBuildNonDGM
--- PASS: TestTenantInitialBuildNonDGM (1.97s)
=== RUN   TestTenantInitialBuildDGM
--- PASS: TestTenantInitialBuildDGM (1.91s)
=== RUN   TestTenantInitialBuildZeroResident
--- PASS: TestTenantInitialBuildZeroResident (1.88s)
=== RUN   TestTenantIncrementalBuildDGM
--- PASS: TestTenantIncrementalBuildDGM (3.06s)
=== RUN   TestTenantInitialBuildTwoTenants
--- PASS: TestTenantInitialBuildTwoTenants (3.03s)
=== RUN   TestTenantInitialBuildTwoControllers
--- PASS: TestTenantInitialBuildTwoControllers (3.04s)
=== RUN   TestTenantIncrementalBuildTwoIndexes
--- PASS: TestTenantIncrementalBuildTwoIndexes (0.35s)
=== RUN   TestTenantIncrementalBuildConcurrent
--- PASS: TestTenantIncrementalBuildConcurrent (2.78s)
=== RUN   TestTenantDecrementGlobalQuota
--- PASS: TestTenantDecrementGlobalQuota (2.26s)
=== RUN   TestTenantInitialBuildNotEnoughQuota
--- PASS: TestTenantInitialBuildNotEnoughQuota (3.03s)
=== RUN   TestTenantRecoveryResidentRatioHeaderReplay
--- PASS: TestTenantRecoveryResidentRatioHeaderReplay (0.13s)
=== RUN   TestTenantRecoveryResidentRatioDataReplay
--- PASS: TestTenantRecoveryResidentRatioDataReplay (0.21s)
=== RUN   TestTenantRecoveryController
--- PASS: TestTenantRecoveryController (1.57s)
=== RUN   TestTenantRecoveryQuotaWithLastCheckpoint
--- PASS: TestTenantRecoveryQuotaWithLastCheckpoint (0.77s)
=== RUN   TestTenantRecoveryQuotaZeroResidentWithLastCheckpoint
--- PASS: TestTenantRecoveryQuotaZeroResidentWithLastCheckpoint (3.17s)
=== RUN   TestTenantRecoveryQuotaWithFormula
--- PASS: TestTenantRecoveryQuotaWithFormula (3.10s)
=== RUN   TestTenantRecoveryQuotaWithDataReplay
--- PASS: TestTenantRecoveryQuotaWithDataReplay (6.68s)
=== RUN   TestTenantRecoveryEvictionNoCheckpoint
--- PASS: TestTenantRecoveryEvictionNoCheckpoint (14.67s)
=== RUN   TestTenantRecoveryEvictionHeaderReplay
--- PASS: TestTenantRecoveryEvictionHeaderReplay (8.92s)
=== RUN   TestTenantRecoveryEvictionDataReplaySequential
--- PASS: TestTenantRecoveryEvictionDataReplaySequential (8.26s)
=== RUN   TestTenantRecoveryEvictionDataReplayInterleaved
--- PASS: TestTenantRecoveryEvictionDataReplayInterleaved (9.99s)
=== RUN   TestTenantRecoveryEvictionDataReplayNoCheckpoint
--- PASS: TestTenantRecoveryEvictionDataReplayNoCheckpoint (10.08s)
=== RUN   TestTenantRecoveryEvictionDataReplaySingle
--- PASS: TestTenantRecoveryEvictionDataReplaySingle (4.28s)
=== RUN   TestTenantRecoveryLastCheckpoint
--- PASS: TestTenantRecoveryLastCheckpoint (5.56s)
=== RUN   TestTenantRecoveryRequestQuota
--- PASS: TestTenantRecoveryRequestQuota (2.53s)
=== RUN   TestTenantAssignDiscretionaryQuota
--- PASS: TestTenantAssignDiscretionaryQuota (0.38s)
=== RUN   TestSCtx
--- PASS: TestSCtx (16.88s)
=== RUN   TestWCtxGeneric
--- PASS: TestWCtxGeneric (45.49s)
=== RUN   TestWCtxWriter
--- PASS: TestWCtxWriter (46.58s)
=== RUN   TestSCtxTrimWithReader
--- PASS: TestSCtxTrimWithReader (0.05s)
=== RUN   TestSCtxTrimWithWriter
--- PASS: TestSCtxTrimWithWriter (0.03s)
=== RUN   TestSCtxTrimEmpty
--- PASS: TestSCtxTrimEmpty (0.02s)
=== RUN   TestWCtxTrimWithReader
--- PASS: TestWCtxTrimWithReader (0.03s)
=== RUN   TestWCtxTrimWithWriter
--- PASS: TestWCtxTrimWithWriter (0.03s)
--- PASS: TestExtrasN1 (0.00s)
--- PASS: TestExtrasN3 (0.00s)
--- PASS: TestExtrasN2 (0.00s)
PASS
ok  	github.com/couchbase/plasma	3729.419s
=== RUN   TestInteger
--- PASS: TestInteger (0.00s)
=== RUN   TestSmallDecimal
--- PASS: TestSmallDecimal (0.00s)
=== RUN   TestLargeDecimal
--- PASS: TestLargeDecimal (0.00s)
=== RUN   TestFloat
--- PASS: TestFloat (0.00s)
=== RUN   TestSuffixCoding
--- PASS: TestSuffixCoding (0.00s)
=== RUN   TestCodecLength
--- PASS: TestCodecLength (0.00s)
=== RUN   TestSpecialString
--- PASS: TestSpecialString (0.00s)
=== RUN   TestCodecNoLength
--- PASS: TestCodecNoLength (0.00s)
=== RUN   TestCodecJSON
--- PASS: TestCodecJSON (0.00s)
=== RUN   TestReference
--- PASS: TestReference (0.00s)
=== RUN   TestN1QLEncode
--- PASS: TestN1QLEncode (0.00s)
=== RUN   TestArrayExplodeJoin
--- PASS: TestArrayExplodeJoin (0.00s)
=== RUN   TestN1QLDecode
--- PASS: TestN1QLDecode (0.00s)
=== RUN   TestN1QLDecode2
--- PASS: TestN1QLDecode2 (0.00s)
=== RUN   TestArrayExplodeJoin2
--- PASS: TestArrayExplodeJoin2 (0.00s)
=== RUN   TestMB28956
--- PASS: TestMB28956 (0.00s)
=== RUN   TestFixEncodedInt
--- PASS: TestFixEncodedInt (0.00s)
=== RUN   TestN1QLDecodeLargeInt64
--- PASS: TestN1QLDecodeLargeInt64 (0.00s)
=== RUN   TestMixedModeFixEncodedInt
TESTING [4111686018427387900, -8223372036854775808, 822337203685477618] 
PASS 
TESTING [0] 
PASS 
TESTING [0.0] 
PASS 
TESTING [0.0000] 
PASS 
TESTING [0.0000000] 
PASS 
TESTING [-0] 
PASS 
TESTING [-0.0] 
PASS 
TESTING [-0.0000] 
PASS 
TESTING [-0.0000000] 
PASS 
TESTING [1] 
PASS 
TESTING [20] 
PASS 
TESTING [3456] 
PASS 
TESTING [7645000] 
PASS 
TESTING [9223372036854775807] 
PASS 
TESTING [9223372036854775806] 
PASS 
TESTING [9223372036854775808] 
PASS 
TESTING [92233720368547758071234000] 
PASS 
TESTING [92233720368547758071234987437653] 
PASS 
TESTING [12300000000000000000000000000000056] 
PASS 
TESTING [12300000000000000000000000000000000] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [12300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [210690] 
PASS 
TESTING [90000] 
PASS 
TESTING [123000000] 
PASS 
TESTING [3.60e2] 
PASS 
TESTING [36e2] 
PASS 
TESTING [1.9999999999e10] 
PASS 
TESTING [1.99999e10] 
PASS 
TESTING [1.99999e5] 
PASS 
TESTING [0.00000000000012e15] 
PASS 
TESTING [7.64507352e8] 
PASS 
TESTING [9.2233720368547758071234987437653e31] 
PASS 
TESTING [2650e-1] 
PASS 
TESTING [26500e-1] 
PASS 
TESTING [-1] 
PASS 
TESTING [-20] 
PASS 
TESTING [-3456] 
PASS 
TESTING [-7645000] 
PASS 
TESTING [-9223372036854775808] 
PASS 
TESTING [-9223372036854775807] 
PASS 
TESTING [-9223372036854775806] 
PASS 
TESTING [-9223372036854775809] 
PASS 
TESTING [-92233720368547758071234000] 
PASS 
TESTING [-92233720368547758071234987437653] 
PASS 
TESTING [-12300000000000000000000000000000056] 
PASS 
TESTING [-12300000000000000000000000000000000] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [-210690] 
PASS 
TESTING [-90000] 
PASS 
TESTING [-123000000] 
PASS 
TESTING [-3.60e2] 
PASS 
TESTING [-36e2] 
PASS 
TESTING [-1.9999999999e10] 
PASS 
TESTING [-1.99999e10] 
PASS 
TESTING [-1.99999e5] 
PASS 
TESTING [-0.00000000000012e15] 
PASS 
TESTING [-2650e-1] 
PASS 
TESTING [-26500e-1] 
PASS 
TESTING [0.03] 
PASS 
TESTING [198.60] 
PASS 
TESTING [2000045.178] 
PASS 
TESTING [1.7976931348623157e+308] 
PASS 
TESTING [0.000000000000000000890] 
PASS 
TESTING [257953786.9864236576] 
PASS 
TESTING [257953786.9864236576e8] 
PASS 
TESTING [36.912e3] 
PASS 
TESTING [2761.67e0] 
PASS 
TESTING [2761.67e00] 
PASS 
TESTING [2761.67e000] 
PASS 
TESTING [7676546.67e-3] 
PASS 
TESTING [-0.03] 
PASS 
TESTING [-198.60] 
PASS 
TESTING [-2000045.178] 
PASS 
TESTING [-1.7976931348623157e+308] 
PASS 
TESTING [-0.000000000000000000890] 
PASS 
TESTING [-257953786.9864236576] 
PASS 
TESTING [-257953786.9864236576e8] 
PASS 
TESTING [-36.912e3] 
PASS 
TESTING [-2761.67e0] 
PASS 
TESTING [-2761.67e00] 
PASS 
TESTING [-2761.67e000] 
PASS 
TESTING [-7676546.67e-3] 
PASS 
--- PASS: TestMixedModeFixEncodedInt (0.01s)
=== RUN   TestCodecDesc
--- PASS: TestCodecDesc (0.00s)
=== RUN   TestCodecDescPropLen
--- PASS: TestCodecDescPropLen (0.00s)
=== RUN   TestCodecDescSplChar
--- PASS: TestCodecDescSplChar (0.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/collatejson	0.036s
Initializing write barrier = 8000
=== RUN   TestForestDBIterator
2022-09-02T08:44:54.484+05:30 [INFO][FDB] Forestdb blockcache size 134217728 initialized in 5763 us

2022-09-02T08:44:54.485+05:30 [INFO][FDB] Forestdb opened database file test
2022-09-02T08:44:54.488+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIterator (0.01s)
=== RUN   TestForestDBIteratorSeek
2022-09-02T08:44:54.489+05:30 [INFO][FDB] Forestdb opened database file test
2022-09-02T08:44:54.492+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIteratorSeek (0.00s)
=== RUN   TestPrimaryIndexEntry
--- PASS: TestPrimaryIndexEntry (0.00s)
=== RUN   TestSecondaryIndexEntry
--- PASS: TestSecondaryIndexEntry (0.00s)
=== RUN   TestPrimaryIndexEntryMatch
--- PASS: TestPrimaryIndexEntryMatch (0.00s)
=== RUN   TestSecondaryIndexEntryMatch
--- PASS: TestSecondaryIndexEntryMatch (0.00s)
=== RUN   TestLongDocIdEntry
--- PASS: TestLongDocIdEntry (0.00s)
=== RUN   TestMemDBInsertionPerf
Maximum number of file descriptors = 200000
Set IO Concurrency: 7200
Initial build: 10000000 items took 2m0.76971589s -> 82802.21516053117 items/s
Incr build: 10000000 items took 35.809767823s -> 279253.4162585989 items/s
Main Index: {
"node_count":             12379426,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       3,
"next_pointers_per_node": 1.3334,
"memory_used":            1166279580,
"node_allocs":            12379426,
"node_frees":             0,
"level_node_distribution":{
"level0": 9284419,
"level1": 2321142,
"level2": 580189,
"level3": 145229,
"level4": 36307,
"level5": 9137,
"level6": 2249,
"level7": 550,
"level8": 162,
"level9": 29,
"level10": 11,
"level11": 1,
"level12": 1,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Back Index 0 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 1 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 2 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 3 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 4 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 5 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 6 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 7 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 8 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 9 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 10 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 11 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 12 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 13 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 14 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 15 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
--- PASS: TestMemDBInsertionPerf (156.58s)
=== RUN   TestBasicsA
--- PASS: TestBasicsA (0.00s)
=== RUN   TestSizeA
--- PASS: TestSizeA (0.01s)
=== RUN   TestSizeWithFreelistA
--- PASS: TestSizeWithFreelistA (0.01s)
=== RUN   TestDequeueUptoSeqnoA
--- PASS: TestDequeueUptoSeqnoA (0.10s)
=== RUN   TestDequeueA
--- PASS: TestDequeueA (1.21s)
=== RUN   TestMultipleVbucketsA
--- PASS: TestMultipleVbucketsA (0.00s)
=== RUN   TestDequeueUptoFreelistA
--- PASS: TestDequeueUptoFreelistA (0.00s)
=== RUN   TestDequeueUptoFreelistMultVbA
--- PASS: TestDequeueUptoFreelistMultVbA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA
--- PASS: TestConcurrentEnqueueDequeueA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA1
--- PASS: TestConcurrentEnqueueDequeueA1 (10.01s)
=== RUN   TestEnqueueAppCh
--- PASS: TestEnqueueAppCh (2.00s)
=== RUN   TestDequeueN
--- PASS: TestDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN
--- PASS: TestConcurrentEnqueueDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN1
--- PASS: TestConcurrentEnqueueDequeueN1 (10.01s)
PASS
ok  	github.com/couchbase/indexing/secondary/indexer	180.555s
=== RUN   TestConnPoolBasicSanity
2022-09-02T08:47:58.138+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 3 overflow 6 low WM 3 relConn batch size 1 ...
2022-09-02T08:47:58.346+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:47:59.139+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnPoolBasicSanity (5.00s)
=== RUN   TestConnRelease
2022-09-02T08:48:03.142+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Waiting for connections to get released
Waiting for more connections to get released
Waiting for still more connections to get released
2022-09-02T08:48:42.924+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:48:43.160+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnRelease (43.78s)
=== RUN   TestLongevity
2022-09-02T08:48:46.926+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Releasing 1 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 4 conns.
Getting 1 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 3 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 2 conns.
Getting 1 conns.
Releasing 0 conns.
Getting 1 conns.
Releasing 3 conns.
Getting 3 conns.
Releasing 2 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
2022-09-02T08:49:25.326+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:49:25.947+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLongevity (42.40s)
=== RUN   TestSustainedHighConns
2022-09-02T08:49:29.328+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Allocating 16 Connections
cp.curActConns = 0
Returning 3 Connections
cp.curActConns = 11
Returning 2 Connections
cp.curActConns = 11
Allocating 6 Connections
Returning 4 Connections
cp.curActConns = 13
Returning 1 Connections
Allocating 12 Connections
cp.curActConns = 22
Returning 1 Connections
cp.curActConns = 23
Allocating 10 Connections
Returning 1 Connections
cp.curActConns = 32
Returning 3 Connections
Allocating 15 Connections
cp.curActConns = 33
Returning 4 Connections
cp.curActConns = 40
Returning 3 Connections
Allocating 8 Connections
cp.curActConns = 44
Returning 2 Connections
cp.curActConns = 43
Allocating 3 Connections
Returning 4 Connections
cp.curActConns = 42
Returning 4 Connections
Allocating 3 Connections
cp.curActConns = 41
Returning 2 Connections
Allocating 21 Connections
cp.curActConns = 56
Returning 4 Connections
cp.curActConns = 56
Allocating 24 Connections
Returning 0 Connections
cp.curActConns = 67
Returning 0 Connections
cp.curActConns = 79
Returning 3 Connections
cp.curActConns = 77
Allocating 13 Connections
Returning 3 Connections
cp.curActConns = 87
Returning 0 Connections
Allocating 1 Connections
cp.curActConns = 88
Returning 0 Connections
Allocating 5 Connections
cp.curActConns = 90
Returning 1 Connections
cp.curActConns = 92
Returning 1 Connections
Allocating 3 Connections
cp.curActConns = 94
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 95
Returning 3 Connections
Allocating 21 Connections
cp.curActConns = 101
Returning 3 Connections
cp.curActConns = 110
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 111
Returning 3 Connections
Allocating 22 Connections
cp.curActConns = 112
Returning 4 Connections
cp.curActConns = 124
Returning 2 Connections
cp.curActConns = 124
Allocating 13 Connections
Returning 1 Connections
cp.curActConns = 136
Returning 0 Connections
Allocating 23 Connections
cp.curActConns = 136
Returning 3 Connections
cp.curActConns = 148
Returning 2 Connections
cp.curActConns = 154
Returning 3 Connections
Allocating 16 Connections
cp.curActConns = 160
Returning 4 Connections
cp.curActConns = 163
Returning 3 Connections
Allocating 18 Connections
cp.curActConns = 171
Returning 1 Connections
cp.curActConns = 177
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 178
Returning 1 Connections
Allocating 21 Connections
cp.curActConns = 182
Returning 2 Connections
cp.curActConns = 193
Returning 0 Connections
cp.curActConns = 196
Returning 3 Connections
Allocating 2 Connections
cp.curActConns = 195
Returning 3 Connections
Allocating 5 Connections
cp.curActConns = 197
Returning 3 Connections
Allocating 0 Connections
cp.curActConns = 194
Returning 1 Connections
Allocating 15 Connections
cp.curActConns = 205
Returning 2 Connections
cp.curActConns = 206
Returning 2 Connections
Allocating 0 Connections
cp.curActConns = 204
Allocating 0 Connections
Returning 0 Connections
cp.curActConns = 204
Returning 0 Connections
Allocating 2 Connections
cp.curActConns = 206
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 206
Allocating 1 Connections
Returning 2 Connections
cp.curActConns = 205
Allocating 4 Connections
Returning 2 Connections
cp.curActConns = 207
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 207
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 210
Returning 1 Connections
Allocating 2 Connections
cp.curActConns = 211
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 214
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 213
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 211
Returning 4 Connections
Allocating 1 Connections
cp.curActConns = 208
Returning 2 Connections
Allocating 0 Connections
cp.curActConns = 206
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 206
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 208
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 208
Returning 4 Connections
Allocating 3 Connections
cp.curActConns = 207
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 205
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 205
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 208
Returning 0 Connections
Allocating 1 Connections
cp.curActConns = 209
Returning 1 Connections
Allocating 1 Connections
cp.curActConns = 209
Returning 0 Connections
Allocating 1 Connections
cp.curActConns = 210
Returning 4 Connections
Allocating 4 Connections
cp.curActConns = 210
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 213
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 212
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 213
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 212
Returning 3 Connections
Allocating 4 Connections
cp.curActConns = 213
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 214
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 215
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 215
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 214
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 214
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 212
Returning 2 Connections
Allocating 0 Connections
cp.curActConns = 210
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 213
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 216
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 216
Returning 0 Connections
cp.curActConns = 220
Returning 3 Connections
Allocating 0 Connections
cp.curActConns = 217
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 215
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 215
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 217
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 220
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 219
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 218
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 221
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 220
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 219
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 223
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 223
Returning from startDeallocatorRoutine
Returning from startAllocatorRoutine
2022-09-02T08:50:24.400+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:50:25.363+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestSustainedHighConns (59.07s)
=== RUN   TestLowWM
2022-09-02T08:50:28.402+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 20 overflow 5 low WM 10 relConn batch size 2 ...
2022-09-02T08:51:28.418+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2022-09-02T08:52:28.436+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2022-09-02T08:52:33.912+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:52:34.439+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLowWM (129.51s)
=== RUN   TestTotalConns
2022-09-02T08:52:37.914+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 120 overflow 5 low WM 10 relConn batch size 10 ...
2022-09-02T08:52:52.095+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:52:52.922+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestTotalConns (18.18s)
=== RUN   TestUpdateTickRate
2022-09-02T08:52:56.096+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 40 overflow 5 low WM 2 relConn batch size 2 ...
2022-09-02T08:53:16.946+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2022-09-02T08:53:17.108+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestUpdateTickRate (24.85s)
PASS
ok  	github.com/couchbase/indexing/secondary/queryport/client	322.864s
Starting server: attempt 1

Functional tests

2022/09/02 08:55:32 In TestMain()
2022/09/02 08:55:32 otp node fetch error: json: cannot unmarshal string into Go value of type couchbase.Pool
2022/09/02 08:55:32 Initialising services with role: kv,n1ql on node: 127.0.0.1:9000
2022/09/02 08:55:33 Initialising web UI on node: 127.0.0.1:9000
2022/09/02 08:55:33 InitWebCreds, response is: {"newBaseUri":"http://127.0.0.1:9000/"}
2022/09/02 08:55:34 Setting data quota of 1500M and Index quota of 1500M
2022/09/02 08:55:35 Adding node: https://127.0.0.1:19001 with role: kv,index to the cluster
2022/09/02 08:55:43 AddNode: Successfully added node: 127.0.0.1:9001 (role kv,index), response: {"otpNode":"n_1@127.0.0.1"}
2022/09/02 08:55:48 Rebalance progress: 0
2022/09/02 08:55:53 Rebalance progress: 0
2022/09/02 08:55:58 Rebalance progress: 0
2022/09/02 08:56:03 Rebalance progress: 0
2022/09/02 08:56:08 Rebalance progress: 0
2022/09/02 08:56:13 Rebalance progress: 0
2022/09/02 08:56:18 Rebalance progress: 0
2022/09/02 08:56:23 Rebalance progress: 0
2022/09/02 08:56:28 Rebalance progress: 0
2022/09/02 08:56:33 Rebalance progress: 0
2022/09/02 08:56:38 Rebalance progress: 0
2022/09/02 08:56:43 Rebalance progress: 0
2022/09/02 08:56:48 Rebalance progress: 0
2022/09/02 08:56:53 Rebalance failed. See logs for detailed reason. You can try again.
2022/09/02 08:56:53 Error while initialising cluster: AddNodeAndRebalance: Error during rebalance, err: Rebalance failed
panic: Error while initialising cluster: AddNodeAndRebalance: Error during rebalance, err: Rebalance failed


goroutine 1 [running]:
panic({0xe8aee0, 0xc00051f250})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/panic.go:941 +0x397 fp=0xc0001e1ca8 sp=0xc0001e1be8 pc=0x43b757
log.Panicf({0x1025e70?, 0x20?}, {0xc0001e1df8?, 0xd?, 0xc0001b0e60?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/log/log.go:392 +0x67 fp=0xc0001e1cf0 sp=0xc0001e1ca8 pc=0x5bdfc7
github.com/couchbase/indexing/secondary/tests/framework/common.HandleError(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/framework/common/util.go:76
github.com/couchbase/indexing/secondary/tests/functionaltests.TestMain(0x4461b1?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/functionaltests/common_test.go:94 +0x456 fp=0xc0001e1ec8 sp=0xc0001e1cf0 pc=0xd31a16
main.main()
	_testmain.go:483 +0x1d3 fp=0xc0001e1f80 sp=0xc0001e1ec8 pc=0xe0cf73
runtime.main()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:250 +0x212 fp=0xc0001e1fe0 sp=0xc0001e1f80 pc=0x43e2d2
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001e1fe8 sp=0xc0001e1fe0 pc=0x46ec21

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060fb0 sp=0xc000060f90 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.forcegchelper()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:301 +0xad fp=0xc000060fe0 sp=0xc000060fb0 pc=0x43e52d
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000060fe8 sp=0xc000060fe0 pc=0x46ec21
created by runtime.init.6
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:289 +0x25

goroutine 18 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005c790 sp=0xc00005c770 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgsweep(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc00005c7c8 sp=0xc00005c790 pc=0x4297f7
runtime.gcenable.func1()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x26 fp=0xc00005c7e0 sp=0xc00005c7c8 pc=0x41f3a6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005c7e8 sp=0xc00005c7e0 pc=0x46ec21
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x6b

goroutine 19 [GC scavenge wait]:
runtime.gopark(0x4cab8f330efd?, 0x10000?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005cf20 sp=0xc00005cf00 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgscavenge(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcscavenge.go:364 +0x2a5 fp=0xc00005cfc8 sp=0xc00005cf20 pc=0x427605
runtime.gcenable.func2()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0x26 fp=0xc00005cfe0 sp=0xc00005cfc8 pc=0x41f346
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005cfe8 sp=0xc00005cfe0 pc=0x46ec21
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0xaa

goroutine 34 [finalizer wait]:
runtime.gopark(0x0?, 0x109fd68?, 0x0?, 0x20?, 0x2000000020?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060630 sp=0xc000060610 pc=0x43e696
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.runfinq()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:177 +0xb3 fp=0xc0000607e0 sp=0xc000060630 pc=0x41e453
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000607e8 sp=0xc0000607e0 pc=0x46ec21
created by runtime.createfing
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:157 +0x45

goroutine 20 [select]:
runtime.gopark(0xc00021c798?, 0x2?, 0x47?, 0xe7?, 0xc00021c78c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00021c618 sp=0xc00021c5f8 pc=0x43e696
runtime.selectgo(0xc00021c798, 0xc00021c788, 0x40bfb9?, 0x0, 0x8?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00021c758 sp=0xc00021c618 pc=0x44e112
github.com/couchbase/cbauth/cbauthimpl.(*tlsNotifier).loop(0xc00013c0d8)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:396 +0x67 fp=0xc00021c7c8 sp=0xc00021c758 pc=0x785e07
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func2()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x26 fp=0xc00021c7e0 sp=0xc00021c7c8 pc=0x786a86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00021c7e8 sp=0xc00021c7e0 pc=0x46ec21
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x37a

goroutine 21 [select]:
runtime.gopark(0xc00005d798?, 0x2?, 0x0?, 0x0?, 0xc00005d78c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005d608 sp=0xc00005d5e8 pc=0x43e696
runtime.selectgo(0xc00005d798, 0xc00005d788, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00005d748 sp=0xc00005d608 pc=0x44e112
github.com/couchbase/cbauth/cbauthimpl.(*cfgChangeNotifier).loop(0xc00013c0f0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:316 +0x85 fp=0xc00005d7c8 sp=0xc00005d748 pc=0x785825
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func3()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x26 fp=0xc00005d7e0 sp=0xc00005d7c8 pc=0x786a26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005d7e8 sp=0xc00005d7e0 pc=0x46ec21
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x3ca

goroutine 22 [IO wait]:
runtime.gopark(0xc0001029c0?, 0xc00004e000?, 0x70?, 0x98?, 0x484542?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000549800 sp=0xc0005497e0 pc=0x43e696
runtime.netpollblock(0xc0000a7000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc000549838 sp=0xc000549800 pc=0x437137
internal/poll.runtime_pollWait(0x7f8207952fc8, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc000549858 sp=0xc000549838 pc=0x469209
internal/poll.(*pollDesc).wait(0xc000032100?, 0xc0000a7000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000549880 sp=0xc000549858 pc=0x4a2132
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000032100, {0xc0000a7000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000549900 sp=0xc000549880 pc=0x4a349a
net.(*netFD).Read(0xc000032100, {0xc0000a7000?, 0xc000076530?, 0xc0005499d8?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000549948 sp=0xc000549900 pc=0x669209
net.(*conn).Read(0xc000010020, {0xc0000a7000?, 0x11?, 0xc000549a68?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000549990 sp=0xc000549948 pc=0x679485
bufio.(*Reader).Read(0xc000090240, {0xc000176001, 0x5ff, 0x453934?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:236 +0x1b4 fp=0xc0005499c8 sp=0xc000549990 pc=0x5206f4
github.com/couchbase/cbauth/revrpc.(*minirwc).Read(0x191?, {0xc000176001?, 0x8?, 0x20?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:102 +0x25 fp=0xc0005499f8 sp=0xc0005499c8 pc=0x7d7045
encoding/json.(*Decoder).refill(0xc000135400)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:165 +0x17f fp=0xc000549a48 sp=0xc0005499f8 pc=0x565bbf
encoding/json.(*Decoder).readValue(0xc000135400)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:140 +0xbb fp=0xc000549a98 sp=0xc000549a48 pc=0x5657bb
encoding/json.(*Decoder).Decode(0xc000135400, {0xebbc00, 0xc0001142c0})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:63 +0x78 fp=0xc000549ac8 sp=0xc000549a98 pc=0x565418
net/rpc/jsonrpc.(*serverCodec).ReadRequestHeader(0xc0001142a0, 0xc00011e420)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/jsonrpc/server.go:66 +0x85 fp=0xc000549b08 sp=0xc000549ac8 pc=0x7d62a5
github.com/couchbase/cbauth/revrpc.(*jsonServerCodec).ReadRequestHeader(0xc0001166e0?, 0x4cd388?)
	<autogenerated>:1 +0x2a fp=0xc000549b28 sp=0xc000549b08 pc=0x7d8dca
net/rpc.(*Server).readRequestHeader(0xc0001166e0, {0x11b5fe8, 0xc000110620})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:587 +0x66 fp=0xc000549bf8 sp=0xc000549b28 pc=0x7d59c6
net/rpc.(*Server).readRequest(0x0?, {0x11b5fe8, 0xc000110620})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:547 +0x3b fp=0xc000549cd0 sp=0xc000549bf8 pc=0x7d551b
net/rpc.(*Server).ServeCodec(0xc0001166e0, {0x11b5fe8?, 0xc000110620})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:462 +0x87 fp=0xc000549dc8 sp=0xc000549cd0 pc=0x7d4c47
github.com/couchbase/cbauth/revrpc.(*Service).Run(0xc00005df60?, 0xc000073fa0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:192 +0x5d9 fp=0xc000549f38 sp=0xc000549dc8 pc=0x7d7799
github.com/couchbase/cbauth/revrpc.BabysitService(0x0?, 0x0?, {0x11ac700?, 0xc00000e018?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:288 +0x58 fp=0xc000549f70 sp=0xc000549f38 pc=0x7d7e98
github.com/couchbase/cbauth.runRPCForSvc(0x0?, 0xc00014a000)
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:57 +0xbd fp=0xc000549fc0 sp=0xc000549f70 pc=0x7e1c1d
github.com/couchbase/cbauth.startDefault.func1()
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:66 +0x25 fp=0xc000549fe0 sp=0xc000549fc0 pc=0x7e1f05
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000549fe8 sp=0xc000549fe0 pc=0x46ec21
created by github.com/couchbase/cbauth.startDefault
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:65 +0xf9

goroutine 52 [GC worker (idle)]:
runtime.gopark(0xc0001b0ec0?, 0xc00021cfd0?, 0x51?, 0x79?, 0xc00054c000?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00021cf58 sp=0xc00021cf38 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00021cfe0 sp=0xc00021cf58 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00021cfe8 sp=0xc00021cfe0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 43 [IO wait]:
runtime.gopark(0xc0002176c0?, 0xc000054f00?, 0x68?, 0xb?, 0x484542?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000070af8 sp=0xc000070ad8 pc=0x43e696
runtime.netpollblock(0xc0004d2000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc000070b30 sp=0xc000070af8 pc=0x437137
internal/poll.runtime_pollWait(0x7f8207952ed8, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc000070b50 sp=0xc000070b30 pc=0x469209
internal/poll.(*pollDesc).wait(0xc00019e900?, 0xc0004d2000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc000070b78 sp=0xc000070b50 pc=0x4a2132
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc00019e900, {0xc0004d2000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc000070bf8 sp=0xc000070b78 pc=0x4a349a
net.(*netFD).Read(0xc00019e900, {0xc0004d2000?, 0x40bae9?, 0x4?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc000070c40 sp=0xc000070bf8 pc=0x669209
net.(*conn).Read(0xc00019a890, {0xc0004d2000?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc000070c88 sp=0xc000070c40 pc=0x679485
net/http.(*persistConn).Read(0xc000413b00, {0xc0004d2000?, 0xc0001808a0?, 0xc000070d30?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc000070ce8 sp=0xc000070c88 pc=0x76a6ae
bufio.(*Reader).fill(0xc000411680)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc000070d20 sp=0xc000070ce8 pc=0x520123
bufio.(*Reader).Peek(0xc000411680, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:144 +0x5d fp=0xc000070d40 sp=0xc000070d20 pc=0x52027d
net/http.(*persistConn).readLoop(0xc000413b00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2093 +0x1ac fp=0xc000070fc8 sp=0xc000070d40 pc=0x76b4cc
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc000070fe0 sp=0xc000070fc8 pc=0x769ca6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000070fe8 sp=0xc000070fe0 pc=0x46ec21
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 44 [select]:
runtime.gopark(0xc00006ff90?, 0x2?, 0xd8?, 0xfd?, 0xc00006ff24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00006fd90 sp=0xc00006fd70 pc=0x43e696
runtime.selectgo(0xc00006ff90, 0xc00006ff20, 0xc0004ca900?, 0x0, 0xc000093080?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00006fed0 sp=0xc00006fd90 pc=0x44e112
net/http.(*persistConn).writeLoop(0xc000413b00)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc00006ffc8 sp=0xc00006fed0 pc=0x76d1b5
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc00006ffe0 sp=0xc00006ffc8 pc=0x769c46
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00006ffe8 sp=0xc00006ffe0 pc=0x46ec21
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791

goroutine 53 [GC worker (idle)]:
runtime.gopark(0xe5da60?, 0xc00050c137?, 0x16?, 0x0?, 0x11b5fe8?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000218758 sp=0xc000218738 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc0002187e0 sp=0xc000218758 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002187e8 sp=0xc0002187e0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 54 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000218f58 sp=0xc000218f38 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc000218fe0 sp=0xc000218f58 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000218fe8 sp=0xc000218fe0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 55 [GC worker (idle)]:
runtime.gopark(0x4cab8ef95b32?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000219758 sp=0xc000219738 pc=0x43e696
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc0002197e0 sp=0xc000219758 pc=0x421485
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002197e8 sp=0xc0002197e0 pc=0x46ec21
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25
signal: aborted (core dumped)
FAIL	github.com/couchbase/indexing/secondary/tests/functionaltests	80.636s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_functests_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
curl: (7) Failed to connect to 127.0.0.1 port 9108 after 1 ms: Connection refused
2022/09/02 08:56:55 In TestMain()
2022/09/02 08:56:55 ChangeIndexerSettings: Host  Port 0 Nodes []
2022/09/02 08:56:55 Changing config key indexer.api.enableTestServer to value true
2022/09/02 08:56:55 Error in ChangeIndexerSettings: Post "http://:2/internal/settings": dial tcp :2: connect: connection refused
panic: Error in ChangeIndexerSettings: Post "http://:2/internal/settings": dial tcp :2: connect: connection refused


goroutine 1 [running]:
panic({0xcd25e0, 0xc00049d240})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/panic.go:941 +0x397 fp=0xc00002bd40 sp=0xc00002bc80 pc=0x43a6d7
log.Panicf({0xe3f566?, 0x1e?}, {0xc00002be38?, 0x1c?, 0xcc7ce0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/log/log.go:392 +0x67 fp=0xc00002bd88 sp=0xc00002bd40 pc=0x5bb407
github.com/couchbase/indexing/secondary/tests/framework/common.HandleError(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/framework/common/util.go:76
github.com/couchbase/indexing/secondary/tests/largedatatests.TestMain(0x445131?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/largedatatests/common_test.go:52 +0x468 fp=0xc00002bec8 sp=0xc00002bd88 pc=0xc5e708
main.main()
	_testmain.go:59 +0x1d3 fp=0xc00002bf80 sp=0xc00002bec8 pc=0xc647d3
runtime.main()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:250 +0x212 fp=0xc00002bfe0 sp=0xc00002bf80 pc=0x43d252
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00002bfe8 sp=0xc00002bfe0 pc=0x46dba1

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060fb0 sp=0xc000060f90 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.forcegchelper()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:301 +0xad fp=0xc000060fe0 sp=0xc000060fb0 pc=0x43d4ad
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000060fe8 sp=0xc000060fe0 pc=0x46dba1
created by runtime.init.6
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:289 +0x25

goroutine 18 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005c790 sp=0xc00005c770 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgsweep(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc00005c7c8 sp=0xc00005c790 pc=0x428777
runtime.gcenable.func1()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x26 fp=0xc00005c7e0 sp=0xc00005c7c8 pc=0x41e326
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005c7e8 sp=0xc00005c7e0 pc=0x46dba1
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:177 +0x6b

goroutine 19 [GC scavenge wait]:
runtime.gopark(0x4cbe1f560a49?, 0x10000?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005cf20 sp=0xc00005cf00 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.bgscavenge(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgcscavenge.go:364 +0x2a5 fp=0xc00005cfc8 sp=0xc00005cf20 pc=0x426585
runtime.gcenable.func2()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0x26 fp=0xc00005cfe0 sp=0xc00005cfc8 pc=0x41e2c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005cfe8 sp=0xc00005cfe0 pc=0x46dba1
created by runtime.gcenable
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:178 +0xaa

goroutine 3 [finalizer wait]:
runtime.gopark(0x0?, 0xea3598?, 0x60?, 0xe0?, 0x2000000020?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000060630 sp=0xc000060610 pc=0x43d616
runtime.goparkunlock(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:367
runtime.runfinq()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:177 +0xb3 fp=0xc0000607e0 sp=0xc000060630 pc=0x41d3d3
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000607e8 sp=0xc0000607e0 pc=0x46dba1
created by runtime.createfing
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mfinal.go:157 +0x45

goroutine 34 [select]:
runtime.gopark(0xc000061798?, 0x2?, 0xc7?, 0xd6?, 0xc00006178c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000061618 sp=0xc0000615f8 pc=0x43d616
runtime.selectgo(0xc000061798, 0xc000061788, 0x40af39?, 0x0, 0x8?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000061758 sp=0xc000061618 pc=0x44d092
github.com/couchbase/cbauth/cbauthimpl.(*tlsNotifier).loop(0xc0002100f0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:396 +0x67 fp=0xc0000617c8 sp=0xc000061758 pc=0x779647
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func2()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x26 fp=0xc0000617e0 sp=0xc0000617c8 pc=0x77a2c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000617e8 sp=0xc0000617e0 pc=0x46dba1
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:561 +0x37a

goroutine 35 [select]:
runtime.gopark(0xc000242798?, 0x2?, 0x0?, 0x0?, 0xc00024278c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000242608 sp=0xc0002425e8 pc=0x43d616
runtime.selectgo(0xc000242798, 0xc000242788, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000242748 sp=0xc000242608 pc=0x44d092
github.com/couchbase/cbauth/cbauthimpl.(*cfgChangeNotifier).loop(0xc000210108)
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:316 +0x85 fp=0xc0002427c8 sp=0xc000242748 pc=0x779065
github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest.func3()
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x26 fp=0xc0002427e0 sp=0xc0002427c8 pc=0x77a266
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002427e8 sp=0xc0002427e0 pc=0x46dba1
created by github.com/couchbase/cbauth/cbauthimpl.NewSVCForTest
	/opt/build/goproj/src/github.com/couchbase/cbauth/cbauthimpl/impl.go:562 +0x3ca

goroutine 36 [IO wait]:
runtime.gopark(0xc00022a340?, 0xc000052a00?, 0x70?, 0x38?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0000cd800 sp=0xc0000cd7e0 pc=0x43d616
runtime.netpollblock(0xc00018f000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc0000cd838 sp=0xc0000cd800 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fed7a2462d8, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc0000cd858 sp=0xc0000cd838 pc=0x468189
internal/poll.(*pollDesc).wait(0xc000032200?, 0xc00018f000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc0000cd880 sp=0xc0000cd858 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000032200, {0xc00018f000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc0000cd900 sp=0xc0000cd880 pc=0x4a211a
net.(*netFD).Read(0xc000032200, {0xc00018f000?, 0xc00049c3a0?, 0xc?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc0000cd948 sp=0xc0000cd900 pc=0x665589
net.(*conn).Read(0xc000010518, {0xc00018f000?, 0x11?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc0000cd990 sp=0xc0000cd948 pc=0x674aa5
bufio.(*Reader).Read(0xc00009e5a0, {0xc0004f6001, 0x5ff, 0x4528b4?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:236 +0x1b4 fp=0xc0000cd9c8 sp=0xc0000cd990 pc=0x51dd14
github.com/couchbase/cbauth/revrpc.(*minirwc).Read(0x203000?, {0xc0004f6001?, 0x203000?, 0xc00045a7c0?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:102 +0x25 fp=0xc0000cd9f8 sp=0xc0000cd9c8 pc=0x7b9da5
encoding/json.(*Decoder).refill(0xc0004a8280)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:165 +0x17f fp=0xc0000cda48 sp=0xc0000cd9f8 pc=0x562fff
encoding/json.(*Decoder).readValue(0xc0004a8280)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:140 +0xbb fp=0xc0000cda98 sp=0xc0000cda48 pc=0x562bfb
encoding/json.(*Decoder).Decode(0xc0004a8280, {0xcfd5c0, 0xc0003e6ec0})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:63 +0x78 fp=0xc0000cdac8 sp=0xc0000cda98 pc=0x562858
net/rpc/jsonrpc.(*serverCodec).ReadRequestHeader(0xc0003e6ea0, 0xc00045a7c0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/jsonrpc/server.go:66 +0x85 fp=0xc0000cdb08 sp=0xc0000cdac8 pc=0x7b9005
github.com/couchbase/cbauth/revrpc.(*jsonServerCodec).ReadRequestHeader(0xc000217900?, 0x4cc008?)
	<autogenerated>:1 +0x2a fp=0xc0000cdb28 sp=0xc0000cdb08 pc=0x7bbb2a
net/rpc.(*Server).readRequestHeader(0xc000217900, {0xf8d6e8, 0xc00049c320})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:587 +0x66 fp=0xc0000cdbf8 sp=0xc0000cdb28 pc=0x7b8726
net/rpc.(*Server).readRequest(0x0?, {0xf8d6e8, 0xc00049c320})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:547 +0x3b fp=0xc0000cdcd0 sp=0xc0000cdbf8 pc=0x7b827b
net/rpc.(*Server).ServeCodec(0xc000217900, {0xf8d6e8?, 0xc00049c320})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/rpc/server.go:462 +0x87 fp=0xc0000cddc8 sp=0xc0000cdcd0 pc=0x7b79a7
github.com/couchbase/cbauth/revrpc.(*Service).Run(0xc000242f60?, 0xc000073fa0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:192 +0x5d9 fp=0xc0000cdf38 sp=0xc0000cddc8 pc=0x7ba4f9
github.com/couchbase/cbauth/revrpc.BabysitService(0x0?, 0x0?, {0xf84480?, 0xc00000e618?})
	/opt/build/goproj/src/github.com/couchbase/cbauth/revrpc/revrpc.go:288 +0x58 fp=0xc0000cdf70 sp=0xc0000cdf38 pc=0x7babf8
github.com/couchbase/cbauth.runRPCForSvc(0x0?, 0xc00023c000)
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:57 +0xbd fp=0xc0000cdfc0 sp=0xc0000cdf70 pc=0x7c44fd
github.com/couchbase/cbauth.startDefault.func1()
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:66 +0x25 fp=0xc0000cdfe0 sp=0xc0000cdfc0 pc=0x7c47e5
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000cdfe8 sp=0xc0000cdfe0 pc=0x46dba1
created by github.com/couchbase/cbauth.startDefault
	/opt/build/goproj/src/github.com/couchbase/cbauth/default.go:65 +0xf9

goroutine 37 [chan receive]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000243678 sp=0xc000243658 pc=0x43d616
runtime.chanrecv(0xc0003e6de0, 0xc000243790, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:577 +0x56c fp=0xc000243708 sp=0xc000243678 pc=0x40b5cc
runtime.chanrecv2(0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:445 +0x18 fp=0xc000243730 sp=0xc000243708 pc=0x40b038
github.com/couchbase/goutils/systemeventlog.(*SystemEventLoggerImpl).logEvents(0xc00026d880)
	/opt/build/goproj/src/github.com/couchbase/goutils/systemeventlog/system_event_logger.go:186 +0xb7 fp=0xc0002437c8 sp=0xc000243730 pc=0xb00157
github.com/couchbase/goutils/systemeventlog.NewSystemEventLogger.func1()
	/opt/build/goproj/src/github.com/couchbase/goutils/systemeventlog/system_event_logger.go:125 +0x26 fp=0xc0002437e0 sp=0xc0002437c8 pc=0xaffa66
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0002437e8 sp=0xc0002437e0 pc=0x46dba1
created by github.com/couchbase/goutils/systemeventlog.NewSystemEventLogger
	/opt/build/goproj/src/github.com/couchbase/goutils/systemeventlog/system_event_logger.go:125 +0x1d6

goroutine 58 [select]:
runtime.gopark(0xc00005f790?, 0x2?, 0x0?, 0x30?, 0xc00005f774?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005f5f0 sp=0xc00005f5d0 pc=0x43d616
runtime.selectgo(0xc00005f790, 0xc00005f770, 0xe6221d?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00005f730 sp=0xc00005f5f0 pc=0x44d092
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).ticker(0xc0000f6070)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:652 +0x158 fp=0xc00005f7c8 sp=0xc00005f730 pc=0xacb498
github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:231 +0x26 fp=0xc00005f7e0 sp=0xc00005f7c8 pc=0xac73e6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005f7e8 sp=0xc00005f7e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:231 +0x2f6

goroutine 52 [GC worker (idle)]:
runtime.gopark(0x4cbe1f57b834?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00023ef58 sp=0xc00023ef38 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00023efe0 sp=0xc00023ef58 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00023efe8 sp=0xc00023efe0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 50 [select]:
runtime.gopark(0xc000074f68?, 0x4?, 0x3?, 0x0?, 0xc000074db0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000074c00 sp=0xc000074be0 pc=0x43d616
runtime.selectgo(0xc000074f68, 0xc000074da8, 0xc000112940?, 0x0, 0x1?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000074d40 sp=0xc000074c00 pc=0x44d092
net/http.(*persistConn).readLoop(0xc000134120)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2213 +0xda5 fp=0xc000074fc8 sp=0xc000074d40 pc=0x7672a5
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc000074fe0 sp=0xc000074fc8 pc=0x764e86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000074fe8 sp=0xc000074fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 51 [select]:
runtime.gopark(0xc00006ef90?, 0x2?, 0xd8?, 0xed?, 0xc00006ef24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00006ed90 sp=0xc00006ed70 pc=0x43d616
runtime.selectgo(0xc00006ef90, 0xc00006ef20, 0xc00050a000?, 0x0, 0xc000189020?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00006eed0 sp=0xc00006ed90 pc=0x44d092
net/http.(*persistConn).writeLoop(0xc000134120)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc00006efc8 sp=0xc00006eed0 pc=0x768395
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc00006efe0 sp=0xc00006efc8 pc=0x764e26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00006efe8 sp=0xc00006efe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791

goroutine 22 [GC worker (idle)]:
runtime.gopark(0x4cbe1f57b2a3?, 0x0?, 0x0?, 0x0?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005df58 sp=0xc00005df38 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005dfe0 sp=0xc00005df58 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005dfe8 sp=0xc00005dfe0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 23 [GC worker (idle)]:
runtime.gopark(0x4cbe1f4f7a50?, 0x3?, 0x75?, 0xbf?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005e758 sp=0xc00005e738 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005e7e0 sp=0xc00005e758 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005e7e8 sp=0xc00005e7e0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 24 [GC worker (idle)]:
runtime.gopark(0x4cbe1f596667?, 0x3?, 0x4b?, 0x40?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005ef58 sp=0xc00005ef38 pc=0x43d616
runtime.gcBgMarkWorker()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1207 +0xe5 fp=0xc00005efe0 sp=0xc00005ef58 pc=0x420405
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005efe8 sp=0xc00005efe0 pc=0x46dba1
created by runtime.gcBgMarkStartWorkers
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/mgc.go:1131 +0x25

goroutine 11 [select]:
runtime.gopark(0xc00013f790?, 0x2?, 0x0?, 0x0?, 0xc00013f78c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00013f618 sp=0xc00013f5f8 pc=0x43d616
runtime.selectgo(0xc00013f790, 0xc00013f788, 0x0?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00013f758 sp=0xc00013f618 pc=0x44d092
github.com/couchbase/indexing/secondary/queryport/client.(*GsiClient).listenMetaChange(0xc000246b00, 0xc00023a380)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1830 +0x70 fp=0xc00013f7c0 sp=0xc00013f758 pc=0xc248d0
github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1823 +0x2a fp=0xc00013f7e0 sp=0xc00013f7c0 pc=0xc2482a
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00013f7e8 sp=0xc00013f7e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1823 +0x28a

goroutine 55 [select]:
runtime.gopark(0xc00005d798?, 0x2?, 0x0?, 0x30?, 0xc00005d78c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00005d618 sp=0xc00005d5f8 pc=0x43d616
runtime.selectgo(0xc00005d798, 0xc00005d788, 0xc000100000?, 0x0, 0xf8d598?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00005d758 sp=0xc00005d618 pc=0x44d092
github.com/couchbase/indexing/secondary/queryport/client.(*schedTokenMonitor).updater(0xc000090230)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2389 +0x92 fp=0xc00005d7c8 sp=0xc00005d758 pc=0xc37f32
github.com/couchbase/indexing/secondary/queryport/client.newSchedTokenMonitor.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2186 +0x26 fp=0xc00005d7e0 sp=0xc00005d7c8 pc=0xc36866
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00005d7e8 sp=0xc00005d7e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.newSchedTokenMonitor
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2186 +0x2e5

goroutine 25 [select]:
runtime.gopark(0xc0001b1ba0?, 0x3?, 0x0?, 0x30?, 0xc0001b1b1a?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0001b1978 sp=0xc0001b1958 pc=0x43d616
runtime.selectgo(0xc0001b1ba0, 0xc0001b1b14, 0x3?, 0x0, 0x0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0001b1ab8 sp=0xc0001b1978 pc=0x44d092
github.com/couchbase/cbauth/metakv.doRunObserveChildren(0xc00020a7f0?, {0xe504e2, 0x1b}, 0xc0001b1e68, 0xc0000425a0)
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:301 +0x429 fp=0xc0001b1e40 sp=0xc0001b1ab8 pc=0x9b8289
github.com/couchbase/cbauth/metakv.(*store).runObserveChildren(...)
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:259
github.com/couchbase/cbauth/metakv.RunObserveChildren({0xe504e2?, 0xc000122200?}, 0xc00023e638?, 0x0?)
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:389 +0x58 fp=0xc0001b1e88 sp=0xc0001b1e40 pc=0x9b8838
github.com/couchbase/indexing/secondary/manager/common.(*CommandListener).ListenTokens.func2.1(0x100000000000000?, {0x0?, 0x0?})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/manager/common/token.go:1579 +0xc7 fp=0xc0001b1f00 sp=0xc0001b1e88 pc=0xb050e7
github.com/couchbase/indexing/secondary/common.(*RetryHelper).Run(0xc0001b1fa0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/retry_helper.go:36 +0x83 fp=0xc0001b1f38 sp=0xc0001b1f00 pc=0xace643
github.com/couchbase/indexing/secondary/manager/common.(*CommandListener).ListenTokens.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/manager/common/token.go:1584 +0xdf fp=0xc0001b1fe0 sp=0xc0001b1f38 pc=0xb04f9f
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001b1fe8 sp=0xc0001b1fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/manager/common.(*CommandListener).ListenTokens
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/manager/common/token.go:1572 +0xaf

goroutine 45 [select]:
runtime.gopark(0xc000071f68?, 0x4?, 0x3?, 0x0?, 0xc000071db0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000071c00 sp=0xc000071be0 pc=0x43d616
runtime.selectgo(0xc000071f68, 0xc000071da8, 0xc00050a280?, 0x0, 0x44c701?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000071d40 sp=0xc000071c00 pc=0x44d092
net/http.(*persistConn).readLoop(0xc000135440)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2213 +0xda5 fp=0xc000071fc8 sp=0xc000071d40 pc=0x7672a5
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc000071fe0 sp=0xc000071fc8 pc=0x764e86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000071fe8 sp=0xc000071fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 46 [select]:
runtime.gopark(0xc000543f90?, 0x2?, 0xd8?, 0x3d?, 0xc000543f24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000543d90 sp=0xc000543d70 pc=0x43d616
runtime.selectgo(0xc000543f90, 0xc000543f20, 0xc000112900?, 0x0, 0xc000117ce0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000543ed0 sp=0xc000543d90 pc=0x44d092
net/http.(*persistConn).writeLoop(0xc000135440)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc000543fc8 sp=0xc000543ed0 pc=0x768395
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc000543fe0 sp=0xc000543fc8 pc=0x764e26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000543fe8 sp=0xc000543fe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791

goroutine 12 [select]:
runtime.gopark(0xc00013ff50?, 0x2?, 0x0?, 0x30?, 0xc00013ff14?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00013fd90 sp=0xc00013fd70 pc=0x43d616
runtime.selectgo(0xc00013ff50, 0xc00013ff10, 0xe6f1ce?, 0x0, 0xcd1be0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc00013fed0 sp=0xc00013fd90 pc=0x44d092
github.com/couchbase/indexing/secondary/queryport/client.(*GsiClient).logstats(0xc000246b00, 0xc00023a380)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1849 +0x238 fp=0xc00013ffc0 sp=0xc00013fed0 pc=0xc24b58
github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1824 +0x2a fp=0xc00013ffe0 sp=0xc00013ffc0 pc=0xc247ca
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00013ffe8 sp=0xc00013ffe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.makeWithMetaProvider
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/client.go:1824 +0x2ed

goroutine 10 [chan receive]:
runtime.gopark(0xc00023e6d8?, 0x4431bb?, 0x20?, 0xe7?, 0x459dc5?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc00023e6c8 sp=0xc00023e6a8 pc=0x43d616
runtime.chanrecv(0xc000114f60, 0x0, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:577 +0x56c fp=0xc00023e758 sp=0xc00023e6c8 pc=0x40b5cc
runtime.chanrecv1(0xdf8475800?, 0xc0001581e0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/chan.go:440 +0x18 fp=0xc00023e780 sp=0xc00023e758 pc=0x40aff8
github.com/couchbase/indexing/secondary/queryport/client.(*metadataClient).logstats(0xc0000c04d0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:1410 +0x79 fp=0xc00023e7c8 sp=0xc00023e780 pc=0xc309d9
github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:150 +0x26 fp=0xc00023e7e0 sp=0xc00023e7c8 pc=0xc29366
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc00023e7e8 sp=0xc00023e7e0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:150 +0x605

goroutine 7 [select]:
runtime.gopark(0xc0001abf68?, 0x4?, 0x3?, 0x0?, 0xc0001abdb0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0001abc00 sp=0xc0001abbe0 pc=0x43d616
runtime.selectgo(0xc0001abf68, 0xc0001abda8, 0xc00050a180?, 0x0, 0xc00023e501?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0001abd40 sp=0xc0001abc00 pc=0x44d092
net/http.(*persistConn).readLoop(0xc000520120)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2213 +0xda5 fp=0xc0001abfc8 sp=0xc0001abd40 pc=0x7672a5
net/http.(*Transport).dialConn.func5()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x26 fp=0xc0001abfe0 sp=0xc0001abfc8 pc=0x764e86
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001abfe8 sp=0xc0001abfe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1750 +0x173e

goroutine 9 [runnable]:
runtime.gopark(0xc0000c8ea8?, 0x6?, 0x0?, 0x30?, 0xc0000c8cbc?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0000c8b20 sp=0xc0000c8b00 pc=0x43d616
runtime.selectgo(0xc0000c8ea8, 0xc0000c8cb0, 0xe45617?, 0x0, 0xc0000c8ef0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0000c8c60 sp=0xc0000c8b20 pc=0x44d092
net/http.(*persistConn).roundTrip(0xc000135440, 0xc000112840)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2620 +0x974 fp=0xc0000c8f18 sp=0xc0000c8c60 pc=0x769254
net/http.(*Transport).roundTrip(0x170e3e0, 0xc000123200)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:594 +0x7c9 fp=0xc0000c9150 sp=0xc0000c8f18 pc=0x75cce9
net/http.(*Transport).RoundTrip(0xc000123200?, 0xf83c00?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/roundtrip.go:17 +0x19 fp=0xc0000c9170 sp=0xc0000c9150 pc=0x744f19
net/http.send(0xc000123100, {0xf83c00, 0x170e3e0}, {0xdceb60?, 0x48e901?, 0x1787d20?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:252 +0x5d8 fp=0xc0000c9350 sp=0xc0000c9170 pc=0x706818
net/http.(*Client).send(0xc000117b30, 0xc000123100, {0x7fed515ac4d0?, 0x150?, 0x1787d20?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:176 +0x9b fp=0xc0000c93c8 sp=0xc0000c9350 pc=0x7060bb
net/http.(*Client).do(0xc000117b30, 0xc000123100)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:725 +0x8f5 fp=0xc0000c95c8 sp=0xc0000c93c8 pc=0x7084f5
net/http.(*Client).Do(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/client.go:593
github.com/couchbase/indexing/secondary/security.getWithAuthInternal({0xc000118a50?, 0x1b?}, 0xc0000c9960, {0x0, 0x0}, 0x0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/security/tls.go:669 +0x549 fp=0xc0000c96c8 sp=0xc0000c95c8 pc=0x8626c9
github.com/couchbase/indexing/secondary/security.GetWithAuthNonTLS(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/security/tls.go:604
github.com/couchbase/indexing/secondary/dcp.queryRestAPIOnLocalhost(0xc00012de60, {0xe3ebec, 0x6}, {0xd43bc0?, 0x1?}, {0xcb0680, 0xc000246298}, 0xc0001290e0?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:359 +0x1b3 fp=0xc0000c9910 sp=0xc0000c96c8 pc=0x884093
github.com/couchbase/indexing/secondary/dcp.(*Client).parseURLResponse(...)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:530
github.com/couchbase/indexing/secondary/dcp.ConnectWithAuth({0xc0001290e0, 0x3f}, {0xf83820?, 0xc00011a5a0})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:594 +0x125 fp=0xc0000c9988 sp=0xc0000c9910 pc=0x886045
github.com/couchbase/indexing/secondary/dcp.Connect({0xc0001290e0, 0x3f})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/dcp/pools.go:600 +0xce fp=0xc0000c9b20 sp=0xc0000c9988 pc=0x8861ee
github.com/couchbase/indexing/secondary/common.NewServicesChangeNotifier({0xc0001290e0, 0x3f}, {0xe3fad7, 0x7}, {0xe42d8c, 0xb})
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/services_notifier.go:239 +0x1a8 fp=0xc0000c9da0 sp=0xc0000c9b20 pc=0xad0f48
github.com/couchbase/indexing/secondary/queryport/client.(*metadataClient).watchClusterChanges(0xc0000c04d0)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:2071 +0xff fp=0xc0000c9fc8 sp=0xc0000c9da0 pc=0xc35c3f
github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:149 +0x26 fp=0xc0000c9fe0 sp=0xc0000c9fc8 pc=0xc293c6
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0000c9fe8 sp=0xc0000c9fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/queryport/client.newMetaBridgeClient
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/queryport/client/meta_client.go:149 +0x5c5

goroutine 8 [select]:
runtime.gopark(0xc0001acf90?, 0x2?, 0xd8?, 0xcd?, 0xc0001acf24?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0001acd90 sp=0xc0001acd70 pc=0x43d616
runtime.selectgo(0xc0001acf90, 0xc0001acf20, 0xc0002041c0?, 0x0, 0xc00048f4a0?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc0001aced0 sp=0xc0001acd90 pc=0x44d092
net/http.(*persistConn).writeLoop(0xc000520120)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2392 +0xf5 fp=0xc0001acfc8 sp=0xc0001aced0 pc=0x768395
net/http.(*Transport).dialConn.func6()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x26 fp=0xc0001acfe0 sp=0xc0001acfc8 pc=0x764e26
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001acfe8 sp=0xc0001acfe0 pc=0x46dba1
created by net/http.(*Transport).dialConn
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1751 +0x1791

goroutine 28 [IO wait]:
runtime.gopark(0xc00050e680?, 0xc000050500?, 0x10?, 0x7a?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0001a79a0 sp=0xc0001a7980 pc=0x43d616
runtime.netpollblock(0xc0001b6000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc0001a79d8 sp=0xc0001a79a0 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fed7a245f18, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc0001a79f8 sp=0xc0001a79d8 pc=0x468189
internal/poll.(*pollDesc).wait(0xc00015c080?, 0xc0001b6000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc0001a7a20 sp=0xc0001a79f8 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc00015c080, {0xc0001b6000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc0001a7aa0 sp=0xc0001a7a20 pc=0x4a211a
net.(*netFD).Read(0xc00015c080, {0xc0001b6000?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc0001a7ae8 sp=0xc0001a7aa0 pc=0x665589
net.(*conn).Read(0xc00020e0c8, {0xc0001b6000?, 0x0?, 0xc00050c010?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc0001a7b30 sp=0xc0001a7ae8 pc=0x674aa5
net/http.(*persistConn).Read(0xc000520120, {0xc0001b6000?, 0x1000?, 0x1000?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc0001a7b90 sp=0xc0001a7b30 pc=0x76588e
bufio.(*Reader).fill(0xc0003e72c0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc0001a7bc8 sp=0xc0001a7b90 pc=0x51d743
bufio.(*Reader).ReadSlice(0xc0003e72c0, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:371 +0x2f fp=0xc0001a7c18 sp=0xc0001a7bc8 pc=0x51e32f
net/http/internal.readChunkLine(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:129 +0x25 fp=0xc0001a7c68 sp=0xc0001a7c18 pc=0x7036c5
net/http/internal.(*chunkedReader).beginChunk(0xc000502480)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:48 +0x28 fp=0xc0001a7c98 sp=0xc0001a7c68 pc=0x703148
net/http/internal.(*chunkedReader).Read(0xc000502480, {0xc000164400?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:98 +0x14e fp=0xc0001a7d18 sp=0xc0001a7c98 pc=0x70340e
net/http.(*body).readLocked(0xc00050a180, {0xc000164400?, 0x0?, 0x0?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transfer.go:844 +0x3c fp=0xc0001a7d68 sp=0xc0001a7d18 pc=0x75a3fc
net/http.(*body).Read(0x1000000000000?, {0xc000164400?, 0x0?, 0x7fed7a2b75b8?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transfer.go:836 +0x125 fp=0xc0001a7de0 sp=0xc0001a7d68 pc=0x75a2c5
net/http.(*bodyEOFSignal).Read(0xc00050a1c0, {0xc000164400, 0x200, 0x200})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2774 +0x142 fp=0xc0001a7e60 sp=0xc0001a7de0 pc=0x769fc2
encoding/json.(*Decoder).refill(0xc0000b0140)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:165 +0x17f fp=0xc0001a7eb0 sp=0xc0001a7e60 pc=0x562fff
encoding/json.(*Decoder).readValue(0xc0000b0140)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:140 +0xbb fp=0xc0001a7f00 sp=0xc0001a7eb0 pc=0x562bfb
encoding/json.(*Decoder).Decode(0xc0000b0140, {0xcaee80, 0xc000152190})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/encoding/json/stream.go:63 +0x78 fp=0xc0001a7f30 sp=0xc0001a7f00 pc=0x562858
github.com/couchbase/cbauth/metakv.doRunObserveChildren.func1()
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:284 +0x10b fp=0xc0001a7fe0 sp=0xc0001a7f30 pc=0x9b872b
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001a7fe8 sp=0xc0001a7fe0 pc=0x46dba1
created by github.com/couchbase/cbauth/metakv.doRunObserveChildren
	/opt/build/goproj/src/github.com/couchbase/cbauth/metakv/metakv.go:280 +0x2eb

goroutine 59 [IO wait]:
runtime.gopark(0xc0001036c0?, 0xc000052a00?, 0x60?, 0x57?, 0x483482?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc0001b56f0 sp=0xc0001b56d0 pc=0x43d616
runtime.netpollblock(0xc000508000?, 0x1000?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:522 +0xf7 fp=0xc0001b5728 sp=0xc0001b56f0 pc=0x4360b7
internal/poll.runtime_pollWait(0x7fed7a2461e8, 0x72)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/netpoll.go:302 +0x89 fp=0xc0001b5748 sp=0xc0001b5728 pc=0x468189
internal/poll.(*pollDesc).wait(0xc000126280?, 0xc000508000?, 0x0)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:83 +0x32 fp=0xc0001b5770 sp=0xc0001b5748 pc=0x4a0db2
internal/poll.(*pollDesc).waitRead(...)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_poll_runtime.go:88
internal/poll.(*FD).Read(0xc000126280, {0xc000508000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/internal/poll/fd_unix.go:167 +0x25a fp=0xc0001b57f0 sp=0xc0001b5770 pc=0x4a211a
net.(*netFD).Read(0xc000126280, {0xc000508000?, 0xc000188f60?, 0xc000510f20?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/fd_posix.go:55 +0x29 fp=0xc0001b5838 sp=0xc0001b57f0 pc=0x665589
net.(*conn).Read(0xc000504000, {0xc000508000?, 0xc0001b58b0?, 0x744f19?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/net.go:183 +0x45 fp=0xc0001b5880 sp=0xc0001b5838 pc=0x674aa5
net/http.(*persistConn).Read(0xc000134120, {0xc000508000?, 0x7fed5052b438?, 0x150?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:1929 +0x4e fp=0xc0001b58e0 sp=0xc0001b5880 pc=0x76588e
bufio.(*Reader).fill(0xc000506000)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc0001b5918 sp=0xc0001b58e0 pc=0x51d743
bufio.(*Reader).ReadSlice(0xc000506000, 0xa8?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:371 +0x2f fp=0xc0001b5968 sp=0xc0001b5918 pc=0x51e32f
net/http/internal.readChunkLine(0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:129 +0x25 fp=0xc0001b59b8 sp=0xc0001b5968 pc=0x7036c5
net/http/internal.(*chunkedReader).beginChunk(0xc000117d40)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:48 +0x28 fp=0xc0001b59e8 sp=0xc0001b59b8 pc=0x703148
net/http/internal.(*chunkedReader).Read(0xc000117d40, {0xc000186000?, 0x8?, 0xc000065400?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/internal/chunked.go:98 +0x14e fp=0xc0001b5a68 sp=0xc0001b59e8 pc=0x70340e
net/http.(*body).readLocked(0xc000112940, {0xc000186000?, 0x0?, 0x1?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transfer.go:844 +0x3c fp=0xc0001b5ab8 sp=0xc0001b5a68 pc=0x75a3fc
net/http.(*body).Read(0x30?, {0xc000186000?, 0x0?, 0x2c?})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transfer.go:836 +0x125 fp=0xc0001b5b30 sp=0xc0001b5ab8 pc=0x75a2c5
net/http.(*bodyEOFSignal).Read(0xc000112980, {0xc000186000, 0x1000, 0x1000})
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/net/http/transport.go:2774 +0x142 fp=0xc0001b5bb0 sp=0xc0001b5b30 pc=0x769fc2
bufio.(*Reader).fill(0xc0001b5f58)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:106 +0x103 fp=0xc0001b5be8 sp=0xc0001b5bb0 pc=0x51d743
bufio.(*Reader).ReadSlice(0xc0001b5f58, 0x4?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:371 +0x2f fp=0xc0001b5c38 sp=0xc0001b5be8 pc=0x51e32f
bufio.(*Reader).collectFragments(0xc0001b5d20?, 0x0?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:446 +0x74 fp=0xc0001b5ce8 sp=0xc0001b5c38 pc=0x51e774
bufio.(*Reader).ReadBytes(0xc0001b5de0?, 0x85?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/bufio/bufio.go:474 +0x1d fp=0xc0001b5d68 sp=0xc0001b5ce8 pc=0x51e91d
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).waitForChange(0xc00003a990?, 0x2c?, 0xe643a9?)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:620 +0x4b fp=0xc0001b5e18 sp=0xc0001b5d68 pc=0xacb16b
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).notifier(0xc0000f6070)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:709 +0x493 fp=0xc0001b5fc8 sp=0xc0001b5e18 pc=0xacba13
github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor.func2()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:232 +0x26 fp=0xc0001b5fe0 sp=0xc0001b5fc8 pc=0xac7386
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0001b5fe8 sp=0xc0001b5fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/common.newInternalVersionMonitor
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:232 +0x336

goroutine 60 [select]:
runtime.gopark(0xc000062f70?, 0x2?, 0x0?, 0x0?, 0xc000062f4c?)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/proc.go:361 +0xd6 fp=0xc000062dc8 sp=0xc000062da8 pc=0x43d616
runtime.selectgo(0xc000062f70, 0xc000062f48, 0xe70c96?, 0x0, 0xc000062f90?, 0x1)
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/select.go:328 +0x772 fp=0xc000062f08 sp=0xc000062dc8 pc=0x44d092
github.com/couchbase/indexing/secondary/common.(*internalVersionMonitor).monitor(0xc0000f6070)
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:589 +0x166 fp=0xc000062fc8 sp=0xc000062f08 pc=0xacae66
github.com/couchbase/indexing/secondary/common.MonitorInternalVersion.func1()
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:801 +0x26 fp=0xc000062fe0 sp=0xc000062fc8 pc=0xacc346
runtime.goexit()
	/home/buildbot/.cbdepscache/exploded/x86_64/go-1.18.4/go/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc000062fe8 sp=0xc000062fe0 pc=0x46dba1
created by github.com/couchbase/indexing/secondary/common.MonitorInternalVersion
	/opt/build/goproj/src/github.com/couchbase/indexing/secondary/common/internal_version.go:801 +0x125
signal: aborted (core dumped)
FAIL	github.com/couchbase/indexing/secondary/tests/largedatatests	0.137s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_largedata_pprof.log
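
The traceback above is the standard Go runtime goroutine dump (one "goroutine N [state]:" header per goroutine followed by its stack), printed when the largedatatests process aborted. For reference, a dump of the same general shape can be collected from a live Go process with the standard runtime/pprof package; the sketch below is purely illustrative and is not the code the test harness uses to write indexer_largedata_pprof.log (the output filename here is hypothetical).

    // Illustrative sketch only: capture a goroutine dump with runtime/pprof.
    // debug=2 prints each goroutine with its state and full stack, similar in
    // shape to the traceback above.
    package main

    import (
        "os"
        "runtime/pprof"
    )

    func main() {
        f, err := os.Create("goroutine_dump.log") // hypothetical output path
        if err != nil {
            panic(err)
        }
        defer f.Close()
        if err := pprof.Lookup("goroutine").WriteTo(f, 2); err != nil {
            panic(err)
        }
    }
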
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
curl: (7) Failed to connect to 127.0.0.1 port 9108 after 1 ms: Connection refused

Integration tests

echo "Running gsi integration tests with 4 node cluster"
Running gsi integration tests with 4 node cluster
scripts/start_cluster_and_run_tests.sh b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini conf/simple_gsi_n1ql.conf 1 1 gsi_type=plasma
Printing gsi_type=plasma
gsi_type=plasma
In here
-p makefile=True,gsi_type=plasma
/opt/build/testrunner /opt/build/testrunner
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory ns_dataclean
Built target ns_dataclean
make[1]: Leaving directory '/opt/build/ns_server'
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory all
[  0%] Built target event_ui_build_prepare
[100%] Built target ns_ui_build_prepare
[100%] Building Go Modules target ns_minify_js using Go 1.18.5
[100%] Built target ns_minify_js
[100%] Building Go Modules target ns_minify_css using Go 1.18.5
[100%] Built target ns_minify_css
[100%] Built target query_ui_build_prepare
[100%] Built target fts_ui_build_prepare
[100%] Built target cbas_ui_build_prepare
[100%] Built target backup_ui_build_prepare
[100%] Built target ui_build
==> enacl (compile)
[100%] Built target enacl
[100%] Built target kv_mappings
[100%] Built target ns_cfg
==> ale (compile)
[100%] Built target ale
==> chronicle (compile)
[100%] Built target chronicle
==> ns_server (compile)
[100%] Built target ns_server
==> gen_smtp (compile)
[100%] Built target gen_smtp
==> ns_babysitter (compile)
[100%] Built target ns_babysitter
==> ns_couchdb (compile)
[100%] Built target ns_couchdb
[100%] Building Go target ns_goport using Go 1.18.5
[100%] Built target ns_goport
[100%] Building Go target ns_generate_cert using Go 1.18.5
[100%] Built target ns_generate_cert
[100%] Building Go target ns_godu using Go 1.18.5
[100%] Built target ns_godu
[100%] Building Go target ns_gosecrets using Go 1.18.5
[100%] Built target ns_gosecrets
[100%] Building Go target ns_generate_hash using Go 1.18.5
[100%] Built target ns_generate_hash
==> chronicle (escriptize)
[100%] Built target chronicle_dump
make[1]: Leaving directory '/opt/build/ns_server'
/opt/build/testrunner
INFO:__main__:Checking arguments...
INFO:__main__:Conf filename: conf/simple_gsi_n1ql.conf
INFO:__main__:Test prefix: gsi.indexscans_gsi.SecondaryIndexingScanTests
INFO:__main__:Test prefix: gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests
INFO:__main__:Test prefix: gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests
INFO:__main__:TestRunner: start...
INFO:__main__:Global Test input params:
INFO:__main__:
Number of tests initially selected before GROUP filters: 11
INFO:__main__:--> Running test: gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi
INFO:__main__:Logs folder: /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_1
*** TestRunner ***
{'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi',
 'conf_file': 'conf/simple_gsi_n1ql.conf',
 'gsi_type': 'plasma',
 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini',
 'makefile': 'True',
 'num_nodes': 4,
 'spec': 'simple_gsi_n1ql'}
Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_1

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_1'}
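
The parameter dictionary above is testrunner's merge of the ini file with the "-p makefile=True,gsi_type=plasma" flag, i.e. a comma-separated list of key=value pairs. As a purely illustrative sketch (testrunner itself is Python; the function name below is made up), parsing that flag format looks like:

    // Illustrative sketch (not testrunner code): parse a "-p" style string of
    // comma-separated key=value pairs, e.g. "makefile=True,gsi_type=plasma".
    package main

    import (
        "fmt"
        "strings"
    )

    func parseParams(p string) map[string]string {
        out := make(map[string]string)
        for _, kv := range strings.Split(p, ",") {
            if kv == "" {
                continue
            }
            parts := strings.SplitN(kv, "=", 2)
            if len(parts) == 2 {
                out[parts[0]] = parts[1]
            }
        }
        return out
    }

    func main() {
        fmt.Println(parseParams("makefile=True,gsi_type=plasma"))
        // map[gsi_type:plasma makefile:True]
    }
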
Run before suite setup for gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index
suite_setUp (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... -->before_suite_name:gsi.indexscans_gsi.SecondaryIndexingScanTests.suite_setUp,suite: ]>
2022-09-02 08:57:40 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:57:40 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:57:40 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:57:40 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2022-09-02 08:57:43 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2022-09-02 08:57:49 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.2.0-1948-rel-EE-enterprise
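
The timestamps above (08:57:40, 08:57:43, 08:57:49, 08:58:01) show the REST helper retrying /pools/default with roughly doubling delays until ns_server starts answering; the subsequent 404 "unknown pool" indicates the node is up but no cluster has been initialised yet. Below is a minimal sketch of that wait-with-backoff pattern, with assumed attempt count and intervals; it is not the testrunner implementation.

    // Illustrative sketch: poll a REST endpoint with a doubling delay until it
    // answers. Endpoint, intervals and names are assumptions for illustration.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func waitForNode(url string, attempts int) error {
        delay := 3 * time.Second
        for i := 1; i <= attempts; i++ {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                // Any HTTP response, even 404 "unknown pool", means ns_server is up.
                fmt.Printf("attempt %d: node answered with %s\n", i, resp.Status)
                return nil
            }
            fmt.Printf("attempt %d: %v; retrying in %v\n", i, err, delay)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("no response from %s after %d attempts", url, attempts)
    }

    func main() {
        _ = waitForNode("http://127.0.0.1:9000/pools/default", 5)
    }
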
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [basetestcase.setUp] ==============  basetestcase setup was started for test #1 suite_setUp==============
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] cannot find service node index in cluster 
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:02 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:02 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:06 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 suite_setUp ==============
2022-09-02 08:58:06 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:06 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default/ body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:06 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 08:58:06 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9001/pools/default with status False: unknown pool
2022-09-02 08:58:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 08:58:06 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9002/pools/default with status False: unknown pool
2022-09-02 08:58:06 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9003/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9001/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9002/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9003/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 suite_setUp ==============
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2022-09-02 08:58:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:07 | INFO | MainProcess | MainThread | [basetestcase.setUp] initializing cluster
2022-09-02 08:58:08 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '22', 'memoryTotal': 15466930176, 'memoryFree': 12273364992, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 08:58:08 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [task.execute] quota for index service will be 256 MB
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [task.execute] set index quota to node 127.0.0.1 
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7650
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9000
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 08:58:08 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9001/pools/default with status False: unknown pool
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '22', 'memoryTotal': 15466930176, 'memoryFree': 12097347584, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 08:58:08 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9001/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9001
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 08:58:10 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9002/pools/default with status False: unknown pool
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '24', 'memoryTotal': 15466930176, 'memoryFree': 11983482880, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 08:58:10 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9002/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9002
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 08:58:10 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9003/pools/default with status False: unknown pool
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '18', 'memoryTotal': 15466930176, 'memoryFree': 11885596672, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 8184, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 08:58:10 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9003/pools/default body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9003
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:10 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 08:58:11 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
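
The four blocks above repeat the same per-node initialisation over REST (ports 9000-9003): set memory quotas on /pools/default, declare services via /node/controller/setupServices, set the admin credentials via /settings/web, and pick the GSI storage mode via /settings/indexes. A minimal sketch of that sequence, using only the endpoints and parameters visible in the log; the `requests` client and the helper name are assumptions (the harness's own on_prem_rest_client is not reproduced here):

    # Sketch of the per-node init calls logged above; endpoints/params come from the log.
    import requests

    AUTH = ("Administrator", "asdasd")   # credentials used throughout this run

    def init_node(host, port, services=("kv",), index_quota_mb=256, ram_quota_mb=7650):
        base = f"http://{host}:{port}"
        if "index" in services:
            # [set_service_memoryQuota] pools/default params : indexMemoryQuota=256
            requests.post(f"{base}/pools/default",
                          data={"indexMemoryQuota": index_quota_mb}, auth=AUTH)
        # [init_cluster_memoryQuota] pools/default params : memoryQuota=...
        requests.post(f"{base}/pools/default", data={"memoryQuota": ram_quota_mb}, auth=AUTH)
        # [init_node_services] /node/controller/setupServices
        requests.post(f"{base}/node/controller/setupServices",
                      data={"hostname": f"{host}:{port}", "user": AUTH[0],
                            "password": AUTH[1], "services": ",".join(services)}, auth=AUTH)
        # [init_cluster] settings/web: admin credentials and REST port
        requests.post(f"{base}/settings/web",
                      data={"port": port, "username": AUTH[0], "password": AUTH[1]}, auth=AUTH)
        # [set_indexer_storage_mode] settings/indexes params : storageMode=plasma
        requests.post(f"{base}/settings/indexes", data={"storageMode": "plasma"}, auth=AUTH)

    init_node("127.0.0.1", 9000, services=("kv", "index", "n1ql"))
    for port in (9001, 9002, 9003):
        init_node("127.0.0.1", port, ram_quota_mb=7906)
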
2022-09-02 08:58:11 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
2022-09-02 08:58:11 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/cbadminbucket body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 08:58:11 | INFO | MainProcess | MainThread | [internal_user.delete_user] Exception while deleting user. Exception is -b'"User was not found."'
2022-09-02 08:58:11 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 08:58:16 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
2022-09-02 08:58:16 | INFO | MainProcess | MainThread | [basetestcase.setUp] done initializing cluster
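
The 'cbadminbucket' user above is set up with a delete-then-create pattern: the DELETE returns 404 when the user does not exist yet (logged as an error but ignored), after which the user is created with the 'admin' role. A rough equivalent, assuming `requests` and the standard local-user RBAC endpoint; the PUT call and the placeholder password are assumptions, since only the DELETE appears in the log:

    # Sketch of the cbadminbucket setup: the 404 on DELETE is expected on a fresh node.
    import requests

    AUTH = ("Administrator", "asdasd")
    BASE = "http://127.0.0.1:9000"

    def recreate_builtin_user(user="cbadminbucket", password="password", roles="admin"):
        # A 404 here just means the user was not there yet (mirrors the log above).
        requests.delete(f"{BASE}/settings/rbac/users/local/{user}", auth=AUTH)
        resp = requests.put(f"{BASE}/settings/rbac/users/local/{user}",
                            data={"password": password, "roles": roles}, auth=AUTH)
        resp.raise_for_status()

    recreate_builtin_user()
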
2022-09-02 08:58:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:58:16 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:58:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:58:17 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 08:58:17 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 08:58:17 | INFO | MainProcess | MainThread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 08:58:17 | INFO | MainProcess | MainThread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 127.0.0.1
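
ns_server only honours /diag/eval requests from localhost until allow_nonlocal_eval is set, which is why the harness SSHes into the node and issues the curl shown above. The same step, sketched with subprocess around the exact curl command from the log (the libcurl "no version information" warning on stderr is harmless):

    # Sketch: flip allow_nonlocal_eval so later /diag/eval calls can come from the harness.
    import subprocess

    def enable_nonlocal_diag_eval(port, user="Administrator", password="asdasd"):
        cmd = ["curl", "--silent", "--show-error",
               f"http://{user}:{password}@localhost:{port}/diag/eval",
               "-X", "POST", "-d", "ns_config:set(allow_nonlocal_eval, true)."]
        out = subprocess.run(cmd, capture_output=True, text=True)
        return out.stdout.strip()          # expected to be 'ok', as logged above

    for port in (9000, 9001, 9002, 9003):
        enable_nonlocal_diag_eval(port)
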
2022-09-02 08:58:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
2022-09-02 08:58:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] 0.05 seconds to create bucket default
2022-09-02 08:58:17 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2022-09-02 08:59:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.vbucket_map_ready] vbucket map is not ready for bucket default after waiting 60 seconds
2022-09-02 08:59:18 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 08:59:18 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 08:59:19 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'default' was created with per node RAM quota: 7650
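
Bucket creation above is a single POST to /pools/default/buckets followed by a readiness wait; in this run the vbucket map took the full 60-second wait before memcached on port 12000 accepted operations. A sketch with the same parameters, polling the bucket details for a vBucket map as the readiness signal (the polling endpoint and field are assumptions; the harness's own wait logic is not shown in the log):

    # Sketch of create_bucket plus a readiness poll.
    import time
    import requests

    AUTH = ("Administrator", "asdasd")
    BASE = "http://127.0.0.1:9000"

    def create_default_bucket(ram_quota_mb=7650, timeout_s=120):
        params = {"name": "default", "ramQuotaMB": ram_quota_mb, "replicaNumber": 1,
                  "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
                  "flushEnabled": 1, "evictionPolicy": "valueOnly",
                  "compressionMode": "passive", "storageBackend": "couchstore"}
        requests.post(f"{BASE}/pools/default/buckets", data=params, auth=AUTH).raise_for_status()
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            details = requests.get(f"{BASE}/pools/default/buckets/default", auth=AUTH).json()
            if details.get("vBucketServerMap", {}).get("vBucketMap"):
                return details            # bucket is ready for set ops
            time.sleep(2)
        raise TimeoutError("vbucket map not ready for bucket 'default'")
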
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [basetestcase.setUp] ==============  basetestcase setup was finished for test #1 suite_setUp ==============
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:59:19 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.535007946181358, 'mem_free': 13922205696, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [basetestcase.setUp] Time to execute basesetup : 102.2538366317749
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings_internal] {'indexer.settings.storage_mode': 'plasma'} set
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [newtuq.setUp] Allowing the indexer to complete restart after setting the internal settings
2022-09-02 08:59:22 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 08:59:27 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings_internal] {'indexer.api.enableTestServer': True} set
2022-09-02 08:59:27 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 08:59:27 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 08:59:28 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 08:59:28 | INFO | MainProcess | MainThread | [basetestcase.load] create 2016.0 to default documents...
2022-09-02 08:59:29 | INFO | MainProcess | MainThread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 08:59:30 | INFO | MainProcess | MainThread | [basetestcase.load] LOAD IS FINISHED
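
The harness loads the 2016 generated documents through a direct memcached client on port 12000 (data_helper.direct_client above). Against a standard install the same load could be done with the Couchbase Python SDK; the connection string, key pattern and document shape below are assumptions for illustration, not what the framework actually does:

    # Sketch of an equivalent document load via the Python SDK (4.x import paths).
    from couchbase.auth import PasswordAuthenticator
    from couchbase.cluster import Cluster
    from couchbase.options import ClusterOptions

    cluster = Cluster("couchbase://127.0.0.1",
                      ClusterOptions(PasswordAuthenticator("Administrator", "asdasd")))
    collection = cluster.bucket("default").default_collection()
    for i in range(2016):
        collection.upsert(f"doc_{i}", {"join_yr": 2010 + i % 7, "name": f"employee-{i}"})
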
2022-09-02 08:59:30 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 08:59:30 | INFO | MainProcess | MainThread | [newtuq.setUp] ip:127.0.0.1 port:9000 ssh_username:Administrator
2022-09-02 08:59:30 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 30 secs.  ...
2022-09-02 09:00:00 | INFO | MainProcess | MainThread | [tuq_helper.create_primary_index] Check if index existed in default on server 127.0.0.1
2022-09-02 09:00:01 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 09:00:01 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 09:00:01 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 137.571381ms
2022-09-02 09:00:01 | ERROR | MainProcess | MainThread | [tuq_helper._is_index_in_list] Fail to get index list.  List output: {'requestID': '50868314-7688-4ae3-9dce-8e55557ac594', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '137.571381ms', 'executionTime': '137.498818ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
2022-09-02 09:00:01 | INFO | MainProcess | MainThread | [tuq_helper.create_primary_index] Create primary index
2022-09-02 09:00:01 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY CREATE PRIMARY INDEX ON default 
2022-09-02 09:00:01 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=CREATE+PRIMARY+INDEX+ON+default+
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 792.539005ms
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [tuq_helper.create_primary_index] Check if index is online
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 6.324528ms
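
The primary-index step above follows a check-then-create pattern: query system:indexes for '#primary', create the index only if the result set is empty (the first query returns resultCount 0, hence the "Fail to get index list" message), then re-check that it is online. Sketched against the query service REST API; the query port is not visible in the log, so 8093, the standard one, is assumed here:

    # Sketch of tuq_helper.create_primary_index: check system:indexes, create if missing.
    import requests

    AUTH = ("Administrator", "asdasd")
    QUERY_URL = "http://127.0.0.1:8093/query/service"   # port is an assumption

    def run_cbq(statement):
        resp = requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH)
        resp.raise_for_status()
        return resp.json()

    def ensure_primary_index(keyspace="default"):
        existing = run_cbq("SELECT * FROM system:indexes WHERE name = '#primary'")
        if not existing["results"]:
            run_cbq(f"CREATE PRIMARY INDEX ON {keyspace}")
        return run_cbq("SELECT * FROM system:indexes WHERE name = '#primary'")
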
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings] {'queryport.client.waitForScheduledIndex': False} set
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_index_settings] {'indexer.allowScheduleCreateRebal': True} set
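
The two indexer settings above are applied by POSTing a small JSON body straight to the indexer admin port's /settings endpoint (9102 for node n_0 in this run) rather than to ns_server. A sketch of that call, again assuming `requests`:

    # Sketch of set_index_settings: JSON body POSTed to the indexer admin port.
    import json
    import requests

    AUTH = ("Administrator", "asdasd")

    def set_index_setting(setting, value, host="127.0.0.1", admin_port=9102):
        resp = requests.post(f"http://{host}:{admin_port}/settings",
                             data=json.dumps({setting: value}), auth=AUTH)
        resp.raise_for_status()

    set_index_setting("queryport.client.waitForScheduledIndex", False)
    set_index_setting("indexer.allowScheduleCreateRebal", True)
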
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2022-09-02 09:00:02 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with Administrator
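
The panic check above first asks ns_server for its log directory via /diag/eval (the Erlang expression returns "/opt/build/ns_server/logs/n_0"), then zgreps the indexer and projector logs for "panic". A sketch of that check, run locally since this is a single-machine cluster_run; skipping the SSH hop shown in the log is an assumption:

    # Sketch of the panic scan: locate the node's log dir, then count "panic" lines.
    import subprocess
    import requests

    AUTH = ("Administrator", "asdasd")
    BASE = "http://127.0.0.1:9000"

    def count_panics(component="indexer"):
        expr = ("filename:absname(element(2, "
                "application:get_env(ns_server,error_logger_mf_dir))).")
        log_dir = requests.post(f"{BASE}/diag/eval", data=expr, auth=AUTH).text.strip().strip('"')
        result = subprocess.run(f'zgrep "panic" "{log_dir}"/{component}.log* | wc -l',
                                shell=True, capture_output=True, text=True)
        return int(result.stdout.strip() or 0)

    assert count_panics("indexer") == 0
    assert count_panics("projector") == 0
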
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [tuq_helper.drop_primary_index] CHECK FOR PRIMARY INDEXES
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [tuq_helper.drop_primary_index] DROP PRIMARY INDEX ON default USING GSI
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.895671ms
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] RUN QUERY DROP PRIMARY INDEX ON default USING GSI
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [on_prem_rest_client.query_tool] query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 32.351334ms
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 4.483357010726674, 'mem_free': 13823131648, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:08 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 suite_setUp ==============
2022-09-02 09:00:08 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 127.0.0.1
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 09:00:09 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] b'"User was not found."'
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 suite_setUp ==============
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 09:00:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 149.599s

OK
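
Teardown for suite_setUp above ends by confirming that ns_server is still responsive on every node before the runner moves on to the first real test. The endpoint used by is_ns_server_running is not shown in the log; the sketch below assumes a GET on /pools with a short timeout as the health probe:

    # Sketch of the per-node "is ns_server running" check during teardown.
    import requests

    AUTH = ("Administrator", "asdasd")

    def wait_for_ns_servers(ports=(9000, 9001, 9002, 9003), timeout_s=10):
        for port in ports:
            resp = requests.get(f"http://127.0.0.1:{port}/pools", auth=AUTH, timeout=timeout_s)
            assert resp.status_code == 200, f"ns_server @ 127.0.0.1:{port} is not responding"

    wait_for_ns_servers()
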
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Cluster instance shutdown with force
-->result: 
2022-09-02 09:00:09 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:09 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.2.0-1948-rel-EE-enterprise
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [basetestcase.setUp] ==============  basetestcase setup was started for test #1 test_multi_create_query_explain_drop_index==============
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:10 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 29.05773994775457, 'mem_free': 13804396544, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:12 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 09:00:15 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [basetestcase.tearDown] b'"User was not found."'
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 09:00:15 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2022-09-02 09:00:16 | INFO | MainProcess | test_thread | [basetestcase.setUp] initializing cluster
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '152', 'memoryTotal': 15466930176, 'memoryFree': 13804396544, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [task.execute] quota for index service will be 256 MB
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [task.execute] set index quota to node 127.0.0.1 
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7650
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
2022-09-02 09:00:17 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] This node is already provisioned with services, we do not consider this as failure for test case
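
On this second setUp the node is already provisioned, so /node/controller/setupServices returns 400 with "cannot change node services after cluster is provisioned", and the harness explicitly treats that as success rather than a test failure. A sketch of that tolerance (helper name is an assumption):

    # Sketch of init_node_services with the already-provisioned 400 treated as success.
    import requests

    AUTH = ("Administrator", "asdasd")

    def setup_services(host, port, services):
        resp = requests.post(f"http://{host}:{port}/node/controller/setupServices",
                             data={"hostname": f"{host}:{port}", "user": AUTH[0],
                                   "password": AUTH[1], "services": ",".join(services)},
                             auth=AUTH)
        if resp.status_code == 400 and b"cannot change node services" in resp.content:
            return True        # node already provisioned: not a failure for the test
        resp.raise_for_status()
        return True

    setup_services("127.0.0.1", 9000, ["kv", "index", "n1ql"])
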
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9000
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '152', 'memoryTotal': 15466930176, 'memoryFree': 13803630592, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9001
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:17 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '149', 'memoryTotal': 15466930176, 'memoryFree': 13803978752, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9002
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:18 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [task.execute]  {'uptime': '149', 'memoryTotal': 15466930176, 'memoryFree': 13805346816, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=7906
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,asdasd,9003
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
2022-09-02 09:00:19 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
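Each kv-only node (9002 and 9003 above) is initialised with the same REST sequence: set the node memory quota, configure credentials via settings/web, enable non-local /diag/eval through a curl call issued over SSH, confirm the compat version, and set the GSI storage mode to plasma. A minimal sketch of that sequence using the plain requests library is below; the helper name and hard-coded credentials are illustrative assumptions lifted from the log, not testrunner code.

    # Illustrative reproduction of the per-node init REST calls logged above.
    # Host, ports, and credentials come from the log; the helper itself is an assumption.
    import requests

    def init_node(port, host="127.0.0.1", user="Administrator", password="asdasd"):
        base = f"http://{host}:{port}"
        auth = (user, password)
        # pools/default params : memoryQuota=7906
        requests.post(f"{base}/pools/default", data={"memoryQuota": 7906}, auth=auth)
        # settings/web params : port=<port>&username=...&password=...
        requests.post(f"{base}/settings/web",
                      data={"port": port, "username": user, "password": password}, auth=auth)
        # the log enables non-local /diag/eval via curl over SSH; the same body is posted here
        requests.post(f"{base}/diag/eval",
                      data="ns_config:set(allow_nonlocal_eval, true).", auth=auth)
        # settings/indexes params : storageMode=plasma
        requests.post(f"{base}/settings/indexes", data={"storageMode": "plasma"}, auth=auth)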
2022-09-02 09:00:19 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
2022-09-02 09:00:20 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [basetestcase.setUp] done initializing cluster
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [remote_util.enable_diag_eval_on_non_local_hosts] b'ok'
2022-09-02 09:00:25 | INFO | MainProcess | test_thread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 127.0.0.1
2022-09-02 09:00:25 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
2022-09-02 09:00:25 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.create_bucket] 0.06 seconds to create bucket default
2022-09-02 09:00:25 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2022-09-02 09:01:18 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 09:01:18 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 09:01:19 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'default' was created with per node RAM quota: 7650
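Bucket creation is a single form-encoded POST of the parameters shown to /pools/default/buckets, followed by polling until memcached accepts set operations (roughly 53 seconds in this run, 09:00:25 to 09:01:18). A sketch of the same request, reusing the host and credentials from the log:

    # Illustrative reproduction of the create_bucket request logged at 09:00:25;
    # all parameter values are copied from the log line above.
    import requests

    params = {
        "name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
        "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
        "flushEnabled": 1, "evictionPolicy": "valueOnly",
        "compressionMode": "passive", "storageBackend": "couchstore",
    }
    resp = requests.post("http://127.0.0.1:9000/pools/default/buckets",
                         data=params, auth=("Administrator", "asdasd"))
    resp.raise_for_status()  # a 2xx only means accepted; the bucket still warms up asynchronously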
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [basetestcase.setUp] ==============  basetestcase setup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:01:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:01:20 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.127324606149722, 'mem_free': 13782630400, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [basetestcase.setUp] Time to execute basesetup : 73.46278071403503
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] Initial status of 127.0.0.1 cluster is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] current status of 127.0.0.1  is healthy
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings_internal] {'indexer.settings.storage_mode': 'plasma'} set
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [newtuq.setUp] Allowing the indexer to complete restart after setting the internal settings
2022-09-02 09:01:23 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs.  ...
2022-09-02 09:01:28 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings_internal] {'indexer.api.enableTestServer': True} set
2022-09-02 09:01:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:01:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:01:28 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:01:29 | INFO | MainProcess | test_thread | [basetestcase.load] create 2016.0 to default documents...
2022-09-02 09:01:30 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2022-09-02 09:01:31 | INFO | MainProcess | test_thread | [basetestcase.load] LOAD IS FINISHED
2022-09-02 09:01:31 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:01:31 | INFO | MainProcess | test_thread | [newtuq.setUp] ip:127.0.0.1 port:9000 ssh_username:Administrator
2022-09-02 09:01:31 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs.  ...
2022-09-02 09:02:01 | INFO | MainProcess | test_thread | [tuq_helper.create_primary_index] Check if index existed in default on server 127.0.0.1
2022-09-02 09:02:01 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 09:02:01 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 09:02:01 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 73.910273ms
2022-09-02 09:02:01 | ERROR | MainProcess | test_thread | [tuq_helper._is_index_in_list] Fail to get index list.  List output: {'requestID': '15d8325f-68d6-493c-a257-994cd11a902e', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '73.910273ms', 'executionTime': '73.84651ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
2022-09-02 09:02:01 | INFO | MainProcess | test_thread | [tuq_helper.create_primary_index] Create primary index
2022-09-02 09:02:01 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY CREATE PRIMARY INDEX ON default 
2022-09-02 09:02:02 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=CREATE+PRIMARY+INDEX+ON+default+
2022-09-02 09:02:02 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 730.632507ms
2022-09-02 09:02:02 | INFO | MainProcess | test_thread | [tuq_helper.create_primary_index] Check if index is online
2022-09-02 09:02:02 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 09:02:02 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 09:02:02 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 6.185363ms
2022-09-02 09:02:03 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:02:03 | INFO | MainProcess | test_thread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
2022-09-02 09:02:03 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings] {'queryport.client.waitForScheduledIndex': False} set
2022-09-02 09:02:03 | INFO | MainProcess | test_thread | [on_prem_rest_client.urllib_request] Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
2022-09-02 09:02:03 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_index_settings] {'indexer.allowScheduleCreateRebal': True} set
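The two indexer settings above are not ns_server calls: they are JSON bodies posted straight to the indexer admin port (9102 in this cluster_run layout). A sketch matching the two requests in the log, one POST per setting:

    # Illustrative reproduction of the indexer settings requests logged above.
    import requests

    AUTH = ("Administrator", "asdasd")
    for setting in ({"queryport.client.waitForScheduledIndex": False},
                    {"indexer.allowScheduleCreateRebal": True}):
        # one POST per setting, mirroring the two requests shown in the log
        requests.post("http://127.0.0.1:9102/settings", json=setting, auth=AUTH)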
2022-09-02 09:02:03 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:02:04 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY CREATE INDEX `employee2099f365094f40f8a69d36087c436a88job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
2022-09-02 09:02:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=CREATE+INDEX+%60employee2099f365094f40f8a69d36087c436a88job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
2022-09-02 09:02:04 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 60.66946ms
2022-09-02 09:02:04 | INFO | MainProcess | test_thread | [base_gsi.async_build_index] BUILD INDEX on default(employee2099f365094f40f8a69d36087c436a88job_title) USING GSI
2022-09-02 09:02:05 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY BUILD INDEX on default(employee2099f365094f40f8a69d36087c436a88job_title) USING GSI
2022-09-02 09:02:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=BUILD+INDEX+on+default%28employee2099f365094f40f8a69d36087c436a88job_title%29+USING+GSI
2022-09-02 09:02:05 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 21.720585ms
2022-09-02 09:02:06 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee2099f365094f40f8a69d36087c436a88job_title'
2022-09-02 09:02:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2099f365094f40f8a69d36087c436a88job_title%27
2022-09-02 09:02:06 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 8.076914ms
2022-09-02 09:02:07 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee2099f365094f40f8a69d36087c436a88job_title'
2022-09-02 09:02:07 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2099f365094f40f8a69d36087c436a88job_title%27
2022-09-02 09:02:07 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 10.264173ms
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 2.565166ms
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [task.execute] {'requestID': '4534cf83-7038-4cfc-9e16-874e6f598f23', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee2099f365094f40f8a69d36087c436a88job_title', 'index_id': '678a8f52ab572348', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.565166ms', 'executionTime': '2.505114ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 09:02:08 | INFO | MainProcess | Cluster_Thread | [task.check]  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
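The EXPLAIN verification passes because the plan's scan operator is an IndexScan3 over the freshly built secondary index rather than the primary index. A minimal sketch of such a check, walking the response dict shown in the log (the traversal is an assumed illustration, not the testrunner's actual verifier):

    # Illustrative plan check over the EXPLAIN response recorded above.
    def uses_index(explain_response, index_name):
        plan = explain_response["results"][0]["plan"]
        return any(op.get("#operator") == "IndexScan3" and op.get("index") == index_name
                   for op in plan.get("~children", []))

    # uses_index(result, "employee2099f365094f40f8a69d36087c436a88job_title") -> True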
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [base_gsi.async_query_using_index] Query : SELECT * FROM default WHERE  job_title = "Sales" 
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] FROM clause ===== is default
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] WHERE clause ===== is   doc["job_title"] == "Sales" 
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] UNNEST clause ===== is None
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [tuq_generators.generate_expected_result] SELECT clause ===== is {"*" : doc,}
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [tuq_generators._filter_full_set] -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
2022-09-02 09:02:08 | INFO | MainProcess | test_thread | [tuq_generators._filter_full_set] -->where_clause=  doc["job_title"] == "Sales" 
2022-09-02 09:02:09 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 09:02:09 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
2022-09-02 09:02:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
2022-09-02 09:02:09 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 107.700291ms
2022-09-02 09:02:09 | INFO | MainProcess | Cluster_Thread | [tuq_helper._verify_results]  Analyzing Actual Result
2022-09-02 09:02:09 | INFO | MainProcess | Cluster_Thread | [tuq_helper._verify_results]  Analyzing Expected Result
2022-09-02 09:02:10 | INFO | MainProcess | Cluster_Thread | [task.execute]  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 09:02:10 | INFO | MainProcess | Cluster_Thread | [task.check]  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee2099f365094f40f8a69d36087c436a88job_title'
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2099f365094f40f8a69d36087c436a88job_title%27
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.564603ms
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY DROP INDEX employee2099f365094f40f8a69d36087c436a88job_title ON default USING GSI
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=DROP+INDEX+employee2099f365094f40f8a69d36087c436a88job_title+ON+default+USING+GSI
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 54.617314ms
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = 'employee2099f365094f40f8a69d36087c436a88job_title'
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2099f365094f40f8a69d36087c436a88job_title%27
2022-09-02 09:02:11 | INFO | MainProcess | Cluster_Thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 5.921663ms
2022-09-02 09:02:11 | ERROR | MainProcess | Cluster_Thread | [tuq_helper._is_index_in_list] Fail to get index list.  List output: {'requestID': 'd6d51683-b59b-4771-90b2-d0453e9101eb', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.921663ms', 'executionTime': '5.861142ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
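The ERROR at 09:02:11 is the expected outcome of the drop: after DROP INDEX the system:indexes lookup legitimately returns zero rows, and the helper logs the empty list at ERROR level before the test treats it as a successful deletion. A small sketch of that post-drop check, assuming the row shape a SELECT * FROM system:indexes query returns:

    # Illustrative check that the dropped index no longer appears in system:indexes.
    def index_absent(query_response, index_name):
        rows = query_response.get("results", [])
        # SELECT * wraps each row under the keyspace name, i.e. {"indexes": {...}}
        return not any(row.get("indexes", {}).get("name") == index_name for row in rows)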
2022-09-02 09:02:11 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:02:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:02:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [on_prem_rest_client.diag_eval] /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with Administrator
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [tuq_helper.drop_primary_index] CHECK FOR PRIMARY INDEXES
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [tuq_helper.drop_primary_index] DROP PRIMARY INDEX ON default USING GSI
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY SELECT * FROM system:indexes where name = '#primary'
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 8.213038ms
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] RUN QUERY DROP PRIMARY INDEX ON default USING GSI
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [on_prem_rest_client.query_tool] query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [tuq_helper.run_cbq_query] TOTAL ELAPSED TIME: 57.643638ms
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 19.67570889852491, 'mem_free': 13686636544, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:02:12 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:02:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:02:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:02:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:02:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:02:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:02:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:02:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:02:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2022-09-02 09:02:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2022-09-02 09:02:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2022-09-02 09:02:17 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was started for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 09:02:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 127.0.0.1
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [on_prem_rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 127.0.0.1, nothing to delete
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [basetestcase.tearDown] Removing user 'clientuser'...
2022-09-02 09:02:19 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [basetestcase.tearDown] b'"User was not found."'
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9001
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9001 is running
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9002
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9002 is running
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9003
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9003 is running
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [basetestcase.tearDown] ==============  basetestcase cleanup was finished for test #1 test_multi_create_query_explain_drop_index ==============
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2022-09-02 09:02:19 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_1
ok

----------------------------------------------------------------------
Ran 1 test in 129.956s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_2

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,delete_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'delete_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 2, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_2'}
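The params dict above is assembled from three sources visible in this log: the comma-separated key=value pairs packed into the -t test spec, the -p options (makefile, gsi_type), and metadata derived from the ini file. A rough sketch of the -t parsing, assuming simple comma/equals splitting (the real parser also handles escaping and type coercion, omitted here):

    # Rough, assumed sketch of how a "-t path,key=value,..." spec maps to test params.
    def parse_test_spec(spec):
        test_path, _, rest = spec.partition(",")
        params = dict(kv.split("=", 1) for kv in rest.split(",") if kv)
        return test_path, params

    # Abbreviated from the command line above:
    spec = ("gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,"
            "groups=simple:equals:no_orderby_groupby:range,dataset=default,"
            "scan_consistency=request_plus,GROUP=gsi")
    print(parse_test_spec(spec))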
[2022-09-02 09:02:19,844] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:02:19,845] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:02:20,120] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:02:20,154] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:02:20,242] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:02:20,242] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #2 test_multi_create_query_explain_drop_index==============
[2022-09-02 09:02:20,243] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:02:20,874] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:02:20,904] - [task:164] INFO -  {'uptime': '277', 'memoryTotal': 15466930176, 'memoryFree': 13718601728, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:02:20,933] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:02:20,933] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:02:20,934] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:02:20,980] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:02:21,014] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:02:21,015] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:02:21,045] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:02:21,046] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:02:21,046] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:02:21,046] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:02:21,098] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:02:21,100] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:02:21,101] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:02:21,373] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:02:21,375] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:02:21,445] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:02:21,446] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:02:21,477] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:21,503] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:21,532] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:02:21,663] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:02:21,692] - [task:164] INFO -  {'uptime': '273', 'memoryTotal': 15466930176, 'memoryFree': 13684301824, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:02:21,719] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:02:21,749] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:02:21,749] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:02:21,802] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:02:21,805] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:02:21,805] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:02:22,074] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:02:22,075] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:02:22,145] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:02:22,146] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:02:22,179] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:22,210] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:22,244] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:02:22,368] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:02:22,397] - [task:164] INFO -  {'uptime': '275', 'memoryTotal': 15466930176, 'memoryFree': 13672886272, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:02:22,426] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:02:22,460] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:02:22,460] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:02:22,515] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:02:22,520] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:02:22,520] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:02:22,807] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:02:22,808] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:02:22,884] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:02:22,885] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:02:22,916] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:22,943] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:22,974] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:02:23,098] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:02:23,127] - [task:164] INFO -  {'uptime': '274', 'memoryTotal': 15466930176, 'memoryFree': 13718814720, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:02:23,155] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:02:23,185] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:02:23,186] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:02:23,245] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:02:23,248] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:02:23,248] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:02:23,532] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:02:23,533] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:02:23,607] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:02:23,608] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:02:23,640] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:23,668] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:02:23,699] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:02:23,796] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:02:24,217] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:02:29,222] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:02:29,314] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:02:29,318] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:02:29,319] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:02:29,595] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:02:29,596] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:02:29,665] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:02:29,667] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:02:29,667] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:02:29,911] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:02:29,968] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:02:29,969] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:03:18,448] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:03:18,763] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:03:19,079] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:03:19,082] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #2 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:03:19,138] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:03:19,139] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:03:19,412] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:03:19,417] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:03:19,417] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:03:19,659] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:03:19,664] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:03:19,664] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:03:19,908] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:03:19,913] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:03:19,913] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:03:20,254] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:03:23,974] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:03:23,975] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.5915809728487, 'mem_free': 13700272128, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:03:23,975] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:03:23,975] - [basetestcase:467] INFO - Time to execute basesetup : 64.1336452960968
[2022-09-02 09:03:24,029] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:03:24,029] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:03:24,089] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:03:24,090] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:03:24,151] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:03:24,151] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:03:24,213] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:03:24,214] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:03:24,267] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:03:24,329] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:03:24,329] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:03:24,329] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:03:29,341] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:03:29,345] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:03:29,345] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:03:29,629] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:03:30,587] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:03:30,760] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:03:33,379] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:03:33,400] - [newtuq:85] INFO - {'delete': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2022-09-02 09:03:34,058] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:03:34,058] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:03:34,058] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:04:04,086] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:04:04,115] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:04:04,142] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:04:04,210] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.299844ms
[2022-09-02 09:04:04,210] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '21d6efec-102a-4957-94ff-8a360e77f994', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '65.299844ms', 'executionTime': '65.240168ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:04:04,210] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:04:04,238] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:04:04,264] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:04:05,006] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 739.230444ms
[2022-09-02 09:04:05,006] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:04:05,083] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:04:05,117] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:04:05,125] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.096289ms
[2022-09-02 09:04:05,327] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:04:05,370] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:04:05,390] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:04:05,390] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:04:05,409] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:04:05,475] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:04:06,284] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee1c21385129e64f20ba7a3462d13c7c6cjob_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:04:06,312] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee1c21385129e64f20ba7a3462d13c7c6cjob_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:04:06,363] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 49.744209ms
[2022-09-02 09:04:06,364] - [base_gsi:282] INFO - BUILD INDEX on default(employee1c21385129e64f20ba7a3462d13c7c6cjob_title) USING GSI
[2022-09-02 09:04:07,395] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee1c21385129e64f20ba7a3462d13c7c6cjob_title) USING GSI
[2022-09-02 09:04:07,422] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee1c21385129e64f20ba7a3462d13c7c6cjob_title%29+USING+GSI
[2022-09-02 09:04:07,448] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 24.348869ms
[2022-09-02 09:04:08,479] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1c21385129e64f20ba7a3462d13c7c6cjob_title'
[2022-09-02 09:04:08,505] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1c21385129e64f20ba7a3462d13c7c6cjob_title%27
[2022-09-02 09:04:08,514] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.189539ms
[2022-09-02 09:04:09,543] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1c21385129e64f20ba7a3462d13c7c6cjob_title'
[2022-09-02 09:04:09,571] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1c21385129e64f20ba7a3462d13c7c6cjob_title%27
[2022-09-02 09:04:09,580] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.362113ms
[2022-09-02 09:04:10,035] - [basetestcase:2772] INFO - delete 0.0 to default documents...
[2022-09-02 09:04:10,211] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:04:11,024] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:04:11,584] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:04:11,617] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:04:11,644] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 09:04:11,648] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.374855ms
[2022-09-02 09:04:11,648] - [task:3245] INFO - {'requestID': 'bb08ae08-eaeb-4d38-8e00-6ff1590bc27d', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee1c21385129e64f20ba7a3462d13c7c6cjob_title', 'index_id': 'd176d3dea357fb29', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.374855ms', 'executionTime': '2.251619ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 09:04:11,648] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:04:11,649] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:04:11,649] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:04:11,649] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:04:11,650] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 09:04:11,650] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:04:11,650] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:04:11,650] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:04:11,650] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
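The EXPLAIN step above verifies that the plan actually scans the index that was just built (the IndexScan3 operator in the printed plan), while the tuq_generators lines show the framework turning the same WHERE clause into a Python predicate for computing the expected rows. A sketch of one way to assert the plan shape, matching the JSON structure printed in the log (endpoint and credentials assumed as before):

    import requests

    def run(statement):
        return requests.post("http://127.0.0.1:8093/query/service",
                             data={"statement": statement},
                             auth=("Administrator", "asdasd")).json()

    def index_scans(node):
        """Yield the index name of every IndexScan-style operator in a plan tree."""
        if isinstance(node, dict):
            if str(node.get("#operator", "")).startswith("IndexScan"):
                yield node["index"]
            for value in node.values():
                yield from index_scans(value)
        elif isinstance(node, list):
            for item in node:
                yield from index_scans(item)

    plan = run('EXPLAIN SELECT * FROM default WHERE job_title = "Sales"')["results"][0]["plan"]
    assert "employee1c21385129e64f20ba7a3462d13c7c6cjob_title" in set(index_scans(plan))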
[2022-09-02 09:04:12,650] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:04:12,680] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:04:12,706] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 09:04:12,846] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 130.904737ms
[2022-09-02 09:04:12,846] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:04:12,847] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:04:13,657] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:04:13,657] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
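Note that the verification query is submitted with scan_consistency=request_plus, which makes the indexer catch up with all mutations current at request time before scanning, so the actual result can be compared against the freshly loaded documents. A sketch of passing that parameter (assumed endpoint as before):

    import requests

    resp = requests.post(
        "http://127.0.0.1:8093/query/service",
        data={"statement": 'SELECT * FROM default WHERE job_title = "Sales"',
              "scan_consistency": "request_plus"},
        auth=("Administrator", "asdasd"),
    )
    rows = resp.json()["results"]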
[2022-09-02 09:04:14,691] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1c21385129e64f20ba7a3462d13c7c6cjob_title'
[2022-09-02 09:04:14,717] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1c21385129e64f20ba7a3462d13c7c6cjob_title%27
[2022-09-02 09:04:14,723] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.350145ms
[2022-09-02 09:04:14,749] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee1c21385129e64f20ba7a3462d13c7c6cjob_title ON default USING GSI
[2022-09-02 09:04:14,774] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee1c21385129e64f20ba7a3462d13c7c6cjob_title+ON+default+USING+GSI
[2022-09-02 09:04:14,814] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 37.68827ms
[2022-09-02 09:04:14,849] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee1c21385129e64f20ba7a3462d13c7c6cjob_title'
[2022-09-02 09:04:14,876] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee1c21385129e64f20ba7a3462d13c7c6cjob_title%27
[2022-09-02 09:04:14,883] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.689628ms
[2022-09-02 09:04:14,883] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '19207f24-e891-4c8a-bd90-f526daa4cc51', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.689628ms', 'executionTime': '5.624633ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:04:14,995] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:04:14,998] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:14,998] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:15,373] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:15,431] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:04:15,431] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:04:15,495] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:04:15,554] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:04:15,557] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:15,557] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:15,933] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:15,997] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:04:15,997] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:04:16,063] - [remote_util:3399] INFO - command executed successfully with Administrator
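The two zgrep commands above are a crash check: the node's log directory is resolved through /diag/eval and the rotated indexer and projector logs are grepped for "panic", expecting a count of zero. The testrunner runs this over SSH; a local subprocess sketch of the same check, assuming the log directory reported above:

    import subprocess

    LOG_DIR = "/opt/build/ns_server/logs/n_0"   # value returned by /diag/eval above

    for service in ("indexer", "projector"):
        cmd = f'zgrep "panic" "{LOG_DIR}"/{service}.log* | wc -l'
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
        assert int(out.strip() or 0) == 0, f"{service} logged panic lines"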
[2022-09-02 09:04:16,120] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:04:16,120] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:04:16,120] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:04:16,146] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:04:16,172] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:04:16,179] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.752393ms
[2022-09-02 09:04:16,208] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:04:16,235] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:04:16,291] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 53.777846ms
[2022-09-02 09:04:16,360] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:04:16,360] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 15.35194559459018, 'mem_free': 13592457216, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:04:16,360] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:04:16,365] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:16,365] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:16,741] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:16,746] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:16,747] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:17,204] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:17,212] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:17,212] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:17,814] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:17,824] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:17,825] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:18,465] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:22,541] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #2 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:04:22,683] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:04:23,076] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:04:23,104] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:04:23,104] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:04:23,160] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:04:23,213] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:04:23,266] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:04:23,267] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:04:23,351] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:04:23,352] - [basetestcase:742] INFO - b'"User was not found."'
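The 404 above is expected: 'clientuser' was never created in this run, so the cleanup treats "User was not found." as already-removed rather than as a teardown failure. A sketch of that tolerant delete (URL and credentials as logged):

    import requests

    resp = requests.delete(
        "http://127.0.0.1:9000/settings/rbac/users/local/clientuser",
        auth=("Administrator", "asdasd"))
    if resp.status_code != 404:        # 404 == nothing to remove, not an error
        resp.raise_for_status()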
[2022-09-02 09:04:23,377] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:04:23,503] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:04:23,503] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:04:23,530] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:04:23,555] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:04:23,556] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:04:23,583] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:04:23,608] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:04:23,608] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:04:23,635] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:04:23,660] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:04:23,661] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:04:23,687] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:04:23,687] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #2 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:04:23,687] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:04:23,688] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_2
ok

----------------------------------------------------------------------
Ran 1 test in 123.898s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_3

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,update_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'update_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 3, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_3'}
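The "Test Input params" dict is the merge of the key=value pairs that follow the test path in -t, the -p list, and ini-derived fields such as num_nodes and logs_folder. A rough sketch of that merge, offered as an assumption about how the arguments map to the dict rather than as testrunner's actual code:

    def parse_kv(spec):
        """Turn 'a=1,b=2' into {'a': '1', 'b': '2'}; values stay strings."""
        out = {}
        for pair in spec.split(","):
            if "=" in pair:
                key, value = pair.split("=", 1)
                out[key] = value
        return out

    t_arg = ("gsi.indexscans_gsi.SecondaryIndexingScanTests."
             "test_multi_create_query_explain_drop_index,"
             "groups=simple:equals:no_orderby_groupby:range,dataset=default,"
             "doc-per-day=1,scan_consistency=request_plus,GROUP=gsi")
    p_arg = "makefile=True,gsi_type=plasma"

    params = parse_kv(t_arg.split(",", 1)[1])   # drop the test path, keep its params
    params.update(parse_kv(p_arg))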
[2022-09-02 09:04:23,786] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:23,786] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:24,104] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:24,137] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:04:24,218] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:04:24,218] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #3 test_multi_create_query_explain_drop_index==============
[2022-09-02 09:04:24,219] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:04:24,816] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:04:24,847] - [task:164] INFO -  {'uptime': '401', 'memoryTotal': 15466930176, 'memoryFree': 13647994880, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:04:24,875] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:04:24,875] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:04:24,875] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:04:24,907] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:04:24,946] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:04:24,946] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:04:24,975] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:04:24,976] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
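The 400 above is tolerated deliberately: this node was already provisioned by the earlier tests in the suite, so /node/controller/setupServices refuses to change its service list and the client records that as non-fatal. A sketch of the handling, with the URL, form fields and error text taken from the log and the control flow an assumption matching the message the client prints:

    import requests

    resp = requests.post(
        "http://127.0.0.1:9000/node/controller/setupServices",
        data={"hostname": "127.0.0.1:9000", "user": "Administrator",
              "password": "asdasd", "services": "kv,index,n1ql"},
        auth=("Administrator", "asdasd"))
    if resp.status_code == 400 and b"cannot change node services" in resp.content:
        pass   # node already runs these services; not a failure for the test
    else:
        resp.raise_for_status()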
[2022-09-02 09:04:24,976] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:04:24,977] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:04:25,027] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:04:25,030] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:25,030] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:25,373] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:25,374] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:04:25,452] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:04:25,453] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:04:25,482] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:25,509] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:25,539] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
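The curl above is executed on the node itself (hence the SSH hop) to set allow_nonlocal_eval; once that is on, /diag/eval accepts POSTs from the test client, which it then uses for the compat-version probes before setting the GSI storage mode to plasma. A sketch of the same bootstrap, where the subprocess/requests split is an assumption rather than testrunner code:

    import subprocess
    import requests

    # Must run locally on the node: enables non-local access to /diag/eval.
    subprocess.run(
        ["curl", "--silent", "--show-error",
         "http://Administrator:asdasd@localhost:9000/diag/eval",
         "-X", "POST", "-d", "ns_config:set(allow_nonlocal_eval, true)."],
        check=True)

    # Remote diag/eval calls now work, e.g. the compat-version probe seen above.
    compat = requests.post("http://127.0.0.1:9000/diag/eval",
                           data="cluster_compat_mode:get_compat_version().",
                           auth=("Administrator", "asdasd"))
    print(compat.text)   # prints [7,2] on this build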
[2022-09-02 09:04:25,667] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:04:25,698] - [task:164] INFO -  {'uptime': '398', 'memoryTotal': 15466930176, 'memoryFree': 13593677824, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:04:25,728] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:04:25,759] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:04:25,759] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:04:25,814] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:04:25,818] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:25,818] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:26,151] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:26,152] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:04:26,238] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:04:26,239] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:04:26,269] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:26,296] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:26,323] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:04:26,441] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:04:26,470] - [task:164] INFO -  {'uptime': '395', 'memoryTotal': 15466930176, 'memoryFree': 13595971584, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:04:26,497] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:04:26,526] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:04:26,527] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:04:26,579] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:04:26,583] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:26,583] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:26,884] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:26,886] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:04:26,960] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:04:26,961] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:04:26,989] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:27,015] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:27,044] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:04:27,160] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:04:27,189] - [task:164] INFO -  {'uptime': '395', 'memoryTotal': 15466930176, 'memoryFree': 13593755648, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:04:27,216] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:04:27,248] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:04:27,248] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:04:27,299] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:04:27,302] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:27,302] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:27,609] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:27,611] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:04:27,690] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:04:27,691] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:04:27,720] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:27,749] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:04:27,778] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:04:27,871] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:04:28,297] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:04:33,302] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
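The built-in test user is created through the RBAC REST API; a sketch assuming the standard local-user endpoint (PUT /settings/rbac/users/local/<name>) with the role shown above. The password value here is an assumption, since the log does not print it:

    import requests

    requests.put(
        "http://127.0.0.1:9000/settings/rbac/users/local/cbadminbucket",
        data={"password": "password", "roles": "admin"},   # password is assumed
        auth=("Administrator", "asdasd"),
    ).raise_for_status()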
[2022-09-02 09:04:33,395] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:04:33,400] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:04:33,400] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:04:33,740] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:04:33,741] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:04:33,817] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:04:33,818] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:04:33,818] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:04:34,991] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:04:35,047] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:04:35,047] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:05:18,652] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:05:18,997] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:05:19,292] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
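The bucket is created with the exact parameters in the POST above, after which the framework waits for memcached on port 12000 to accept set operations before loading documents. A sketch of the create call plus a readiness poll that stands in for that wait (the poll criterion is an assumption):

    import time
    import requests

    AUTH = ("Administrator", "asdasd")

    requests.post(
        "http://127.0.0.1:9000/pools/default/buckets",
        data={"name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
              "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
              "flushEnabled": 1, "evictionPolicy": "valueOnly",
              "compressionMode": "passive", "storageBackend": "couchstore"},
        auth=AUTH,
    ).raise_for_status()

    # Wait until ns_server reports the bucket's nodes healthy before loading docs.
    while True:
        bucket = requests.get("http://127.0.0.1:9000/pools/default/buckets/default",
                              auth=AUTH).json()
        if bucket.get("nodes") and all(n.get("status") == "healthy" for n in bucket["nodes"]):
            break
        time.sleep(2)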
[2022-09-02 09:05:19,295] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #3 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:05:19,352] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:05:19,353] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:05:19,660] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:05:19,665] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:05:19,665] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:05:19,971] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:05:20,046] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:05:20,046] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:05:20,426] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:05:20,433] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:05:20,433] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:05:21,020] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:05:25,047] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:05:25,048] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.540043281277202, 'mem_free': 13657583616, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:05:25,048] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:05:25,048] - [basetestcase:467] INFO - Time to execute basesetup : 61.26459336280823
[2022-09-02 09:05:25,100] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:05:25,100] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:05:25,154] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:05:25,155] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:05:25,208] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:05:25,208] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:05:25,260] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:05:25,260] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:05:25,312] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:05:25,372] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:05:25,373] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:05:25,373] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:05:30,382] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:05:30,385] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:05:30,385] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:05:30,733] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:05:31,721] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:05:31,899] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:05:34,544] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:05:34,561] - [newtuq:85] INFO - {'update': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2022-09-02 09:05:35,425] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:05:35,426] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:05:35,426] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:06:05,443] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:06:05,473] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:06:05,500] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:06:05,567] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.398759ms
[2022-09-02 09:06:05,567] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '05d451a5-c255-40b1-9909-77c6bb950376', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '65.398759ms', 'executionTime': '65.339503ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:06:05,568] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:06:05,597] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:06:05,624] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:06:06,313] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 687.809346ms
[2022-09-02 09:06:06,314] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:06:06,381] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:06:06,419] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:06:06,427] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.180062ms
[2022-09-02 09:06:06,657] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:06:06,695] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:06:06,711] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:06:06,711] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:06:06,725] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:06:06,794] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:06:07,608] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee2b065614e503451ca888f4dba96708a1job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:06:07,635] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee2b065614e503451ca888f4dba96708a1job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:06:07,693] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 55.507304ms
[2022-09-02 09:06:07,693] - [base_gsi:282] INFO - BUILD INDEX on default(employee2b065614e503451ca888f4dba96708a1job_title) USING GSI
[2022-09-02 09:06:08,723] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee2b065614e503451ca888f4dba96708a1job_title) USING GSI
[2022-09-02 09:06:08,753] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee2b065614e503451ca888f4dba96708a1job_title%29+USING+GSI
[2022-09-02 09:06:08,784] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 28.10309ms
[2022-09-02 09:06:09,816] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2b065614e503451ca888f4dba96708a1job_title'
[2022-09-02 09:06:09,844] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2b065614e503451ca888f4dba96708a1job_title%27
[2022-09-02 09:06:09,852] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.503645ms
[2022-09-02 09:06:10,886] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2b065614e503451ca888f4dba96708a1job_title'
[2022-09-02 09:06:10,914] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2b065614e503451ca888f4dba96708a1job_title%27
[2022-09-02 09:06:10,922] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.174491ms
[2022-09-02 09:06:11,375] - [basetestcase:2772] INFO - update 0.0 to default documents...
[2022-09-02 09:06:11,552] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:06:12,171] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:06:12,925] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:06:12,957] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:06:12,985] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 09:06:12,989] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.533548ms
[2022-09-02 09:06:12,989] - [task:3245] INFO - {'requestID': '27f3a6fa-949c-462e-b2fd-4b80a72a4668', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee2b065614e503451ca888f4dba96708a1job_title', 'index_id': '7710ee51b0590951', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.533548ms', 'executionTime': '2.47263ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 09:06:12,989] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:06:12,990] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:06:12,990] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:06:12,991] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:06:12,991] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 09:06:12,991] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:06:12,991] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:06:12,992] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:06:12,992] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:06:13,991] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:06:14,022] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:06:14,050] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 09:06:14,169] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 107.210676ms
[2022-09-02 09:06:14,170] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:06:14,170] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:06:14,983] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:06:14,984] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:06:16,015] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2b065614e503451ca888f4dba96708a1job_title'
[2022-09-02 09:06:16,044] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2b065614e503451ca888f4dba96708a1job_title%27
[2022-09-02 09:06:16,053] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.993666ms
[2022-09-02 09:06:16,081] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee2b065614e503451ca888f4dba96708a1job_title ON default USING GSI
[2022-09-02 09:06:16,108] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee2b065614e503451ca888f4dba96708a1job_title+ON+default+USING+GSI
[2022-09-02 09:06:16,146] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 36.205781ms
[2022-09-02 09:06:16,182] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2b065614e503451ca888f4dba96708a1job_title'
[2022-09-02 09:06:16,210] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2b065614e503451ca888f4dba96708a1job_title%27
[2022-09-02 09:06:16,218] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.023828ms
[2022-09-02 09:06:16,218] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'f516b319-1b03-447a-a62f-ba75356668aa', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '6.023828ms', 'executionTime': '5.940814ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:06:16,333] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:06:16,337] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:16,338] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:16,771] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:16,829] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:06:16,829] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:06:16,901] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:06:16,965] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:06:16,970] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:16,970] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:17,413] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:17,472] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:06:17,473] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:06:17,546] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:06:17,604] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:06:17,605] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:06:17,605] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:06:17,633] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:06:17,661] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:06:17,672] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.810639ms
[2022-09-02 09:06:17,701] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:06:17,727] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:06:17,783] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 54.627996ms
[2022-09-02 09:06:17,855] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:06:17,855] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 13.84235060934148, 'mem_free': 13532876800, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:06:17,855] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:06:17,859] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:17,859] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:18,287] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:18,292] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:18,292] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:18,718] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:18,723] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:18,723] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:19,131] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:19,136] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:19,136] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:19,752] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:24,977] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #3 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:06:25,124] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:06:26,164] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:06:26,195] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:06:26,196] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:06:26,271] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:06:26,328] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:06:26,386] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:06:26,387] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:06:26,466] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:06:26,468] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:06:26,493] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:06:26,622] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:06:26,623] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:06:26,649] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:06:26,675] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:06:26,675] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:06:26,702] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:06:26,727] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:06:26,727] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:06:26,754] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:06:26,781] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:06:26,781] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:06:26,808] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:06:26,808] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #3 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:06:26,808] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:06:26,808] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 3 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_3
ok

----------------------------------------------------------------------
Ran 1 test in 123.078s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_4

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,expiry_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'expiry_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 4, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_4'}
[2022-09-02 09:06:26,912] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:26,912] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:27,311] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:27,342] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:06:27,422] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:06:27,422] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #4 test_multi_create_query_explain_drop_index==============
[2022-09-02 09:06:27,423] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:06:27,941] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:06:27,971] - [task:164] INFO -  {'uptime': '524', 'memoryTotal': 15466930176, 'memoryFree': 13509820416, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:06:27,999] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:06:27,999] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:06:27,999] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:06:28,042] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:06:28,076] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:06:28,077] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:06:28,105] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:06:28,106] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:06:28,107] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:06:28,107] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:06:28,160] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:06:28,164] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:28,164] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:28,589] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:28,591] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:06:28,685] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:06:28,686] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:06:28,718] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:28,747] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:28,778] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:06:28,914] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:06:28,942] - [task:164] INFO -  {'uptime': '524', 'memoryTotal': 15466930176, 'memoryFree': 13606961152, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:06:28,968] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:06:28,996] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:06:28,996] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:06:29,049] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:06:29,052] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:29,052] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:29,438] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:29,439] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:06:29,523] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:06:29,524] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:06:29,552] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:29,578] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:29,606] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:06:29,721] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:06:29,747] - [task:164] INFO -  {'uptime': '521', 'memoryTotal': 15466930176, 'memoryFree': 13606793216, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:06:29,772] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:06:29,800] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:06:29,801] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:06:29,852] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:06:29,857] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:29,858] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:30,249] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:30,251] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:06:30,339] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:06:30,340] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:06:30,369] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:30,396] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:30,425] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:06:30,543] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:06:30,569] - [task:164] INFO -  {'uptime': '520', 'memoryTotal': 15466930176, 'memoryFree': 13606277120, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:06:30,594] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:06:30,623] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:06:30,623] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:06:30,678] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:06:30,681] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:30,681] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:31,083] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:31,084] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:06:31,177] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:06:31,179] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:06:31,214] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:31,243] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:06:31,273] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
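For reference, the per-node initialization performed above (memory quotas, web credentials, GSI storage mode) reduces to a handful of admin REST calls against each node. A minimal sketch with the requests library, assuming the same cluster_run ports and Administrator/asdasd credentials seen in this log; it is illustrative only, not the harness code itself:

    import requests

    AUTH = ("Administrator", "asdasd")

    def init_node(host, port, index_quota=None, kv_quota=7906, storage_mode="plasma"):
        base = f"http://{host}:{port}"
        # Service memory quotas (indexMemoryQuota only on the index node).
        quotas = {"memoryQuota": kv_quota}
        if index_quota:
            quotas["indexMemoryQuota"] = index_quota
        requests.post(f"{base}/pools/default", data=quotas, auth=AUTH).raise_for_status()
        # REST port and admin credentials (settings/web).
        requests.post(f"{base}/settings/web",
                      data={"port": port, "username": AUTH[0], "password": AUTH[1]},
                      auth=AUTH).raise_for_status()
        # Select the GSI storage backend before any index is created.
        requests.post(f"{base}/settings/indexes",
                      data={"storageMode": storage_mode}, auth=AUTH).raise_for_status()

    # n_0 carries kv+index+n1ql; n_1..n_3 are kv-only, matching the quotas logged above.
    init_node("127.0.0.1", 9000, index_quota=256, kv_quota=7650)
    for port in (9001, 9002, 9003):
        init_node("127.0.0.1", port)

The /node/controller/setupServices call is omitted because, as the 400 response above shows, it is a no-op once the node is already provisioned.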
[2022-09-02 09:06:31,370] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:06:31,754] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:06:36,759] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:06:36,843] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:06:36,848] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:06:36,849] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:06:37,259] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:06:37,260] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:06:37,347] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:06:37,348] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:06:37,348] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:06:38,495] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:06:38,556] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:06:38,556] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:07:18,553] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:07:18,878] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:07:19,153] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
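The bucket-creation step above is a single POST to /pools/default/buckets followed by waiting until the bucket is usable. A rough sketch of the REST half, using the same parameters the log shows; the harness additionally waits on the memcached data port (12000 here) to accept set ops, which is omitted:

    import requests, time

    AUTH = ("Administrator", "asdasd")
    BASE = "http://127.0.0.1:9000"

    params = {
        "name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
        "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
        "flushEnabled": 1, "evictionPolicy": "valueOnly",
        "compressionMode": "passive", "storageBackend": "couchstore",
    }
    requests.post(f"{BASE}/pools/default/buckets", data=params, auth=AUTH).raise_for_status()

    # Poll the bucket until every node reports it healthy.
    while True:
        b = requests.get(f"{BASE}/pools/default/buckets/default", auth=AUTH).json()
        if b.get("nodes") and all(n.get("status") == "healthy" for n in b["nodes"]):
            break
        time.sleep(2)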
[2022-09-02 09:07:19,156] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #4 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:07:19,209] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:07:19,210] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:07:19,575] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:07:19,579] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:07:19,580] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:07:19,967] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:07:19,972] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:07:19,973] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:07:20,548] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:07:20,555] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:07:20,556] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:07:21,246] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:07:26,148] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:07:26,148] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 9.103118409987871, 'mem_free': 13612249088, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:07:26,148] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:07:26,148] - [basetestcase:467] INFO - Time to execute basesetup : 59.238627433776855
[2022-09-02 09:07:26,202] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:07:26,203] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:07:26,257] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:07:26,257] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:07:26,313] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:07:26,313] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:07:26,366] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:07:26,366] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:07:26,421] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:07:26,487] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:07:26,488] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:07:26,488] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:07:31,497] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:07:31,502] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:07:31,502] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:07:31,893] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:07:32,963] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:07:33,143] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:07:35,562] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:07:35,578] - [newtuq:85] INFO - {'expiry': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}}
[2022-09-02 09:07:36,280] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:07:36,280] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:07:36,281] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:08:06,310] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:08:06,340] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:08:06,366] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:08:06,436] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 67.53257ms
[2022-09-02 09:08:06,436] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '63bfeb30-049c-417e-96c0-31bcd09739e3', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '67.53257ms', 'executionTime': '67.473996ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:08:06,436] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:08:06,463] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:08:06,490] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:08:07,277] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 785.267986ms
[2022-09-02 09:08:07,277] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:08:07,337] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:08:07,376] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:08:07,384] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.304779ms
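The primary-index bootstrap above is two N1QL statements against the query service with a poll of system:indexes in between. A minimal sketch, assuming the standard /query/service endpoint on port 8093 (the cluster_run ports in this log differ, so treat the URL as an assumption):

    import requests, time

    AUTH = ("Administrator", "asdasd")
    QUERY = "http://127.0.0.1:8093/query/service"   # assumption: standard query port

    def run(statement):
        r = requests.post(QUERY, data={"statement": statement}, auth=AUTH)
        r.raise_for_status()
        return r.json()

    # Only create the primary index if it is missing; the empty result set is
    # what the harness logs above as "Fail to get index list".
    if not run("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]:
        run("CREATE PRIMARY INDEX ON default")

    # Wait until system:indexes reports the index as online.
    while True:
        rows = run("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]
        if rows and rows[0]["indexes"]["state"] == "online":
            break
        time.sleep(1)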
[2022-09-02 09:08:07,607] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:08:07,644] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:08:07,662] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:08:07,662] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:08:07,674] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
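The two indexer settings above are posted directly to the indexer admin port (9102 in this cluster_run) as JSON, exactly as the rest-request lines show. A minimal sketch:

    import requests

    AUTH = ("Administrator", "asdasd")
    INDEXER = "http://127.0.0.1:9102/settings"

    # Disable client-side waiting for scheduled indexes and allow scheduled
    # index creation during rebalance before the test starts creating indexes.
    for payload in ({"queryport.client.waitForScheduledIndex": False},
                    {"indexer.allowScheduleCreateRebal": True}):
        requests.post(INDEXER, json=payload, auth=AUTH).raise_for_status()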
[2022-09-02 09:08:07,746] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:08:08,567] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeeea416caf487947718b17dce206d8b868job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:08:08,595] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeeea416caf487947718b17dce206d8b868job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:08:08,648] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 50.960543ms
[2022-09-02 09:08:08,649] - [base_gsi:282] INFO - BUILD INDEX on default(employeeea416caf487947718b17dce206d8b868job_title) USING GSI
[2022-09-02 09:08:09,680] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employeeea416caf487947718b17dce206d8b868job_title) USING GSI
[2022-09-02 09:08:09,707] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employeeea416caf487947718b17dce206d8b868job_title%29+USING+GSI
[2022-09-02 09:08:09,731] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 22.157139ms
[2022-09-02 09:08:10,762] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeea416caf487947718b17dce206d8b868job_title'
[2022-09-02 09:08:10,789] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeea416caf487947718b17dce206d8b868job_title%27
[2022-09-02 09:08:10,798] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.309842ms
[2022-09-02 09:08:11,831] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeea416caf487947718b17dce206d8b868job_title'
[2022-09-02 09:08:11,858] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeea416caf487947718b17dce206d8b868job_title%27
[2022-09-02 09:08:11,866] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.328542ms
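The secondary index above is created deferred and then built explicitly, with system:indexes polled until it comes online. A sketch of the same flow, reusing the run() helper and imports from the primary-index sketch; the index name is illustrative, whereas the harness generates a UUID-based one:

    IDX = "employee_job_title_demo"   # assumption: illustrative name only

    run(f"CREATE INDEX `{IDX}` ON default(job_title) "
        'WHERE job_title IS NOT NULL USING GSI WITH {"defer_build": true}')
    run(f"BUILD INDEX ON default(`{IDX}`) USING GSI")

    # Same online poll as for the primary index, keyed on the new index name.
    while True:
        rows = run(f"SELECT * FROM system:indexes WHERE name = '{IDX}'")["results"]
        if rows and rows[0]["indexes"]["state"] == "online":
            break
        time.sleep(1)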
[2022-09-02 09:08:12,336] - [basetestcase:2772] INFO - update 0.0 to default documents...
[2022-09-02 09:08:12,515] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:08:13,269] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:08:13,461] - [data_helper:309] INFO - dict:{'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}
[2022-09-02 09:08:13,461] - [data_helper:310] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:08:13,631] - [cluster_helper:379] INFO - Setting flush param on server {'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}, exp_pager_stime to 10 on default
[2022-09-02 09:08:13,632] - [mc_bin_client:669] INFO - setting param: exp_pager_stime 10
[2022-09-02 09:08:13,632] - [cluster_helper:393] INFO - Setting flush param on server {'ip': '127.0.0.1', 'port': '9000', 'username': 'Administrator', 'password': 'asdasd'}, exp_pager_stime to 10, result: (409135672, 0, b'')
[2022-09-02 09:08:13,869] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:08:13,899] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:08:13,926] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 09:08:13,930] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.202942ms
[2022-09-02 09:08:13,930] - [task:3245] INFO - {'requestID': '8d5febc0-02bc-4864-ab5b-033b19bc9d57', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeeea416caf487947718b17dce206d8b868job_title', 'index_id': '8b71314aa5e20333', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.202942ms', 'executionTime': '2.143511ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 09:08:13,931] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:08:13,931] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:08:13,932] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:08:13,932] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:08:13,932] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 09:08:13,932] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:08:13,933] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:08:13,933] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:08:13,933] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
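The tuq_generators lines above show how the expected result set is derived: the N1QL WHERE clause is rewritten into a Python predicate (doc["job_title"] == "Sales") and evaluated over the generated documents, with the SELECT * projection expressed as {"*": doc}. A purely illustrative in-memory equivalent, with hypothetical sample documents:

    # Hypothetical stand-in for the 2016 generated "employee" documents.
    docs = [
        {"name": "emp-0001", "job_title": "Sales"},
        {"name": "emp-0002", "job_title": "Engineer"},
    ]

    # SELECT * FROM default WHERE job_title = "Sales", as the generators express it.
    expected = [{"*": doc} for doc in docs if doc["job_title"] == "Sales"]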
[2022-09-02 09:08:14,933] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:08:14,963] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:08:14,990] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 09:08:15,138] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 139.78381ms
[2022-09-02 09:08:15,138] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:08:15,139] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:08:15,930] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:08:15,930] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
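The verification query above runs with scan_consistency=request_plus (visible in the query params), so the index scan waits for mutations made before the request and the comparison against the Python-side expected set is deterministic. A sketch of the same request, again assuming the standard query endpoint:

    import requests

    AUTH = ("Administrator", "asdasd")
    QUERY = "http://127.0.0.1:8093/query/service"   # assumption: standard query port

    resp = requests.post(QUERY, data={
        "statement": 'SELECT * FROM default WHERE job_title = "Sales"',
        "scan_consistency": "request_plus",   # index must catch up to prior mutations
    }, auth=AUTH).json()

    actual = resp["results"]
    print(resp["metrics"]["resultCount"], "rows in", resp["metrics"]["elapsedTime"])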
[2022-09-02 09:08:16,962] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeea416caf487947718b17dce206d8b868job_title'
[2022-09-02 09:08:16,989] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeea416caf487947718b17dce206d8b868job_title%27
[2022-09-02 09:08:16,997] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.712148ms
[2022-09-02 09:08:17,025] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeeea416caf487947718b17dce206d8b868job_title ON default USING GSI
[2022-09-02 09:08:17,054] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeeea416caf487947718b17dce206d8b868job_title+ON+default+USING+GSI
[2022-09-02 09:08:17,102] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 45.764302ms
[2022-09-02 09:08:17,142] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeea416caf487947718b17dce206d8b868job_title'
[2022-09-02 09:08:17,174] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeea416caf487947718b17dce206d8b868job_title%27
[2022-09-02 09:08:17,181] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.674885ms
[2022-09-02 09:08:17,182] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '705afb91-5ba0-4f1f-9515-aa2b48305d0c', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.674885ms', 'executionTime': '5.61264ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
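Teardown of the per-test index mirrors the creation path: DROP INDEX ... USING GSI, then a system:indexes lookup that is expected to come back empty (the "Fail to get index list" line above is that empty result, not a real failure). A sketch reusing run() and the illustrative IDX from the index-creation sketch:

    run(f"DROP INDEX `{IDX}` ON default USING GSI")
    leftover = run(f"SELECT * FROM system:indexes WHERE name = '{IDX}'")["results"]
    assert leftover == [], "index should be gone after DROP INDEX"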
[2022-09-02 09:08:17,298] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:08:17,302] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:17,302] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:17,801] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:17,859] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:08:17,860] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:08:17,944] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:08:18,005] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:08:18,009] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:18,009] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:18,509] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:18,571] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:08:18,571] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:08:18,655] - [remote_util:3399] INFO - command executed successfully with Administrator
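The post-test health check above greps the indexer and projector logs for Go panics; the log directory comes from the diag/eval call (error_logger_mf_dir). A minimal sketch of the same check, assuming the /opt/build/ns_server/logs/n_0 path reported above:

    import subprocess

    LOG_DIR = "/opt/build/ns_server/logs/n_0"   # taken from the diag/eval output above

    for component in ("indexer", "projector"):
        out = subprocess.run(
            f'zgrep "panic" "{LOG_DIR}"/{component}.log* | wc -l',
            shell=True, capture_output=True, text=True)
        panics = int(out.stdout.strip() or 0)
        assert panics == 0, f"{component} logged {panics} panic(s)"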
[2022-09-02 09:08:18,716] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:08:18,717] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:08:18,717] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:08:18,745] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:08:18,773] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:08:18,783] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.863495ms
[2022-09-02 09:08:18,812] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:08:18,840] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:08:18,913] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 71.25938ms
[2022-09-02 09:08:18,987] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:08:18,987] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 25.06377732050098, 'mem_free': 13461995520, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:08:18,987] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:08:18,992] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:18,992] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:19,499] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:19,505] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:19,505] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:19,980] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:19,987] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:19,987] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:20,607] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:20,615] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:20,615] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:21,433] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:27,069] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #4 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:08:27,225] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:08:28,157] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:08:28,186] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:08:28,186] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:08:28,240] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:08:28,295] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:08:28,359] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:08:28,360] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:08:28,446] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:08:28,447] - [basetestcase:742] INFO - b'"User was not found."'
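Cleanup also attempts to remove the 'clientuser' RBAC user; the 404 above simply means the user was never created in this run. The equivalent REST call, as a sketch:

    import requests

    AUTH = ("Administrator", "asdasd")

    r = requests.delete("http://127.0.0.1:9000/settings/rbac/users/local/clientuser", auth=AUTH)
    if r.status_code == 404:
        print("user not found; nothing to delete")   # matches the 'User was not found.' body above
    else:
        r.raise_for_status()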
[2022-09-02 09:08:28,473] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:08:28,610] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:08:28,610] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:08:28,640] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:08:28,667] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:08:28,667] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:08:28,695] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:08:28,722] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:08:28,722] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:08:28,751] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:08:28,779] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:08:28,779] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:08:28,809] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:08:28,810] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #4 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:08:28,810] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:08:28,810] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 4 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_4
ok

----------------------------------------------------------------------
Ran 1 test in 121.958s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_5

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,create_ops_per=.5,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'create_ops_per': '.5', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 5, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_5'}
[2022-09-02 09:08:28,919] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:28,919] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:29,394] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:29,430] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:08:29,514] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:08:29,515] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #5 test_multi_create_query_explain_drop_index==============
[2022-09-02 09:08:29,515] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:08:29,946] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:08:29,975] - [task:164] INFO -  {'uptime': '646', 'memoryTotal': 15466930176, 'memoryFree': 13563895808, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:08:30,003] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:08:30,003] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:08:30,004] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:08:30,035] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:08:30,073] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:08:30,074] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:08:30,104] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:08:30,105] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:08:30,105] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:08:30,105] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:08:30,156] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:08:30,161] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:30,161] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:30,594] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:30,595] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:08:30,685] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:08:30,686] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:08:30,716] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:30,742] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:30,769] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:08:30,889] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:08:30,916] - [task:164] INFO -  {'uptime': '644', 'memoryTotal': 15466930176, 'memoryFree': 13573079040, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:08:30,943] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:08:30,973] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:08:30,973] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:08:31,027] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:08:31,030] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:31,030] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:31,474] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:31,475] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:08:31,570] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:08:31,571] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:08:31,601] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:31,627] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:31,655] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:08:31,780] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:08:31,810] - [task:164] INFO -  {'uptime': '641', 'memoryTotal': 15466930176, 'memoryFree': 13571915776, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:08:31,838] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:08:31,869] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:08:31,869] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:08:31,921] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:08:31,926] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:31,926] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:32,390] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:32,391] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:08:32,485] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:08:32,487] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:08:32,521] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:32,550] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:32,581] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:08:32,704] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:08:32,733] - [task:164] INFO -  {'uptime': '641', 'memoryTotal': 15466930176, 'memoryFree': 13573713920, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:08:32,762] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:08:32,792] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:08:32,792] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:08:32,846] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:08:32,849] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:32,849] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:33,279] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:33,280] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:08:33,377] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:08:33,378] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:08:33,409] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:33,435] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:08:33,466] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:08:33,553] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:08:33,929] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:08:38,934] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:08:39,022] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:08:39,037] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:08:39,038] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:08:39,471] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:08:39,472] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:08:39,563] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:08:39,564] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:08:39,564] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:08:40,671] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:08:40,726] - [on_prem_rest_client:3047] INFO - 0.05 seconds to create bucket default
[2022-09-02 09:08:40,726] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:09:18,520] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:09:18,858] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:09:19,225] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:09:19,228] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #5 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:09:19,297] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:09:19,297] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:09:19,760] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:09:19,765] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:09:19,765] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:09:20,222] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:09:20,227] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:09:20,228] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:09:21,009] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:09:21,016] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:09:21,016] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:09:21,759] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:09:26,755] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:09:26,756] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 6.603889707656475, 'mem_free': 13579399168, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:09:26,756] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:09:26,756] - [basetestcase:467] INFO - Time to execute basesetup : 57.83961772918701
[2022-09-02 09:09:26,809] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:09:26,809] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:09:26,863] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:09:26,863] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:09:26,916] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:09:26,917] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:09:26,975] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:09:26,975] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:09:27,027] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:09:27,087] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:09:27,087] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:09:27,089] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:09:32,100] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:09:32,104] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:09:32,104] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:09:32,552] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:09:33,581] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:09:33,750] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:09:36,181] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:09:36,201] - [newtuq:85] INFO - {'remaining': {'start': 0, 'end': 1}, 'create': {'start': 1, 'end': 2}}
[2022-09-02 09:09:37,300] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:09:37,301] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:09:37,301] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:10:07,330] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:10:07,360] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:10:07,387] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:10:07,460] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 71.038175ms
[2022-09-02 09:10:07,460] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '726710be-53e8-496d-a626-3ed2d57fce00', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '71.038175ms', 'executionTime': '70.960597ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:10:07,460] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:10:07,488] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:10:07,515] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:10:08,196] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 677.176837ms
[2022-09-02 09:10:08,196] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:10:08,258] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:10:08,300] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:10:08,308] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.312366ms
[2022-09-02 09:10:08,530] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:10:08,567] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:10:08,588] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:10:08,589] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:10:08,605] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:10:08,681] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:10:09,502] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee73af04094f6a46088c7c82855655b7cdjob_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:10:09,541] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee73af04094f6a46088c7c82855655b7cdjob_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:10:09,597] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 53.310209ms
[2022-09-02 09:10:09,597] - [base_gsi:282] INFO - BUILD INDEX on default(employee73af04094f6a46088c7c82855655b7cdjob_title) USING GSI
[2022-09-02 09:10:10,628] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee73af04094f6a46088c7c82855655b7cdjob_title) USING GSI
[2022-09-02 09:10:10,655] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee73af04094f6a46088c7c82855655b7cdjob_title%29+USING+GSI
[2022-09-02 09:10:10,683] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 25.096496ms
[2022-09-02 09:10:11,714] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee73af04094f6a46088c7c82855655b7cdjob_title'
[2022-09-02 09:10:11,745] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee73af04094f6a46088c7c82855655b7cdjob_title%27
[2022-09-02 09:10:11,752] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.153972ms
[2022-09-02 09:10:12,784] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee73af04094f6a46088c7c82855655b7cdjob_title'
[2022-09-02 09:10:12,811] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee73af04094f6a46088c7c82855655b7cdjob_title%27
[2022-09-02 09:10:12,818] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.448801ms
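The CREATE INDEX ... WITH defer_build / BUILD INDEX / poll-system:indexes sequence above is the usual deferred-build pattern. A minimal sketch of that flow against the query REST API follows; the statements and the index name are the ones from this log, while the query service URL (assumed here to be http://127.0.0.1:8093/query/service; cluster_run ports differ), the run() helper and the polling loop are assumptions, not the testrunner implementation.

    # Minimal sketch of the deferred-build flow shown above (assumed query endpoint).
    import time
    import requests

    QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption; not shown in this log
    AUTH = ("Administrator", "asdasd")
    IDX = "employee73af04094f6a46088c7c82855655b7cdjob_title"

    def run(statement, **params):
        params["statement"] = statement
        return requests.post(QUERY_URL, data=params, auth=AUTH).json()

    run("CREATE INDEX `" + IDX + "` ON default(job_title) "
        "WHERE job_title IS NOT NULL USING GSI WITH {\"defer_build\": true}")
    run(f"BUILD INDEX ON default(`{IDX}`) USING GSI")

    # Poll system:indexes until the deferred index reports state 'online'
    # (SELECT * rows from system:indexes are keyed by "indexes").
    while True:
        rows = run(f"SELECT * FROM system:indexes WHERE name = '{IDX}'")["results"]
        if rows and rows[0]["indexes"]["state"] == "online":
            break
        time.sleep(1)

    # The later scan in this test passes request_plus consistency the same way:
    run('SELECT * FROM default WHERE job_title = "Sales"', scan_consistency="request_plus")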
[2022-09-02 09:10:13,207] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:10:13,491] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:10:17,029] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:10:17,823] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:10:17,858] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:10:17,887] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 09:10:17,891] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.647417ms
[2022-09-02 09:10:17,892] - [task:3245] INFO - {'requestID': '0281623a-0740-436d-a0e2-ac4f2dd22efc', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee73af04094f6a46088c7c82855655b7cdjob_title', 'index_id': 'fdd6d7d0a37ca8d7', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.647417ms', 'executionTime': '2.591956ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 09:10:17,892] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:10:17,892] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:10:17,893] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:10:17,893] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:10:17,893] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 09:10:17,894] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:10:17,894] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:10:17,894] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:10:17,894] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
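The verification the task performs on the EXPLAIN output can be approximated by walking the plan JSON printed at task:3245 above (plan -> ~children -> #operator / index). A rough sketch, reusing the assumed query endpoint from the previous sketch; the walker is illustrative and not the testrunner's own checker.

    # Rough sketch: confirm an EXPLAIN result scans the expected GSI index.
    import requests

    QUERY_URL = "http://127.0.0.1:8093/query/service"   # assumption, as before
    explain = requests.post(
        QUERY_URL,
        data={"statement": 'EXPLAIN SELECT * FROM default WHERE job_title = "Sales"'},
        auth=("Administrator", "asdasd")).json()

    def plan_uses_index(explain_json, index_name):
        def walk(node):
            if isinstance(node, dict):
                if node.get("#operator", "").startswith("IndexScan") and \
                        node.get("index") == index_name:
                    return True
                return any(walk(v) for v in node.values())
            if isinstance(node, list):
                return any(walk(v) for v in node)
            return False
        return walk(explain_json["results"][0]["plan"])

    assert plan_uses_index(explain, "employee73af04094f6a46088c7c82855655b7cdjob_title")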
[2022-09-02 09:10:18,894] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:10:18,924] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:10:18,951] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 09:10:19,112] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 147.309023ms
[2022-09-02 09:10:19,113] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:10:19,113] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:10:20,726] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:10:20,726] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:10:21,759] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee73af04094f6a46088c7c82855655b7cdjob_title'
[2022-09-02 09:10:21,785] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee73af04094f6a46088c7c82855655b7cdjob_title%27
[2022-09-02 09:10:21,793] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.757303ms
[2022-09-02 09:10:21,821] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee73af04094f6a46088c7c82855655b7cdjob_title ON default USING GSI
[2022-09-02 09:10:21,848] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee73af04094f6a46088c7c82855655b7cdjob_title+ON+default+USING+GSI
[2022-09-02 09:10:21,908] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 57.738904ms
[2022-09-02 09:10:21,943] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee73af04094f6a46088c7c82855655b7cdjob_title'
[2022-09-02 09:10:21,971] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee73af04094f6a46088c7c82855655b7cdjob_title%27
[2022-09-02 09:10:21,980] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.396648ms
[2022-09-02 09:10:21,980] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '62b57e4c-abd1-4178-9bdb-eb8ceb6a0bd9', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '6.396648ms', 'executionTime': '6.328844ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:10:22,097] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:10:22,101] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:22,101] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:22,656] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:22,713] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:10:22,713] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:10:22,806] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:10:22,864] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:10:22,868] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:22,868] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:23,443] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:23,502] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:10:23,502] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:10:23,595] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:10:23,651] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:10:23,652] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:10:23,652] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:10:23,679] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:10:23,705] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:10:23,718] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 10.129544ms
[2022-09-02 09:10:23,747] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:10:23,774] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:10:23,830] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 35.716995ms
[2022-09-02 09:10:23,902] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:10:23,902] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 40.29399226651136, 'mem_free': 13416939520, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:10:23,902] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:10:23,906] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:23,907] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:24,482] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:24,487] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:24,488] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:25,142] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:25,153] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:25,154] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:26,129] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:26,137] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:26,137] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:27,079] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:33,362] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #5 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:10:33,515] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:10:34,166] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:10:34,197] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:10:34,197] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:10:34,253] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:10:34,309] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:10:34,365] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:10:34,366] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:10:34,448] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:10:34,449] - [basetestcase:742] INFO - b'"User was not found."'
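The 404 on the RBAC user delete above is expected when 'clientuser' was never created in this run, and cleanup simply logs the body and moves on. A hedged sketch of that kind of tolerant cleanup call; the URL and credentials come from this log, the 404-as-success handling is an assumption about intent rather than the testrunner source.

    # Sketch: delete an RBAC user during cleanup, tolerating 404 ("User was not found.").
    import requests

    def remove_local_user(user, host="127.0.0.1", port=9000,
                          auth=("Administrator", "asdasd")):
        url = f"http://{host}:{port}/settings/rbac/users/local/{user}"
        resp = requests.delete(url, auth=auth)
        if resp.status_code not in (200, 404):    # 404 => nothing to remove
            resp.raise_for_status()
        return resp.status_code

    remove_local_user("clientuser")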
[2022-09-02 09:10:34,479] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:10:34,621] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:10:34,621] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:10:34,661] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:10:34,689] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:10:34,689] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:10:34,717] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:10:34,745] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:10:34,745] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:10:34,774] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:10:34,806] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:10:34,806] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:10:34,836] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:10:34,836] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #5 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:10:34,837] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:10:34,837] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 5 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_5
ok

----------------------------------------------------------------------
Ran 1 test in 125.976s

OK
test_multi_create_query_explain_drop_index (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_6

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,doc_ops=True,create_ops_per=.5,delete_ops_per=.2,update_ops_per=.2,run_async=True,scan_consistency=request_plus,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'doc_ops': 'True', 'create_ops_per': '.5', 'delete_ops_per': '.2', 'update_ops_per': '.2', 'run_async': 'True', 'scan_consistency': 'request_plus', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 6, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_6'}
[2022-09-02 09:10:34,950] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:34,951] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:35,514] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:35,548] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:10:35,637] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:10:35,637] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #6 test_multi_create_query_explain_drop_index==============
[2022-09-02 09:10:35,638] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:10:35,980] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:10:36,009] - [task:164] INFO -  {'uptime': '772', 'memoryTotal': 15466930176, 'memoryFree': 13519949824, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:10:36,038] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:10:36,038] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:10:36,038] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:10:36,070] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:10:36,107] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:10:36,108] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:10:36,137] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:10:36,138] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:10:36,138] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:10:36,139] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:10:36,191] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:10:36,196] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:36,197] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:36,747] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:36,748] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:10:36,862] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:10:36,863] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:10:36,896] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:36,924] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:36,953] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:10:37,082] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:10:37,111] - [task:164] INFO -  {'uptime': '770', 'memoryTotal': 15466930176, 'memoryFree': 13518512128, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:10:37,141] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:10:37,171] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:10:37,172] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:10:37,229] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:10:37,232] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:37,233] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:37,757] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:37,758] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:10:37,867] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:10:37,868] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:10:37,899] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:37,928] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:37,958] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:10:38,094] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:10:38,124] - [task:164] INFO -  {'uptime': '772', 'memoryTotal': 15466930176, 'memoryFree': 13517496320, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:10:38,152] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:10:38,183] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:10:38,183] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:10:38,239] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:10:38,244] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:38,244] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:38,789] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:38,790] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:10:38,900] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:10:38,901] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:10:38,932] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:38,958] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:38,988] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:10:39,106] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:10:39,133] - [task:164] INFO -  {'uptime': '766', 'memoryTotal': 15466930176, 'memoryFree': 13517725696, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:10:39,160] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:10:39,194] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:10:39,194] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:10:39,248] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:10:39,252] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:39,252] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:39,786] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:39,787] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:10:39,901] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:10:39,903] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:10:39,940] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:39,966] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:10:39,997] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:10:40,088] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:10:40,503] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:10:45,508] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:10:45,598] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:10:45,603] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:10:45,603] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:10:46,169] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:10:46,170] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:10:46,282] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:10:46,283] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:10:46,283] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:10:47,265] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:10:47,323] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:10:47,325] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:11:18,701] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:11:19,033] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:11:19,318] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
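The bucket creation logged at on_prem_rest_client:3022 above is a single POST to /pools/default/buckets; the sketch below reproduces it with the exact parameter string from this log. Only the requests-based form is illustrative; the test then waits (as logged) until memcached on 127.0.0.1:12000 accepts set ops.

    # Illustrative only: the bucket-creation POST shown at on_prem_rest_client:3022.
    import requests

    params = {
        "name": "default",
        "ramQuotaMB": 7650,
        "replicaNumber": 1,
        "bucketType": "membase",
        "replicaIndex": 1,
        "threadsNumber": 3,
        "flushEnabled": 1,
        "evictionPolicy": "valueOnly",
        "compressionMode": "passive",
        "storageBackend": "couchstore",
    }
    resp = requests.post("http://127.0.0.1:9000/pools/default/buckets",
                         data=params, auth=("Administrator", "asdasd"))
    resp.raise_for_status()   # the test then polls until the bucket accepts set ops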
[2022-09-02 09:11:19,321] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #6 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:11:19,381] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:11:19,381] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:11:19,937] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:11:19,942] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:11:19,942] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:11:20,636] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:11:20,644] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:11:20,644] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:11:21,556] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:11:21,567] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:11:21,567] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:11:22,471] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:11:28,144] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:11:28,144] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 33.22237492998903, 'mem_free': 13524942848, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:11:28,145] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:11:28,145] - [basetestcase:467] INFO - Time to execute basesetup : 53.196900367736816
[2022-09-02 09:11:28,200] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:11:28,200] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:11:28,254] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:11:28,255] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:11:28,313] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:11:28,313] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:11:28,374] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:11:28,374] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:11:28,427] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:11:28,487] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:11:28,488] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:11:28,488] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:11:33,498] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:11:33,502] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:11:33,503] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:11:34,055] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:11:35,188] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:11:35,368] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:11:37,649] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:11:37,667] - [newtuq:85] INFO - {'update': {'start': 0, 'end': 0}, 'delete': {'start': 0, 'end': 0}, 'remaining': {'start': 0, 'end': 1}, 'create': {'start': 1, 'end': 2}}
[2022-09-02 09:11:39,078] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:11:39,078] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:11:39,078] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:12:09,098] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:12:09,129] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:12:09,159] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:12:09,229] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 68.227605ms
[2022-09-02 09:12:09,230] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'a91b9cd5-03ec-429c-a064-71b4644710e0', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '68.227605ms', 'executionTime': '68.169685ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:12:09,230] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:12:09,257] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:12:09,284] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:12:10,025] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 738.215936ms
[2022-09-02 09:12:10,025] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:12:10,097] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:12:10,139] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:12:10,146] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.926041ms
[2022-09-02 09:12:10,364] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:12:10,399] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:12:10,415] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:12:10,415] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:12:10,437] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:12:10,512] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:12:11,324] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeed6ebcdc5a8e147f09fd85c1846098a58job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:12:11,351] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeed6ebcdc5a8e147f09fd85c1846098a58job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:12:11,428] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 74.277068ms
[2022-09-02 09:12:11,428] - [base_gsi:282] INFO - BUILD INDEX on default(employeed6ebcdc5a8e147f09fd85c1846098a58job_title) USING GSI
[2022-09-02 09:12:12,459] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employeed6ebcdc5a8e147f09fd85c1846098a58job_title) USING GSI
[2022-09-02 09:12:12,485] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employeed6ebcdc5a8e147f09fd85c1846098a58job_title%29+USING+GSI
[2022-09-02 09:12:12,510] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 23.409973ms
[2022-09-02 09:12:13,542] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeed6ebcdc5a8e147f09fd85c1846098a58job_title'
[2022-09-02 09:12:13,568] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeed6ebcdc5a8e147f09fd85c1846098a58job_title%27
[2022-09-02 09:12:13,577] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.080469ms
[2022-09-02 09:12:14,608] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeed6ebcdc5a8e147f09fd85c1846098a58job_title'
[2022-09-02 09:12:14,635] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeed6ebcdc5a8e147f09fd85c1846098a58job_title%27
[2022-09-02 09:12:14,642] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.98557ms
[2022-09-02 09:12:15,157] - [basetestcase:2772] INFO - update 0.0 to default documents...
[2022-09-02 09:12:15,337] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:12:16,292] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:12:16,907] - [basetestcase:2772] INFO - delete 0.0 to default documents...
[2022-09-02 09:12:17,084] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:12:18,406] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:12:18,803] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:12:18,990] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:12:22,714] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:12:22,767] - [task:3235] INFO -  <<<<< START Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:12:22,799] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:12:22,826] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+
[2022-09-02 09:12:22,830] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.312098ms
[2022-09-02 09:12:22,831] - [task:3245] INFO - {'requestID': 'd661d57e-2a77-4b73-93ca-e1795af7e38a', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeed6ebcdc5a8e147f09fd85c1846098a58job_title', 'index_id': '8323136e43286a95', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '"Sales"', 'inclusion': 3, 'index_key': '`job_title`', 'low': '"Sales"'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((`default`.`job_title`) = "Sales")'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, 'text': 'SELECT * FROM default WHERE  job_title = "Sales"'}], 'status': 'success', 'metrics': {'elapsedTime': '2.312098ms', 'executionTime': '2.244611ms', 'resultCount': 1, 'resultSize': 724, 'serviceLoad': 6}}
[2022-09-02 09:12:22,831] - [task:3246] INFO -  <<<<< Done Executing Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:12:22,831] - [task:3276] INFO -  <<<<< Done VERIFYING Query EXPLAIN SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:12:22,832] - [base_gsi:560] INFO - Query : SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:12:22,832] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:12:22,832] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["job_title"] == "Sales" 
[2022-09-02 09:12:22,832] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:12:22,833] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:12:22,833] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,}; where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:12:22,833] - [tuq_generators:329] INFO - -->where_clause=  doc["job_title"] == "Sales" 
[2022-09-02 09:12:23,833] - [task:3235] INFO -  <<<<< START Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:12:23,863] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  job_title = "Sales" 
[2022-09-02 09:12:23,892] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++job_title+%3D+%22Sales%22+&scan_consistency=request_plus
[2022-09-02 09:12:24,060] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 153.733452ms
[2022-09-02 09:12:24,060] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:12:24,061] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:12:25,668] - [task:3246] INFO -  <<<<< Done Executing Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:12:25,668] - [task:3276] INFO -  <<<<< Done VERIFYING Query SELECT * FROM default WHERE  job_title = "Sales"  >>>>>>
[2022-09-02 09:12:26,700] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeed6ebcdc5a8e147f09fd85c1846098a58job_title'
[2022-09-02 09:12:26,727] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeed6ebcdc5a8e147f09fd85c1846098a58job_title%27
[2022-09-02 09:12:26,735] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.201382ms
[2022-09-02 09:12:26,762] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeed6ebcdc5a8e147f09fd85c1846098a58job_title ON default USING GSI
[2022-09-02 09:12:26,788] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeed6ebcdc5a8e147f09fd85c1846098a58job_title+ON+default+USING+GSI
[2022-09-02 09:12:26,829] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 38.725364ms
[2022-09-02 09:12:26,871] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeed6ebcdc5a8e147f09fd85c1846098a58job_title'
[2022-09-02 09:12:26,898] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeed6ebcdc5a8e147f09fd85c1846098a58job_title%27
[2022-09-02 09:12:26,910] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.979843ms
[2022-09-02 09:12:26,910] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '908bdaab-fc6b-40d8-a717-082aae405fc2', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '8.979843ms', 'executionTime': '8.900896ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:12:27,022] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:12:27,025] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:27,025] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:27,684] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:27,739] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:12:27,740] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:12:27,845] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:12:27,906] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:12:27,909] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:27,909] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:28,588] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:28,645] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:12:28,645] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:12:28,753] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:12:28,811] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:12:28,811] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:12:28,811] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:12:28,838] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:12:28,865] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:12:28,873] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.802639ms
[2022-09-02 09:12:28,902] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:12:28,929] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:12:28,987] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 56.157022ms
[2022-09-02 09:12:29,054] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:12:29,055] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.8227416717233, 'mem_free': 13332971520, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:12:29,055] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:12:29,060] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:29,060] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:29,787] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:29,792] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:29,792] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:30,456] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:30,461] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:30,461] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:31,408] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:31,416] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:31,417] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:32,577] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:40,107] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #6 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:12:40,257] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:12:41,204] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:12:41,232] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:12:41,232] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:12:41,288] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:12:41,341] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:12:41,410] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:12:41,411] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:12:41,492] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:12:41,493] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:12:41,519] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:12:41,656] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:12:41,656] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:12:41,686] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:12:41,714] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:12:41,715] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:12:41,743] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:12:41,769] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:12:41,769] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:12:41,795] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:12:41,823] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:12:41,823] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:12:41,851] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:12:41,851] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #6 test_multi_create_query_explain_drop_index ==============
[2022-09-02 09:12:41,852] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:12:41,852] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_6
ok

----------------------------------------------------------------------
Ran 1 test in 126.957s

OK
test_multi_create_drop_index (gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_7

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index,groups=simple,dataset=default,doc-per-day=1,cbq_version=sherlock,skip_build_tuq=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple', 'dataset': 'default', 'doc-per-day': '1', 'cbq_version': 'sherlock', 'skip_build_tuq': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 7, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_7'}
[2022-09-02 09:12:41,992] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:41,993] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:42,602] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:42,635] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:12:42,714] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:12:42,714] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #7 test_multi_create_drop_index==============
[2022-09-02 09:12:42,715] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:12:43,025] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:12:43,055] - [task:164] INFO -  {'uptime': '899', 'memoryTotal': 15466930176, 'memoryFree': 13471739904, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:12:43,083] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:12:43,083] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:12:43,084] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:12:43,123] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:12:43,156] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:12:43,156] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:12:43,187] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:12:43,187] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:12:43,188] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:12:43,188] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:12:43,238] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:12:43,241] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:43,242] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:43,864] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:43,865] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:12:43,994] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:12:43,996] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:12:44,029] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:44,057] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:44,087] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:12:44,214] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:12:44,244] - [task:164] INFO -  {'uptime': '896', 'memoryTotal': 15466930176, 'memoryFree': 13471952896, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:12:44,272] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:12:44,300] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:12:44,301] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:12:44,353] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:12:44,356] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:44,356] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:44,959] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:44,960] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:12:45,079] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:12:45,080] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:12:45,109] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:45,136] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:45,165] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:12:45,285] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:12:45,316] - [task:164] INFO -  {'uptime': '897', 'memoryTotal': 15466930176, 'memoryFree': 13487927296, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:12:45,343] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:12:45,375] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:12:45,376] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:12:45,430] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:12:45,435] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:45,436] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:46,034] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:46,036] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:12:46,162] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:12:46,163] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:12:46,199] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:46,232] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:46,264] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:12:46,394] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:12:46,427] - [task:164] INFO -  {'uptime': '897', 'memoryTotal': 15466930176, 'memoryFree': 13487464448, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:12:46,455] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:12:46,486] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:12:46,487] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:12:46,540] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:12:46,543] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:46,543] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:47,187] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:47,188] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:12:47,316] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:12:47,318] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:12:47,350] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:47,379] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:12:47,411] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
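
Each of the four nodes (ports 9000-9003) goes through the same initialization sequence above: enable non-local /diag/eval, read the cluster compat version, and set the GSI storage mode to plasma. A compressed sketch of that per-node loop, assuming the requests library and the same endpoints and credentials shown in the log:

    import requests

    AUTH = ("Administrator", "asdasd")

    # Per-node init sequence mirrored from the log above (cluster_run ports 9000-9003).
    for port in (9000, 9001, 9002, 9003):
        base = f"http://127.0.0.1:{port}"
        # allow /diag/eval from non-local hosts (the curl call in the log)
        requests.post(f"{base}/diag/eval",
                      data="ns_config:set(allow_nonlocal_eval, true).",
                      auth=AUTH)
        # read the cluster compat version
        compat = requests.post(f"{base}/diag/eval",
                               data="cluster_compat_mode:get_compat_version().",
                               auth=AUTH)
        print(port, compat.text)          # e.g. [7,2]
        # set the index storage mode to plasma
        requests.post(f"{base}/settings/indexes",
                      data={"storageMode": "plasma"},
                      auth=AUTH)
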
[2022-09-02 09:12:47,511] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:12:47,899] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:12:52,900] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:12:52,986] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:12:52,991] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:12:52,991] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:12:53,674] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:12:53,675] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:12:53,801] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:12:53,802] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:12:53,803] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:12:54,667] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:12:54,722] - [on_prem_rest_client:3047] INFO - 0.05 seconds to create bucket default
[2022-09-02 09:12:54,722] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:13:18,505] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:13:18,853] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:13:19,155] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:13:19,158] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #7 test_multi_create_drop_index ==============
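
The bucket creation logged at 09:12:54 maps onto a single REST POST to /pools/default/buckets; a sketch with the same parameters (credentials and quota copied from the log, requests assumed, response handling simplified):

    import requests

    # Create the "default" couchstore bucket with the parameters shown in the log.
    resp = requests.post(
        "http://127.0.0.1:9000/pools/default/buckets",
        data={"name": "default",
              "ramQuotaMB": 7650,
              "replicaNumber": 1,
              "bucketType": "membase",
              "replicaIndex": 1,
              "threadsNumber": 3,
              "flushEnabled": 1,
              "evictionPolicy": "valueOnly",
              "compressionMode": "passive",
              "storageBackend": "couchstore"},
        auth=("Administrator", "asdasd"),
    )
    resp.raise_for_status()   # ns_server accepts the request, then the test polls until set ops succeed
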
[2022-09-02 09:13:19,222] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:13:19,222] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:13:19,836] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:13:19,841] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:13:19,841] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:13:20,707] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:13:20,716] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:13:20,716] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:13:21,801] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:13:21,810] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:13:21,811] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:13:22,891] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:13:29,495] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:13:29,495] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.03666924830343, 'mem_free': 13484322816, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:13:29,496] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:13:29,496] - [basetestcase:467] INFO - Time to execute basesetup : 47.50629115104675
[2022-09-02 09:13:29,550] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:13:29,551] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:13:29,606] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:13:29,606] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:13:29,661] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:13:29,661] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:13:29,717] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:13:29,717] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:13:29,772] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:13:29,831] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:13:29,831] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:13:29,832] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:13:34,843] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:13:34,847] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:13:34,847] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:13:35,505] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:13:36,669] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:13:36,851] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:13:38,945] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:13:39,029] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:13:39,029] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:13:39,030] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:14:09,031] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:14:09,061] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:14:09,088] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:14:09,159] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 68.217686ms
[2022-09-02 09:14:09,159] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'baf9ab21-d075-43e8-b7de-b9fcd8467f09', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '68.217686ms', 'executionTime': '68.151119ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:14:09,160] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:14:09,187] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:14:09,214] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:14:09,986] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 770.145639ms
[2022-09-02 09:14:09,987] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:14:10,045] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:14:10,078] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:14:10,087] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.757399ms
[2022-09-02 09:14:10,300] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:14:10,338] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:14:10,356] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:14:10,356] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:14:10,377] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
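
The two indexer settings above are posted as JSON straight to the indexer admin port (9102 in this cluster_run layout). A minimal sketch of the same two calls, assuming requests:

    import requests

    AUTH = ("Administrator", "asdasd")
    INDEXER_SETTINGS = "http://127.0.0.1:9102/settings"

    # Same two settings the test pushes before creating deferred indexes.
    requests.post(INDEXER_SETTINGS,
                  json={"queryport.client.waitForScheduledIndex": False},
                  auth=AUTH)
    requests.post(INDEXER_SETTINGS,
                  json={"indexer.allowScheduleCreateRebal": True},
                  auth=AUTH)
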
[2022-09-02 09:14:10,378] - [base_gsi:326] INFO - []
[2022-09-02 09:14:11,248] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeecd1494f48d0a432994bfbb32192e98b4job_title` ON default(job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:14:11,277] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeecd1494f48d0a432994bfbb32192e98b4job_title%60+ON+default%28job_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:14:11,342] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 63.867249ms
[2022-09-02 09:14:11,381] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeecd1494f48d0a432994bfbb32192e98b4join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:14:11,410] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeecd1494f48d0a432994bfbb32192e98b4join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:14:11,461] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 48.769039ms
[2022-09-02 09:14:11,461] - [base_gsi:282] INFO - BUILD INDEX on default(employeecd1494f48d0a432994bfbb32192e98b4job_title,employeecd1494f48d0a432994bfbb32192e98b4join_yr) USING GSI
[2022-09-02 09:14:12,492] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employeecd1494f48d0a432994bfbb32192e98b4job_title,employeecd1494f48d0a432994bfbb32192e98b4join_yr) USING GSI
[2022-09-02 09:14:12,520] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employeecd1494f48d0a432994bfbb32192e98b4job_title%2Cemployeecd1494f48d0a432994bfbb32192e98b4join_yr%29+USING+GSI
[2022-09-02 09:14:12,559] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 37.351777ms
[2022-09-02 09:14:13,590] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4job_title'
[2022-09-02 09:14:13,617] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4job_title%27
[2022-09-02 09:14:13,626] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.992223ms
[2022-09-02 09:14:14,658] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4job_title'
[2022-09-02 09:14:14,686] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4job_title%27
[2022-09-02 09:14:14,695] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.544766ms
[2022-09-02 09:14:14,723] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4join_yr'
[2022-09-02 09:14:14,751] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4join_yr%27
[2022-09-02 09:14:14,755] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.032314ms
[2022-09-02 09:14:15,787] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4job_title'
[2022-09-02 09:14:15,814] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4job_title%27
[2022-09-02 09:14:15,824] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 8.224695ms
[2022-09-02 09:14:15,850] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeecd1494f48d0a432994bfbb32192e98b4job_title ON default USING GSI
[2022-09-02 09:14:15,881] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeecd1494f48d0a432994bfbb32192e98b4job_title+ON+default+USING+GSI
[2022-09-02 09:14:15,930] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 47.437378ms
[2022-09-02 09:14:15,967] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4join_yr'
[2022-09-02 09:14:15,994] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4join_yr%27
[2022-09-02 09:14:16,002] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.689555ms
[2022-09-02 09:14:16,029] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employeecd1494f48d0a432994bfbb32192e98b4join_yr ON default USING GSI
[2022-09-02 09:14:16,055] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employeecd1494f48d0a432994bfbb32192e98b4join_yr+ON+default+USING+GSI
[2022-09-02 09:14:16,089] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 32.196286ms
[2022-09-02 09:14:16,126] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4job_title'
[2022-09-02 09:14:16,152] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4job_title%27
[2022-09-02 09:14:16,160] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.889678ms
[2022-09-02 09:14:16,160] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '1bf1aa1d-a1e5-4284-897f-2a854fea1ca0', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.889678ms', 'executionTime': '5.81619ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:14:16,193] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeecd1494f48d0a432994bfbb32192e98b4join_yr'
[2022-09-02 09:14:16,220] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeecd1494f48d0a432994bfbb32192e98b4join_yr%27
[2022-09-02 09:14:16,223] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.651488ms
[2022-09-02 09:14:16,223] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '34679c03-fc8b-42da-bf5e-a5279ca02357', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '1.651488ms', 'executionTime': '1.601742ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
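
The statement sequence above is the core of the test: create two deferred indexes, build them together, poll system:indexes until they come online, then drop them and confirm they are gone (the final "Fail to get index list" entries are the expected empty result after the drops). A hedged sketch of the same lifecycle driven through the query service REST API; the port and endpoint are assumptions for a standard install (8093/query/service), not the custom ports used by this dev cluster, and the index names are shortened stand-ins for the generated hashed names:

    import time
    import requests

    QUERY = "http://127.0.0.1:8093/query/service"   # assumed standard query port
    AUTH = ("Administrator", "asdasd")

    def run(statement):
        # The query service accepts the statement as a form parameter, as in the log.
        return requests.post(QUERY, data={"statement": statement}, auth=AUTH).json()

    # Deferred create -> build -> poll -> drop, mirroring the logged statements.
    run("CREATE INDEX idx_job_title ON default(job_title) "
        "WHERE job_title IS NOT NULL USING GSI WITH {'defer_build': true}")
    run("CREATE INDEX idx_join_yr ON default(join_yr) "
        "WHERE join_yr > 2010 AND join_yr < 2014 USING GSI WITH {'defer_build': true}")
    run("BUILD INDEX ON default(idx_job_title, idx_join_yr) USING GSI")

    for name in ("idx_job_title", "idx_join_yr"):
        while True:
            res = run(f"SELECT state FROM system:indexes WHERE name = '{name}'")
            if res["results"] and res["results"][0]["state"] == "online":
                break
            time.sleep(1)

    run("DROP INDEX idx_job_title ON default USING GSI")
    run("DROP INDEX idx_join_yr ON default USING GSI")
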
[2022-09-02 09:14:16,329] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:14:16,332] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:16,333] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:16,922] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:16,978] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:14:16,978] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:14:17,080] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:14:17,142] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:14:17,146] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:17,146] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:17,754] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:17,812] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:14:17,812] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:14:17,928] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:14:17,987] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:14:17,987] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:14:17,987] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:14:18,013] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:14:18,041] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:14:18,048] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.358547ms
[2022-09-02 09:14:18,077] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:14:18,103] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:14:18,166] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 61.038584ms
[2022-09-02 09:14:18,241] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:14:18,241] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 5.276570916606754, 'mem_free': 13368676352, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:14:18,241] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:14:18,246] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:18,246] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:18,841] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:18,846] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:18,846] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:19,446] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:19,451] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:19,451] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:20,361] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:20,369] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:20,369] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:21,420] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:28,594] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #7 test_multi_create_drop_index ==============
[2022-09-02 09:14:28,738] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:14:29,113] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:14:29,142] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:14:29,142] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:14:29,200] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:14:29,252] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:14:29,305] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:14:29,306] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:14:29,385] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:14:29,386] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:14:29,412] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:14:29,540] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:14:29,540] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:14:29,568] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:14:29,594] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:14:29,594] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:14:29,622] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:14:29,648] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:14:29,648] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:14:29,674] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:14:29,699] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:14:29,699] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:14:29,726] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:14:29,726] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #7 test_multi_create_drop_index ==============
[2022-09-02 09:14:29,726] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:14:29,727] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_7
ok

----------------------------------------------------------------------
Ran 1 test in 107.794s

OK
test_multi_create_drop_index (gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_8

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index,groups=composite,dataset=default,doc-per-day=1,cbq_version=sherlock,skip_build_tuq=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'composite', 'dataset': 'default', 'doc-per-day': '1', 'cbq_version': 'sherlock', 'skip_build_tuq': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 8, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_8'}
[2022-09-02 09:14:29,817] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:29,817] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:30,480] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:30,513] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:14:30,599] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:14:30,599] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #8 test_multi_create_drop_index==============
[2022-09-02 09:14:30,600] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:14:30,844] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:14:30,872] - [task:164] INFO -  {'uptime': '1007', 'memoryTotal': 15466930176, 'memoryFree': 13485793280, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:14:30,898] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:14:30,898] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:14:30,899] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:14:30,930] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:14:30,967] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:14:30,968] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:14:30,996] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:14:30,997] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:14:30,997] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:14:30,997] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:14:31,046] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:14:31,049] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:31,049] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:31,642] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:31,643] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:14:31,764] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:14:31,765] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:14:31,795] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:31,822] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:31,852] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:14:31,984] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:14:32,013] - [task:164] INFO -  {'uptime': '1006', 'memoryTotal': 15466930176, 'memoryFree': 13486456832, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:14:32,040] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:14:32,069] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:14:32,070] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:14:32,125] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:14:32,129] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:32,129] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:32,786] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:32,787] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:14:32,911] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:14:32,912] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:14:32,943] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:32,970] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:33,002] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:14:33,129] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:14:33,157] - [task:164] INFO -  {'uptime': '1003', 'memoryTotal': 15466930176, 'memoryFree': 13486866432, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:14:33,187] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:14:33,217] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:14:33,217] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:14:33,273] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:14:33,278] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:33,278] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:33,912] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:33,913] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:14:34,031] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:14:34,032] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:14:34,067] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:34,095] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:34,127] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:14:34,250] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:14:34,278] - [task:164] INFO -  {'uptime': '1002', 'memoryTotal': 15466930176, 'memoryFree': 13485993984, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:14:34,306] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:14:34,335] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:14:34,336] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:14:34,388] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:14:34,391] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:34,392] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:35,023] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:35,024] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:14:35,146] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:14:35,147] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:14:35,180] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:35,211] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:14:35,242] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:14:35,339] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:14:35,753] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:14:40,759] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:14:40,854] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:14:40,868] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:14:40,869] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:14:41,487] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:14:41,488] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:14:41,603] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:14:41,604] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:14:41,604] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:14:42,500] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:14:42,556] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:14:42,557] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:15:18,213] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:15:18,613] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:15:19,013] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:15:19,018] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #8 test_multi_create_drop_index ==============
[2022-09-02 09:15:19,076] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:15:19,077] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:15:19,741] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:15:19,746] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:15:19,747] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:15:20,623] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:15:20,633] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:15:20,633] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:15:21,724] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:15:21,733] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:15:21,733] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:15:22,841] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:15:29,536] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:15:29,536] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.66158599781637, 'mem_free': 13483868160, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:15:29,536] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:15:29,536] - [basetestcase:467] INFO - Time to execute basesetup : 59.721673011779785
[2022-09-02 09:15:29,592] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:15:29,592] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:15:29,652] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:15:29,652] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:15:29,712] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:15:29,712] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:15:29,770] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:15:29,771] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:15:29,826] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:15:29,895] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:15:29,896] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:15:29,896] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:15:34,909] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:15:34,912] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:15:34,913] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:15:35,542] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:15:36,616] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:15:36,911] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:15:38,992] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:15:39,075] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:15:39,075] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:15:39,076] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:16:09,106] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:16:09,136] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:16:09,164] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:16:09,234] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 67.49964ms
[2022-09-02 09:16:09,234] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '2fe8d14c-9df3-4a90-aab8-7663e4c14b35', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '67.49964ms', 'executionTime': '67.423147ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:16:09,234] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:16:09,262] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:16:09,289] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:16:10,065] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 773.986875ms
[2022-09-02 09:16:10,065] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:16:10,148] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:16:10,183] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:16:10,192] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.82977ms
[2022-09-02 09:16:10,416] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:16:10,459] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:16:10,489] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:16:10,489] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:16:10,504] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:16:10,505] - [base_gsi:326] INFO - []
[2022-09-02 09:16:11,372] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr` ON default(join_yr,job_title) WHERE  job_title IS NOT NULL  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:16:11,399] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr%60+ON+default%28join_yr%2Cjob_title%29+WHERE++job_title+IS+NOT+NULL++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:16:11,454] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 52.529396ms
[2022-09-02 09:16:11,454] - [base_gsi:282] INFO - BUILD INDEX on default(employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr) USING GSI
[2022-09-02 09:16:12,487] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr) USING GSI
[2022-09-02 09:16:12,514] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr%29+USING+GSI
[2022-09-02 09:16:12,539] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 22.578553ms
[2022-09-02 09:16:13,570] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr'
[2022-09-02 09:16:13,596] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr%27
[2022-09-02 09:16:13,605] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.604684ms
[2022-09-02 09:16:14,642] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr'
[2022-09-02 09:16:14,674] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr%27
[2022-09-02 09:16:14,687] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 10.104988ms
[2022-09-02 09:16:15,719] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr'
[2022-09-02 09:16:15,747] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr%27
[2022-09-02 09:16:15,757] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 9.068628ms
[2022-09-02 09:16:15,785] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr ON default USING GSI
[2022-09-02 09:16:15,812] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr+ON+default+USING+GSI
[2022-09-02 09:16:15,870] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 56.279601ms
[2022-09-02 09:16:15,907] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr'
[2022-09-02 09:16:15,935] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee2261d2280b6a461aae494fda7149cc7fjob_title_join_yr%27
[2022-09-02 09:16:15,943] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.720012ms
[2022-09-02 09:16:15,943] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '05cdd30e-9e9d-4081-9e3a-5c5b942565f5', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.720012ms', 'executionTime': '5.661168ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
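The "Fail to get index list" ERROR above is the helper reporting an empty result set, which is the expected outcome immediately after the DROP INDEX: the query itself returned status 'success' with resultCount 0. The whole create / build / poll / drop cycle follows the polling pattern sketched below; run_query is the same hypothetical helper from the earlier sketch, not the framework's own code.

    import time

    def wait_for_index_state(run_query, name, expect_present=True,
                             timeout=120, poll=1.0):
        stmt = f"SELECT state FROM system:indexes WHERE name = '{name}'"
        deadline = time.time() + timeout
        while time.time() < deadline:
            rows = run_query(stmt).get("results", [])
            if expect_present and any(r.get("state") == "online" for r in rows):
                return True
            if not expect_present and not rows:
                # An empty result set is exactly what tuq_helper logs as
                # "Fail to get index list" right after a successful DROP INDEX.
                return True
            time.sleep(poll)
        return False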
[2022-09-02 09:16:16,055] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:16:16,058] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:16,058] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:16,720] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:16,779] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:16:16,779] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:16:16,895] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:16:16,955] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:16:16,958] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:16,958] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:17,599] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:17,654] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:16:17,654] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:16:17,764] - [remote_util:3399] INFO - command executed successfully with Administrator
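The two zgrep commands above count "panic" occurrences in the indexer and projector logs under the directory reported by /diag/eval. A minimal local sketch of that check, using subprocess instead of the framework's SSH helper (the log path comes from the lines above; the function name is illustrative):

    import subprocess

    def count_panics(log_glob="/opt/build/ns_server/logs/n_0/indexer.log*"):
        # zgrep handles both the current log and rotated (gzipped) files.
        cmd = f'zgrep "panic" {log_glob} | wc -l'
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return int(out.stdout.strip() or 0)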
[2022-09-02 09:16:17,820] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:16:17,820] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:16:17,820] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:16:17,846] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:16:17,875] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:16:17,883] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.786004ms
[2022-09-02 09:16:17,909] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:16:17,937] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:16:17,988] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 48.817417ms
[2022-09-02 09:16:18,065] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:16:18,065] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 5.282039132875309, 'mem_free': 13354360832, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:16:18,066] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:16:18,070] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:18,070] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:18,704] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:18,709] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:18,709] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:19,366] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:19,371] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:19,371] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:20,367] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:20,375] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:20,375] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:21,512] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:28,817] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #8 test_multi_create_drop_index ==============
[2022-09-02 09:16:28,960] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:16:30,159] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:16:30,188] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:16:30,188] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:16:30,252] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:16:30,312] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:16:30,370] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
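The bucket cleanup above deletes 'default' and then polls until ns_server no longer reports it. A hedged sketch of that pattern against the standard bucket REST endpoints (the DELETE endpoint is not shown verbatim in this log; the polling loop is illustrative only):

    import time
    import requests

    def delete_bucket_and_wait(name="default", host="127.0.0.1", port=9000,
                               auth=("Administrator", "asdasd"), timeout=120):
        base = f"http://{host}:{port}/pools/default/buckets"
        requests.delete(f"{base}/{name}", auth=auth).raise_for_status()
        deadline = time.time() + timeout
        while time.time() < deadline:
            # GET /pools/default/buckets returns the list of remaining buckets.
            remaining = [b["name"] for b in requests.get(base, auth=auth).json()]
            if name not in remaining:
                return True
            time.sleep(2)
        return False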
[2022-09-02 09:16:30,371] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:16:30,452] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:16:30,453] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:16:30,478] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:16:30,612] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:16:30,613] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:16:30,641] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:16:30,669] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:16:30,669] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:16:30,697] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:16:30,724] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:16:30,724] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:16:30,752] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:16:30,779] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:16:30,779] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:16:30,807] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:16:30,807] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #8 test_multi_create_drop_index ==============
[2022-09-02 09:16:30,807] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:16:30,808] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_8
ok

----------------------------------------------------------------------
Ran 1 test in 121.047s

OK
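The cleanup above ends by waiting for ns_server on each of the four cluster_run nodes (ports 9000-9003). A sketch of that wait, under the assumption that "is_ns_server_running" amounts to the management REST API answering on /pools:

    import time
    import requests

    def wait_for_ns_server(host, port, auth=("Administrator", "asdasd"),
                           timeout=60):
        url = f"http://{host}:{port}/pools"
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                if requests.get(url, auth=auth, timeout=5).status_code == 200:
                    return True
            except requests.RequestException:
                pass
            time.sleep(2)
        return False

    for port in (9000, 9001, 9002, 9003):
        assert wait_for_ns_server("127.0.0.1", port)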
test_remove_bucket_and_query (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_9

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 9, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_9'}
[2022-09-02 09:16:30,898] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:30,898] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:31,521] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:31,554] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:16:31,633] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:16:31,633] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #9 test_remove_bucket_and_query==============
[2022-09-02 09:16:31,634] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:16:31,925] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:16:31,951] - [task:164] INFO -  {'uptime': '1128', 'memoryTotal': 15466930176, 'memoryFree': 13477441536, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:16:31,977] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:16:31,977] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:16:31,977] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:16:32,009] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:16:32,045] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:16:32,046] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:16:32,074] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:16:32,075] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
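The 400 from /node/controller/setupServices above is deliberately tolerated: the node keeps its services once the cluster is provisioned. A minimal sketch of that decision, mirroring the log (the request parameters are the ones shown above; the error matching and helper name are illustrative):

    import requests

    def setup_services(host="127.0.0.1", port=9000,
                       services=("kv", "index", "n1ql"),
                       auth=("Administrator", "asdasd")):
        resp = requests.post(
            f"http://{host}:{port}/node/controller/setupServices",
            data={"hostname": f"{host}:{port}",
                  "user": auth[0],
                  "password": auth[1],
                  "services": ",".join(services)},
            auth=auth)
        if resp.status_code == 400 and "cannot change node services" in resp.text:
            # Node is already provisioned; treated as benign, as in the log above.
            return resp
        resp.raise_for_status()
        return resp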
[2022-09-02 09:16:32,075] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:16:32,076] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:16:32,126] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:16:32,129] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:32,129] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:32,796] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:32,798] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:16:32,924] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:16:32,925] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:16:32,957] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:32,983] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:33,016] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:16:33,155] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:16:33,186] - [task:164] INFO -  {'uptime': '1127', 'memoryTotal': 15466930176, 'memoryFree': 13477490688, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:16:33,214] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:16:33,246] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:16:33,247] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:16:33,304] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:16:33,307] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:33,307] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:33,959] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:33,960] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:16:34,088] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:16:34,090] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:16:34,120] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:34,151] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:34,180] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:16:34,299] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:16:34,328] - [task:164] INFO -  {'uptime': '1123', 'memoryTotal': 15466930176, 'memoryFree': 13477482496, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:16:34,357] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:16:34,387] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:16:34,387] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:16:34,441] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:16:34,447] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:34,447] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:35,129] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:35,130] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:16:35,265] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:16:35,266] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:16:35,300] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:35,328] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:35,359] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:16:35,487] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:16:35,517] - [task:164] INFO -  {'uptime': '1123', 'memoryTotal': 15466930176, 'memoryFree': 13478113280, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:16:35,545] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:16:35,575] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:16:35,576] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:16:35,630] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:16:35,633] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:35,633] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:36,315] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:36,316] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:16:36,446] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:16:36,447] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:16:36,478] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:36,505] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:16:36,535] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:16:36,629] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:16:37,024] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:16:42,030] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:16:42,117] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:16:42,121] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:16:42,121] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:16:42,777] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:16:42,778] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:16:42,903] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:16:42,904] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:16:42,905] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:16:43,775] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:16:43,833] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:16:43,834] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:17:18,401] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:17:18,720] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:17:19,113] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:17:19,117] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #9 test_remove_bucket_and_query ==============
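Setup ends with the bucket creation logged just above: a POST to /pools/default/buckets followed by a wait for memcached to accept set operations. A sketch of the creation call only, with the exact parameters from the log (the helper name is illustrative, not the framework's):

    import requests

    def create_default_bucket(host="127.0.0.1", port=9000,
                              auth=("Administrator", "asdasd"),
                              ram_quota_mb=7650):
        params = {
            "name": "default",
            "ramQuotaMB": ram_quota_mb,
            "replicaNumber": 1,
            "bucketType": "membase",
            "replicaIndex": 1,
            "threadsNumber": 3,
            "flushEnabled": 1,
            "evictionPolicy": "valueOnly",
            "compressionMode": "passive",
            "storageBackend": "couchstore",
        }
        resp = requests.post(f"http://{host}:{port}/pools/default/buckets",
                             data=params, auth=auth)
        resp.raise_for_status()
        return resp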
[2022-09-02 09:17:19,178] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:17:19,178] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:17:19,858] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:17:19,863] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:17:19,863] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:17:20,820] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:17:20,833] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:17:20,833] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:17:21,976] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:17:21,984] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:17:21,985] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:17:23,116] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:17:29,784] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:17:29,785] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.4194002486414, 'mem_free': 13472280576, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:17:29,785] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:17:29,785] - [basetestcase:467] INFO - Time to execute basesetup : 58.88949680328369
[2022-09-02 09:17:29,839] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:17:29,840] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:17:29,900] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:17:29,900] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:17:29,953] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:17:29,954] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:17:30,018] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:17:30,019] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:17:30,075] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:17:30,142] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:17:30,143] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:17:30,143] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:17:35,156] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:17:35,160] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:17:35,160] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:17:35,828] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:17:37,004] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:17:37,187] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:17:40,225] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:17:40,308] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:17:40,308] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:17:40,308] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:18:10,338] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:18:10,368] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:18:10,397] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:18:10,469] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 69.798795ms
[2022-09-02 09:18:10,469] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'c3f53da4-6982-43ea-876e-1909e9974fa2', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '69.798795ms', 'executionTime': '69.735388ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:18:10,469] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:18:10,497] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:18:10,525] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:18:11,246] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 718.295715ms
[2022-09-02 09:18:11,246] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:18:11,306] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:18:11,344] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:18:11,353] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.545243ms
[2022-09-02 09:18:11,593] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:18:11,633] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:18:11,652] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:18:11,652] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:18:11,672] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:18:11,748] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:18:12,547] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee6905b5f4d3b243d38c13443dc8ed3239join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:18:12,574] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee6905b5f4d3b243d38c13443dc8ed3239join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:18:12,641] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.443171ms
[2022-09-02 09:18:12,641] - [base_gsi:282] INFO - BUILD INDEX on default(`employee6905b5f4d3b243d38c13443dc8ed3239join_yr`) USING GSI
[2022-09-02 09:18:13,671] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(`employee6905b5f4d3b243d38c13443dc8ed3239join_yr`) USING GSI
[2022-09-02 09:18:13,697] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28%60employee6905b5f4d3b243d38c13443dc8ed3239join_yr%60%29+USING+GSI
[2022-09-02 09:18:13,727] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 26.055756ms
[2022-09-02 09:18:13,793] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6905b5f4d3b243d38c13443dc8ed3239join_yr'
[2022-09-02 09:18:13,822] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6905b5f4d3b243d38c13443dc8ed3239join_yr%27
[2022-09-02 09:18:13,831] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.107413ms
[2022-09-02 09:18:14,862] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6905b5f4d3b243d38c13443dc8ed3239join_yr'
[2022-09-02 09:18:14,889] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6905b5f4d3b243d38c13443dc8ed3239join_yr%27
[2022-09-02 09:18:14,897] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.564015ms
[2022-09-02 09:18:15,927] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6905b5f4d3b243d38c13443dc8ed3239join_yr'
[2022-09-02 09:18:15,953] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6905b5f4d3b243d38c13443dc8ed3239join_yr%27
[2022-09-02 09:18:15,960] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.455599ms
[2022-09-02 09:18:16,991] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:18:17,018] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 09:18:17,023] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.751409ms
[2022-09-02 09:18:17,023] - [base_gsi:504] INFO - {'requestID': '2aab7858-ffdc-4672-aef7-e2af15093b69', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee6905b5f4d3b243d38c13443dc8ed3239join_yr', 'index_id': 'f8be37ee450a753a', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '2.751409ms', 'executionTime': '2.668292ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
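The EXPLAIN output above shows an IndexScan3 on the freshly built partial index. A minimal sketch of how such a plan can be checked for the expected index; the traversal is illustrative, not the base_gsi implementation, and relies on the results[0]['plan'] shape visible in the logged response.

    def index_used(explain_response, expected_index):
        # Walk the plan tree and report whether any IndexScan operator
        # references the expected index.
        def walk(node):
            if isinstance(node, dict):
                if node.get("#operator", "").startswith("IndexScan") \
                        and node.get("index") == expected_index:
                    return True
                return any(walk(v) for v in node.values())
            if isinstance(node, list):
                return any(walk(v) for v in node)
            return False
        return walk(explain_response["results"][0]["plan"])

    # usage (hypothetical):
    # index_used(run_query("EXPLAIN SELECT * FROM default WHERE join_yr > 2010 and join_yr < 2014 ORDER BY _id"),
    #            "employee6905b5f4d3b243d38c13443dc8ed3239join_yr")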
[2022-09-02 09:18:17,023] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:18:17,024] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:18:17,024] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:18:17,024] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:18:17,025] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:18:17,026] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:18:17,026] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:18:17,026] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:18:17,087] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:18:17,128] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:18:17,158] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 09:18:17,343] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 169.31226ms
[2022-09-02 09:18:17,343] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:18:17,344] - [tuq_helper:411] INFO -  Analyzing Expected Result
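"Analyzing Actual Result" / "Analyzing Expected Result" above compares the request_plus query output against an expected set computed from the loaded documents. A hedged sketch of what that amounts to for this particular query; docs and actual_results are hypothetical stand-ins for the loaded employee documents and the decoded query results, and the keyspace wrapping is an assumption about the SELECT * projection.

    def expected_rows(docs):
        # Apply the same predicate and ORDER BY as the query under test.
        keep = [d for d in docs if 2010 < d.get("join_yr", 0) < 2014]
        return sorted(keep, key=lambda d: d["_id"])

    def results_match(actual_results, docs):
        # SELECT * wraps each document under the keyspace name ("default").
        return [row["default"] for row in actual_results] == expected_rows(docs)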
[2022-09-02 09:18:19,288] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:18:19,315] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee6905b5f4d3b243d38c13443dc8ed3239join_yr'
[2022-09-02 09:18:19,343] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee6905b5f4d3b243d38c13443dc8ed3239join_yr%27
[2022-09-02 09:18:19,349] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 4.258057ms
[2022-09-02 09:18:19,349] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'e05c3e8a-255c-4e6e-b807-c9a04185fbee', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '4.258057ms', 'executionTime': '4.18613ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:18:19,457] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:18:19,460] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:19,460] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:20,237] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:20,297] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:18:20,298] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:18:20,408] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:18:20,470] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:18:20,474] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:20,474] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:21,137] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:21,198] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:18:21,199] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:18:21,316] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:18:21,375] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:18:21,375] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:18:21,375] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:18:21,403] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:18:21,431] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:18:21,436] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 3.703674ms
[2022-09-02 09:18:21,436] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '65549077-a79f-4395-9955-046d6790e72f', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '3.703674ms', 'executionTime': '3.64317ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:18:21,491] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:18:21,492] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 21.55680272510201, 'mem_free': 13313306624, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:18:21,492] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:18:21,496] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:21,496] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:22,207] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:22,213] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:22,213] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:22,904] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:22,911] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:22,912] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:24,022] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:24,031] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:24,032] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:25,248] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:32,664] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #9 test_remove_bucket_and_query ==============
[2022-09-02 09:18:32,799] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:18:32,853] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:18:32,906] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:18:32,958] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:18:32,959] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:18:33,039] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:18:33,041] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:18:33,069] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:18:33,201] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:18:33,202] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:18:33,229] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:18:33,256] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:18:33,256] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:18:33,284] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:18:33,309] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:18:33,310] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:18:33,337] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:18:33,364] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:18:33,364] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:18:33,392] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:18:33,392] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #9 test_remove_bucket_and_query ==============
[2022-09-02 09:18:33,392] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:18:33,393] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 1 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_9
ok

----------------------------------------------------------------------
Ran 1 test in 122.549s

OK
test_change_bucket_properties (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_10

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_change_bucket_properties,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 10, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_10'}
[2022-09-02 09:18:33,476] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:33,476] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:34,108] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:34,141] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:18:34,219] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:18:34,220] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #10 test_change_bucket_properties==============
[2022-09-02 09:18:34,220] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:18:34,503] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:18:34,532] - [task:164] INFO -  {'uptime': '1252', 'memoryTotal': 15466930176, 'memoryFree': 13462630400, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:18:34,558] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:18:34,558] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:18:34,559] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:18:34,593] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:18:34,632] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:18:34,632] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:18:34,661] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:18:34,662] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:18:34,663] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:18:34,663] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:18:34,713] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:18:34,716] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:34,716] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:35,406] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:35,407] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:18:35,542] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:18:35,543] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:18:35,576] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:35,603] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:35,632] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:18:35,764] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:18:35,793] - [task:164] INFO -  {'uptime': '1247', 'memoryTotal': 15466930176, 'memoryFree': 13459136512, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:18:35,821] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:18:35,850] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:18:35,851] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:18:35,902] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:18:35,910] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:35,910] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:36,568] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:36,569] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:18:36,701] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:18:36,702] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:18:36,733] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:36,761] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:36,792] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:18:36,919] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:18:36,948] - [task:164] INFO -  {'uptime': '1249', 'memoryTotal': 15466930176, 'memoryFree': 13459300352, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:18:36,976] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:18:37,006] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:18:37,007] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:18:37,061] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:18:37,065] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:37,065] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:37,758] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:37,760] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:18:37,885] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:18:37,886] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:18:37,916] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:37,943] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:37,974] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:18:38,097] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:18:38,128] - [task:164] INFO -  {'uptime': '1248', 'memoryTotal': 15466930176, 'memoryFree': 13467074560, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:18:38,155] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:18:38,184] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:18:38,184] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:18:38,239] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:18:38,242] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:38,242] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:38,906] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:38,907] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:18:39,036] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:18:39,037] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:18:39,069] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:39,095] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:18:39,128] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:18:39,224] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:18:39,607] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:18:44,612] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:18:44,701] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:18:44,707] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:18:44,708] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:18:45,436] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:18:45,437] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:18:45,569] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:18:45,570] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:18:45,570] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:18:46,338] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:18:46,396] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:18:46,397] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:19:18,320] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:19:18,627] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:19:19,020] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
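Bucket creation is a single POST to /pools/default/buckets with the parameters logged above; the harness then waits (here roughly 32 seconds) for the new bucket's memcached side to accept set operations. A rough equivalent, assuming the requests library:

    import requests

    def create_default_bucket(host="127.0.0.1", port=9000,
                              auth=("Administrator", "asdasd")):
        # Same parameter set as the logged /pools/default/buckets call.
        params = {
            "name": "default", "ramQuotaMB": 7650, "replicaNumber": 1,
            "bucketType": "membase", "replicaIndex": 1, "threadsNumber": 3,
            "flushEnabled": 1, "evictionPolicy": "valueOnly",
            "compressionMode": "passive", "storageBackend": "couchstore",
        }
        r = requests.post(f"http://{host}:{port}/pools/default/buckets",
                          data=params, auth=auth)
        r.raise_for_status()  # creation is accepted here; warm-up completes asynchronously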
[2022-09-02 09:19:19,024] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #10 test_change_bucket_properties ==============
[2022-09-02 09:19:19,088] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:19:19,088] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:19:19,809] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:19:19,814] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:19:19,815] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:19:20,822] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:19:20,830] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:19:20,831] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:19:21,984] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:19:21,996] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:19:21,996] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:19:23,126] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:19:29,915] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:19:29,915] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.42069438719913, 'mem_free': 13461180416, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:19:29,915] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:19:29,916] - [basetestcase:467] INFO - Time to execute basesetup : 56.442039251327515
[2022-09-02 09:19:29,971] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:19:29,971] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:19:30,030] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:19:30,030] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:19:30,084] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:19:30,084] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:19:30,140] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:19:30,140] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:19:30,197] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:19:30,258] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:19:30,259] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:19:30,259] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:19:35,268] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:19:35,272] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:19:35,273] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:19:35,966] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:19:37,056] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:19:37,249] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:19:40,319] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:19:40,402] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:19:40,403] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:19:40,403] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:20:10,408] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:20:10,439] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:20:10,467] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:20:10,535] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 66.414712ms
[2022-09-02 09:20:10,535] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '4bb9680b-dbd4-4cb5-a8bc-c72b2089c1c3', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '66.414712ms', 'executionTime': '66.352674ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:20:10,535] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:20:10,569] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:20:10,597] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:20:11,295] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 696.637847ms
[2022-09-02 09:20:11,296] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:20:11,367] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:20:11,407] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:20:11,415] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.624062ms
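The primary-index step is: query system:indexes for '#primary', create it if absent, then poll until it reports online. A minimal sketch against the N1QL REST API (port 8093 is the standard query-service port; the cluster_run nodes in this log expose it on non-default ports, so the URL here is an assumption):

    import time
    import requests

    QUERY_URL = "http://127.0.0.1:8093/query/service"   # standard port; adjust for cluster_run
    AUTH = ("Administrator", "asdasd")

    def n1ql(statement):
        r = requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH)
        r.raise_for_status()
        return r.json()

    # Create the primary index only if it is missing, then wait for it to come online.
    if not n1ql("SELECT * FROM system:indexes WHERE name = '#primary'")["results"]:
        n1ql("CREATE PRIMARY INDEX ON default")
    while True:
        rows = n1ql("SELECT state FROM system:indexes WHERE name = '#primary'")["results"]
        if rows and rows[0].get("state") == "online":
            break
        time.sleep(1)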
[2022-09-02 09:20:11,632] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:20:11,676] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:20:11,699] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:20:11,703] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:20:11,716] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:20:11,786] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:20:12,591] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employee86cabedfdb34489a98e35b08fa1e0fe3join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:20:12,618] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employee86cabedfdb34489a98e35b08fa1e0fe3join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:20:12,664] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 43.962408ms
[2022-09-02 09:20:12,665] - [base_gsi:282] INFO - BUILD INDEX on default(`employee86cabedfdb34489a98e35b08fa1e0fe3join_yr`) USING GSI
[2022-09-02 09:20:13,695] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(`employee86cabedfdb34489a98e35b08fa1e0fe3join_yr`) USING GSI
[2022-09-02 09:20:13,724] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28%60employee86cabedfdb34489a98e35b08fa1e0fe3join_yr%60%29+USING+GSI
[2022-09-02 09:20:13,749] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 22.79728ms
[2022-09-02 09:20:13,794] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee86cabedfdb34489a98e35b08fa1e0fe3join_yr'
[2022-09-02 09:20:13,823] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee86cabedfdb34489a98e35b08fa1e0fe3join_yr%27
[2022-09-02 09:20:13,832] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.086362ms
[2022-09-02 09:20:14,863] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee86cabedfdb34489a98e35b08fa1e0fe3join_yr'
[2022-09-02 09:20:14,890] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee86cabedfdb34489a98e35b08fa1e0fe3join_yr%27
[2022-09-02 09:20:14,898] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.275684ms
[2022-09-02 09:20:15,928] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee86cabedfdb34489a98e35b08fa1e0fe3join_yr'
[2022-09-02 09:20:15,954] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee86cabedfdb34489a98e35b08fa1e0fe3join_yr%27
[2022-09-02 09:20:15,963] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.637846ms
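The secondary index is created deferred, built explicitly, and then polled in system:indexes until online, which is exactly the three statements above. A sketch of the same flow, under the same query-service assumptions as the previous snippet:

    import time
    import requests

    QUERY_URL = "http://127.0.0.1:8093/query/service"   # standard port; adjust for cluster_run
    AUTH = ("Administrator", "asdasd")
    IDX = "employee86cabedfdb34489a98e35b08fa1e0fe3join_yr"

    def n1ql(statement):
        r = requests.post(QUERY_URL, data={"statement": statement}, auth=AUTH)
        r.raise_for_status()
        return r.json()

    # Deferred create, explicit build, then poll until the index state is 'online'.
    n1ql(f"CREATE INDEX `{IDX}` ON default(join_yr) "
         "WHERE join_yr > 2010 AND join_yr < 2014 USING GSI WITH {'defer_build': true}")
    n1ql(f"BUILD INDEX ON default(`{IDX}`) USING GSI")
    while n1ql(f"SELECT state FROM system:indexes WHERE name = '{IDX}'")["results"][0]["state"] != "online":
        time.sleep(1)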
[2022-09-02 09:20:16,995] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:20:17,024] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 09:20:17,029] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.913315ms
[2022-09-02 09:20:17,029] - [base_gsi:504] INFO - {'requestID': 'db071bce-0281-4b05-b773-f1ae188b0a20', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee86cabedfdb34489a98e35b08fa1e0fe3join_yr', 'index_id': '2a2e3f384efd36f6', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '2.913315ms', 'executionTime': '2.830963ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
[2022-09-02 09:20:17,030] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:20:17,030] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:20:17,030] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:20:17,031] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:20:17,031] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:20:17,031] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:20:17,031] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:20:17,031] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:20:17,089] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
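What the EXPLAIN step above verifies is that the plan contains an IndexScan3 on the freshly built join_yr index with the expected exact range span (2010, 2014) before the real query is issued with scan_consistency=request_plus. A small sketch of walking the plan tree for that assertion (hypothetical helper, same query-service assumptions as above):

    import requests

    AUTH = ("Administrator", "asdasd")
    STMT = ("EXPLAIN SELECT * FROM default "
            "WHERE join_yr > 2010 AND join_yr < 2014 ORDER BY _id")

    resp = requests.post("http://127.0.0.1:8093/query/service",
                         data={"statement": STMT}, auth=AUTH).json()
    plan = resp["results"][0]["plan"]

    def operators(op):
        # Walk the nested plan: operators hang off '~children' (lists) and '~child' (single).
        yield op
        for child in op.get("~children", []):
            yield from operators(child)
        if "~child" in op:
            yield from operators(op["~child"])

    scans = [o for o in operators(plan) if o.get("#operator") == "IndexScan3"]
    assert scans and scans[0]["index"].endswith("join_yr"), "secondary index was not used"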
[2022-09-02 09:20:17,127] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:20:17,160] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 09:20:17,347] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 173.023903ms
[2022-09-02 09:20:17,348] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:20:17,348] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:20:18,569] - [on_prem_rest_client:3084] INFO - http://127.0.0.1:9000/pools/default/buckets/default with param: 
[2022-09-02 09:20:18,621] - [on_prem_rest_client:3092] INFO - bucket default updated
[2022-09-02 09:20:18,650] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:20:18,676] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 09:20:18,680] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.045356ms
[2022-09-02 09:20:18,680] - [base_gsi:504] INFO - {'requestID': 'ce05e17a-f063-4876-a42e-f26e5ebc91aa', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employee86cabedfdb34489a98e35b08fa1e0fe3join_yr', 'index_id': '2a2e3f384efd36f6', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '2.045356ms', 'executionTime': '1.984134ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
[2022-09-02 09:20:18,681] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:20:18,681] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:20:18,681] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:20:18,681] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:20:18,681] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:20:18,682] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:20:18,682] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:20:18,682] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:20:18,736] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:20:18,775] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:20:18,810] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 09:20:18,912] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 92.636002ms
[2022-09-02 09:20:18,912] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:20:18,913] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:20:20,137] - [tuq_helper:320] INFO - RUN QUERY DROP INDEX employee86cabedfdb34489a98e35b08fa1e0fe3join_yr ON default USING GSI
[2022-09-02 09:20:20,164] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+INDEX+employee86cabedfdb34489a98e35b08fa1e0fe3join_yr+ON+default+USING+GSI
[2022-09-02 09:20:20,214] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 47.846565ms
[2022-09-02 09:20:20,249] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employee86cabedfdb34489a98e35b08fa1e0fe3join_yr'
[2022-09-02 09:20:20,276] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employee86cabedfdb34489a98e35b08fa1e0fe3join_yr%27
[2022-09-02 09:20:20,284] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.27624ms
[2022-09-02 09:20:20,285] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '00e0b882-3187-4549-80da-5814fca0584f', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '6.27624ms', 'executionTime': '6.214536ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:20:20,395] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:20:20,398] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:20,398] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:21,145] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:21,204] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:20:21,205] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:20:21,322] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:20:21,382] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:20:21,385] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:21,386] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:22,129] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:22,186] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:20:22,186] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:20:22,317] - [remote_util:3399] INFO - command executed successfully with Administrator
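Before tearing down, the harness greps the indexer and projector logs for panics, running the command over SSH on each node. A simplified local sketch of the same check, assuming the log directory reported by the diag/eval call above:

    import subprocess

    LOG_DIR = "/opt/build/ns_server/logs/n_0"   # path returned by the diag/eval above
    for component in ("indexer", "projector"):
        # Same pipeline the harness runs remotely: zgrep "panic" <logs> | wc -l
        result = subprocess.run(
            f'zgrep "panic" "{LOG_DIR}"/{component}.log* | wc -l',
            shell=True, capture_output=True, text=True)
        print(f"{component} panic count: {result.stdout.strip()}")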
[2022-09-02 09:20:22,377] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:20:22,377] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:20:22,378] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:20:22,404] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:20:22,431] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:20:22,439] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.897766ms
[2022-09-02 09:20:22,467] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:20:22,494] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:20:22,571] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 73.296488ms
[2022-09-02 09:20:22,637] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:20:22,638] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 20.80689380811964, 'mem_free': 13282271232, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:20:22,638] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:20:22,642] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:22,643] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:23,355] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:23,360] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:23,360] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:24,030] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:24,036] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:24,036] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:25,230] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:25,240] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:25,240] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:26,469] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:34,262] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #10 test_change_bucket_properties ==============
[2022-09-02 09:20:34,417] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:20:35,182] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:20:35,212] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:20:35,212] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:20:35,269] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:20:35,323] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:20:35,380] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:20:35,381] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:20:35,467] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:20:35,468] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:20:35,497] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:20:35,644] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:20:35,645] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:20:35,675] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:20:35,708] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:20:35,708] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:20:35,736] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:20:35,763] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:20:35,764] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:20:35,794] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:20:35,822] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:20:35,822] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:20:35,850] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:20:35,851] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #10 test_change_bucket_properties ==============
[2022-09-02 09:20:35,851] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:20:35,852] - [basetestcase:778] INFO - closing all memcached connections
Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 2 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_10
ok

----------------------------------------------------------------------
Ran 1 test in 122.433s

OK
test_delete_create_bucket_and_query (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Logs will be stored at /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_11

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=plasma -t gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query,groups=simple:and:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:and:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'plasma', 'num_nodes': 4, 'case_number': 11, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_11'}
[2022-09-02 09:20:35,940] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:35,940] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:36,643] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:36,677] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:20:36,757] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:20:36,758] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #11 test_delete_create_bucket_and_query==============
[2022-09-02 09:20:36,758] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:20:36,965] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:20:36,991] - [task:164] INFO -  {'uptime': '1373', 'memoryTotal': 15466930176, 'memoryFree': 13450158080, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:20:37,017] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:20:37,017] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:20:37,018] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:20:37,052] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:20:37,085] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:20:37,085] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:20:37,120] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:20:37,121] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
[2022-09-02 09:20:37,121] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:20:37,121] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:20:37,173] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:20:37,176] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:37,176] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:37,846] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:37,848] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:20:37,970] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:20:37,971] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:20:38,001] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:38,026] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:38,055] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:20:38,180] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:20:38,207] - [task:164] INFO -  {'uptime': '1373', 'memoryTotal': 15466930176, 'memoryFree': 13449994240, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:20:38,233] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:20:38,260] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:20:38,261] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:20:38,313] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:20:38,316] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:38,316] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:38,965] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:38,966] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:20:39,089] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:20:39,090] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:20:39,121] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:39,152] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:39,184] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:20:39,313] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:20:39,343] - [task:164] INFO -  {'uptime': '1369', 'memoryTotal': 15466930176, 'memoryFree': 13449879552, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:20:39,372] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:20:39,402] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:20:39,402] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:20:39,457] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:20:39,462] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:39,462] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:40,198] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:40,199] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:20:40,337] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:20:40,338] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:20:40,369] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:40,397] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:40,428] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:20:40,563] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:20:40,592] - [task:164] INFO -  {'uptime': '1369', 'memoryTotal': 15466930176, 'memoryFree': 13450260480, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:20:40,620] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:20:40,651] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:20:40,653] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:20:40,709] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:20:40,713] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:40,714] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:41,438] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:41,439] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:20:41,576] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:20:41,578] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:20:41,612] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:41,640] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:20:41,670] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:20:41,766] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:20:42,166] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:20:47,170] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:20:47,264] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:20:47,272] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:20:47,273] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:20:48,017] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:20:48,018] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:20:48,154] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:20:48,155] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:20:48,155] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
[2022-09-02 09:20:48,895] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:20:48,950] - [on_prem_rest_client:3047] INFO - 0.05 seconds to create bucket default
[2022-09-02 09:20:48,950] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:21:18,375] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:21:18,714] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:21:19,133] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:21:19,137] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #11 test_delete_create_bucket_and_query ==============
[2022-09-02 09:21:19,197] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:21:19,197] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:21:19,959] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:21:19,964] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:21:19,964] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:21:21,144] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:21:21,153] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:21:21,154] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:21:22,379] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:21:22,389] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:21:22,390] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:21:23,610] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:21:30,621] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:21:30,621] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.96542313764336, 'mem_free': 13457199104, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:21:30,622] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:21:30,622] - [basetestcase:467] INFO - Time to execute basesetup : 54.68471026420593
[2022-09-02 09:21:30,673] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:21:30,674] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:21:30,727] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:21:30,728] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:21:30,780] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:21:30,780] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:21:30,833] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:21:30,833] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:21:30,885] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:21:30,954] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:21:30,955] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:21:30,955] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:21:35,964] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:21:35,967] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:21:35,968] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:21:36,709] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:21:37,943] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:21:38,125] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:21:41,053] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:21:41,140] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:21:41,140] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:21:41,141] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:22:11,169] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:22:11,198] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:22:11,225] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:22:11,292] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 65.331948ms
[2022-09-02 09:22:11,292] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '6d881942-857b-4fca-b33e-50d8f053bbc1', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '65.331948ms', 'executionTime': '65.257188ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:22:11,292] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:22:11,319] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:22:11,345] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:22:12,010] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 662.675322ms
[2022-09-02 09:22:12,010] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:22:12,064] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:22:12,103] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:22:12,112] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.612341ms
[2022-09-02 09:22:12,309] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:22:12,343] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:22:12,361] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:22:12,362] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:22:12,375] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
[2022-09-02 09:22:12,451] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:22:13,273] - [tuq_helper:320] INFO - RUN QUERY CREATE INDEX `employeeb979054c2a76477eaa1665f92fb20e89join_yr` ON default(join_yr) WHERE  join_yr > 2010 and join_yr < 2014  USING GSI  WITH {'defer_build': True}
[2022-09-02 09:22:13,299] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+INDEX+%60employeeb979054c2a76477eaa1665f92fb20e89join_yr%60+ON+default%28join_yr%29+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014++USING+GSI++WITH+%7B%27defer_build%27%3A+True%7D
[2022-09-02 09:22:13,350] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 47.774841ms
[2022-09-02 09:22:13,351] - [base_gsi:282] INFO - BUILD INDEX on default(`employeeb979054c2a76477eaa1665f92fb20e89join_yr`) USING GSI
[2022-09-02 09:22:14,382] - [tuq_helper:320] INFO - RUN QUERY BUILD INDEX on default(`employeeb979054c2a76477eaa1665f92fb20e89join_yr`) USING GSI
[2022-09-02 09:22:14,409] - [on_prem_rest_client:4201] INFO - query params : statement=BUILD+INDEX+on+default%28%60employeeb979054c2a76477eaa1665f92fb20e89join_yr%60%29+USING+GSI
[2022-09-02 09:22:14,431] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 20.043236ms
[2022-09-02 09:22:14,471] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb979054c2a76477eaa1665f92fb20e89join_yr'
[2022-09-02 09:22:14,500] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb979054c2a76477eaa1665f92fb20e89join_yr%27
[2022-09-02 09:22:14,509] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 7.727431ms
[2022-09-02 09:22:15,538] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb979054c2a76477eaa1665f92fb20e89join_yr'
[2022-09-02 09:22:15,564] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb979054c2a76477eaa1665f92fb20e89join_yr%27
[2022-09-02 09:22:15,572] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.01201ms
[2022-09-02 09:22:16,604] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb979054c2a76477eaa1665f92fb20e89join_yr'
[2022-09-02 09:22:16,630] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb979054c2a76477eaa1665f92fb20e89join_yr%27
[2022-09-02 09:22:16,638] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.008043ms
[2022-09-02 09:22:17,667] - [tuq_helper:320] INFO - RUN QUERY EXPLAIN SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:22:17,693] - [on_prem_rest_client:4201] INFO - query params : statement=EXPLAIN+SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+
[2022-09-02 09:22:17,697] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 2.653283ms
[2022-09-02 09:22:17,698] - [base_gsi:504] INFO - {'requestID': '73bdbe97-8e7f-44f3-bd37-17db8f74c3f9', 'signature': 'json', 'results': [{'plan': {'#operator': 'Sequence', '~children': [{'#operator': 'Sequence', '~children': [{'#operator': 'IndexScan3', 'index': 'employeeb979054c2a76477eaa1665f92fb20e89join_yr', 'index_id': 'd229b4d4f2590556', 'index_projection': {'primary_key': True}, 'keyspace': 'default', 'namespace': 'default', 'spans': [{'exact': True, 'range': [{'high': '2014', 'inclusion': 0, 'index_key': '`join_yr`', 'low': '2010'}]}], 'using': 'gsi'}, {'#operator': 'Fetch', 'keyspace': 'default', 'namespace': 'default'}, {'#operator': 'Parallel', '~child': {'#operator': 'Sequence', '~children': [{'#operator': 'Filter', 'condition': '((2010 < (`default`.`join_yr`)) and ((`default`.`join_yr`) < 2014))'}, {'#operator': 'InitialProject', 'result_terms': [{'expr': 'self', 'star': True}]}]}}]}, {'#operator': 'Order', 'can_spill': True, 'clip_values': True, 'sort_terms': [{'expr': '(`default`.`_id`)'}]}]}, 'text': 'SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id'}], 'status': 'success', 'metrics': {'elapsedTime': '2.653283ms', 'executionTime': '2.580788ms', 'resultCount': 1, 'resultSize': 926, 'serviceLoad': 6}}
[2022-09-02 09:22:17,698] - [tuq_generators:70] INFO - FROM clause ===== is default
[2022-09-02 09:22:17,699] - [tuq_generators:72] INFO - WHERE clause ===== is   doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:22:17,699] - [tuq_generators:74] INFO - UNNEST clause ===== is None
[2022-09-02 09:22:17,699] - [tuq_generators:76] INFO - SELECT clause ===== is {"*" : doc,}
[2022-09-02 09:22:17,699] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:22:17,700] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:22:17,700] - [tuq_generators:326] INFO - -->select_clause:{"*" : doc,"_id" : doc["_id"]}; where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:22:17,700] - [tuq_generators:329] INFO - -->where_clause=  doc["join_yr"] > 2010 and doc["join_yr"] < 2014 
[2022-09-02 09:22:17,774] - [tuq_generators:419] INFO - ORDER clause ========= is doc["_id"],
[2022-09-02 09:22:17,816] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM default WHERE  join_yr > 2010 and join_yr < 2014 ORDER BY _id 
[2022-09-02 09:22:17,841] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+default+WHERE++join_yr+%3E+2010+and+join_yr+%3C+2014+ORDER+BY+_id+&scan_consistency=request_plus
[2022-09-02 09:22:18,001] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 146.489265ms
[2022-09-02 09:22:18,001] - [tuq_helper:409] INFO -  Analyzing Actual Result
[2022-09-02 09:22:18,002] - [tuq_helper:411] INFO -  Analyzing Expected Result
[2022-09-02 09:22:20,183] - [basetestcase:847] INFO - sleep for 2 secs.  ...
[2022-09-02 09:22:22,805] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:22:22,862] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:22:22,862] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:23:18,342] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:23:18,680] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:23:19,091] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
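The bucket re-creation at 09:22:22 goes through ns_server's REST API with exactly the parameters shown in the on_prem_rest_client line; the helper then waits until memcached accepts set operations, which here took almost a minute. A minimal sketch of the same call (host, port and the Administrator:asdasd credentials are taken from this log):

```python
import requests

params = {
    'name': 'default', 'ramQuotaMB': 7650, 'replicaNumber': 1,
    'bucketType': 'membase', 'replicaIndex': 1, 'threadsNumber': 3,
    'flushEnabled': 1, 'evictionPolicy': 'valueOnly',
    'compressionMode': 'passive', 'storageBackend': 'couchstore',
}
# POST form-encoded, matching the logged "pools/default/buckets with param:" string.
r = requests.post('http://127.0.0.1:9000/pools/default/buckets',
                  data=params, auth=('Administrator', 'asdasd'))
r.raise_for_status()
```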
[2022-09-02 09:23:19,095] - [basetestcase:847] INFO - sleep for 2 secs.  ...
[2022-09-02 09:23:21,128] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:23:21,129] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:21,158] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:23:21,190] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:23:21,190] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:21,220] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:23:21,252] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:23:21,252] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:21,281] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:23:21,312] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:23:21,313] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:21,342] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:23:21,404] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:23:21,431] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb979054c2a76477eaa1665f92fb20e89join_yr'
[2022-09-02 09:23:21,460] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb979054c2a76477eaa1665f92fb20e89join_yr%27
[2022-09-02 09:23:21,505] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 43.039913ms
[2022-09-02 09:23:21,505] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'b0a5a337-10a1-42d2-b103-efc691571e7c', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '43.039913ms', 'executionTime': '42.976245ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:23:21,532] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = 'employeeb979054c2a76477eaa1665f92fb20e89join_yr'
[2022-09-02 09:23:21,559] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27employeeb979054c2a76477eaa1665f92fb20e89join_yr%27
[2022-09-02 09:23:21,562] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.212865ms
[2022-09-02 09:23:21,562] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'ed124367-e985-4a25-82e7-e4643067c004', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '1.212865ms', 'executionTime': '1.147231ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
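The two ERROR lines above are expected for this test: the system:indexes statement succeeds ('status': 'success') but returns zero rows because the bucket, and with it the index, was just deleted and re-created; the helper keys off resultCount, not the HTTP status. A sketch of that existence check (the query-service URL is an assumption, since cluster_run uses non-default ports; credentials as logged elsewhere in this run):

```python
import requests

def index_exists(name, query_url='http://127.0.0.1:8093/query/service',
                 auth=('Administrator', 'asdasd')):
    stmt = "SELECT * FROM system:indexes where name = '%s'" % name
    body = requests.post(query_url, data={'statement': stmt}, auth=auth).json()
    # The request itself succeeds even when the index is gone; an empty result set
    # is what the helper reports as "Fail to get index list".
    return body.get('status') == 'success' and body['metrics']['resultCount'] > 0
```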
[2022-09-02 09:23:21,617] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:23:21,761] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:23:21,763] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:21,763] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:22,504] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:22,565] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:23:22,565] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:23:22,681] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:23:22,744] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:23:22,749] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:22,749] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:23,506] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:23,565] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:23:23,565] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:23:23,681] - [remote_util:3399] INFO - command executed successfully with Administrator
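After each test the harness greps the indexer and projector logs for Go panics over SSH. On this single-machine cluster_run the check reduces to the shell pipeline shown in the command.raw lines; a minimal local sketch (log directory taken from the /diag/eval output above):

```python
import subprocess

def panic_count(glob_pattern='/opt/build/ns_server/logs/n_0/indexer.log*'):
    # zgrep handles both plain and rotated (gzipped) logs; wc -l counts the matches.
    cmd = 'zgrep "panic" %s | wc -l' % glob_pattern
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    return int(out.stdout.strip() or 0)
```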
[2022-09-02 09:23:23,739] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:23:23,739] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:23:23,739] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:23:23,769] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:23:23,796] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:23:23,802] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.310195ms
[2022-09-02 09:23:23,803] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': 'b39b88b3-b66e-4216-8cab-3b9fa5025af6', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '5.310195ms', 'executionTime': '5.247825ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:23:23,803] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:23:23,829] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:23:23,855] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:23:23,858] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 1.13112ms
[2022-09-02 09:23:23,858] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '57f7ee3a-f374-4443-b151-31acaf15e2a2', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '1.13112ms', 'executionTime': '1.057561ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:23:23,910] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:23:23,910] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 7.804188543039331, 'mem_free': 13436977152, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:23:23,911] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:23:23,914] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:23,915] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:24,621] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:24,626] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:24,626] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:25,521] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:25,529] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:25,529] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:26,774] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:26,783] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:26,783] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:28,022] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:35,731] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #11 test_delete_create_bucket_and_query ==============
[2022-09-02 09:23:36,081] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:23:36,393] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:23:36,421] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:23:36,421] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:23:36,480] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:23:36,533] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:23:36,587] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:23:36,588] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:23:36,667] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:23:36,668] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:23:36,694] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:36,825] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:23:36,825] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:36,853] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:23:36,879] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:23:36,879] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:36,905] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:23:36,931] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:23:36,931] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:36,957] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:23:36,982] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:23:36,983] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:37,012] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:23:37,012] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #11 test_delete_create_bucket_and_query ==============
[2022-09-02 09:23:37,012] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:23:37,012] - [basetestcase:778] INFO - closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 181.127s

OK
suite_tearDown (gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests) ... Cluster instance shutdown with force
summary so far suite gsi.indexscans_gsi.SecondaryIndexingScanTests , pass 6 , fail 0
summary so far suite gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests , pass 2 , fail 0
summary so far suite gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests , pass 3 , fail 0
testrunner logs, diags and results are available under /opt/build/testrunner/logs/testrunner-22-Sep-02_08-57-39/test_11

*** Tests executed count: 11

Run after suite setup for gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query
[2022-09-02 09:23:37,069] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:37,069] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:37,780] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:37,812] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:23:37,895] - [on_prem_rest_client:2668] INFO - Node version in cluster 7.2.0-1948-rel-EE-enterprise
[2022-09-02 09:23:37,895] - [basetestcase:157] INFO - ==============  basetestcase setup was started for test #11 suite_tearDown==============
[2022-09-02 09:23:37,896] - [basetestcase:224] INFO - initializing cluster
[2022-09-02 09:23:38,096] - [task:159] INFO - server: ip:127.0.0.1 port:9000 ssh_username:Administrator, nodes/self 
[2022-09-02 09:23:38,126] - [task:164] INFO -  {'uptime': '1554', 'memoryTotal': 15466930176, 'memoryFree': 13434187776, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9000', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7650, 'memcached': 12000, 'id': 'n_0@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9000', 'services': ['index', 'kv', 'n1ql'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:23:38,154] - [task:203] INFO - quota for index service will be 256 MB
[2022-09-02 09:23:38,154] - [task:205] INFO - set index quota to node 127.0.0.1 
[2022-09-02 09:23:38,155] - [on_prem_rest_client:1271] INFO - pools/default params : indexMemoryQuota=256
[2022-09-02 09:23:38,187] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7650
[2022-09-02 09:23:38,224] - [on_prem_rest_client:1218] INFO - --> init_node_services(Administrator,asdasd,127.0.0.1,9000,['kv', 'index', 'n1ql'])
[2022-09-02 09:23:38,224] - [on_prem_rest_client:1234] INFO - /node/controller/setupServices params on 127.0.0.1: 9000:hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql
[2022-09-02 09:23:38,253] - [on_prem_rest_client:1080] ERROR - POST http://127.0.0.1:9000//node/controller/setupServices body: hostname=127.0.0.1%3A9000&user=Administrator&password=asdasd&services=kv%2Cindex%2Cn1ql headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 400 reason: unknown b'{"errors":{"services":"cannot change node services after cluster is provisioned"}}' auth: Administrator:asdasd
[2022-09-02 09:23:38,254] - [on_prem_rest_client:1240] INFO - This node is already provisioned with services, we do not consider this as failure for test case
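The 400 from /node/controller/setupServices is deliberate: the node kept its services from the earlier tests, and ns_server refuses to change them once the cluster is provisioned, so the harness treats this particular error as success. A sketch of that handling (parameters copied from the logged request):

```python
import requests

r = requests.post('http://127.0.0.1:9000/node/controller/setupServices',
                  data={'hostname': '127.0.0.1:9000', 'user': 'Administrator',
                        'password': 'asdasd', 'services': 'kv,index,n1ql'},
                  auth=('Administrator', 'asdasd'))
if r.status_code == 400 and 'provisioned' in r.text:
    pass  # services already set on this node; not a test failure
else:
    r.raise_for_status()
```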
[2022-09-02 09:23:38,254] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9000
[2022-09-02 09:23:38,254] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9000:port=9000&username=Administrator&password=asdasd
[2022-09-02 09:23:38,302] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:23:38,306] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:38,307] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:38,991] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:38,992] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:23:39,131] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:23:39,132] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:23:39,164] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:39,192] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:39,223] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
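The settings/indexes call pins the GSI storage mode to plasma before any index is built, and the same POST is repeated for each node as it is initialised below. A minimal sketch of the call:

```python
import requests

requests.post('http://127.0.0.1:9000/settings/indexes',
              data={'storageMode': 'plasma'},
              auth=('Administrator', 'asdasd')).raise_for_status()
```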
[2022-09-02 09:23:39,345] - [task:159] INFO - server: ip:127.0.0.1 port:9001 ssh_username:Administrator, nodes/self 
[2022-09-02 09:23:39,374] - [task:164] INFO -  {'uptime': '1553', 'memoryTotal': 15466930176, 'memoryFree': 13437255680, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9001', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12002, 'id': 'n_1@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9001', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:23:39,403] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:23:39,437] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9001
[2022-09-02 09:23:39,437] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9001:port=9001&username=Administrator&password=asdasd
[2022-09-02 09:23:39,492] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:23:39,495] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:39,496] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:40,299] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:40,300] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9001/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:23:40,440] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:23:40,441] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:23:40,471] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:40,500] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9001: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:40,531] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:23:40,659] - [task:159] INFO - server: ip:127.0.0.1 port:9002 ssh_username:Administrator, nodes/self 
[2022-09-02 09:23:40,688] - [task:164] INFO -  {'uptime': '1550', 'memoryTotal': 15466930176, 'memoryFree': 13436473344, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9002', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12004, 'id': 'n_2@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9002', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:23:40,717] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:23:40,747] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9002
[2022-09-02 09:23:40,747] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9002:port=9002&username=Administrator&password=asdasd
[2022-09-02 09:23:40,800] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:23:40,803] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:40,804] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:41,522] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:41,523] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9002/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:23:41,649] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:23:41,650] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:23:41,680] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:41,707] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9002: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:41,736] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:23:41,855] - [task:159] INFO - server: ip:127.0.0.1 port:9003 ssh_username:Administrator, nodes/self 
[2022-09-02 09:23:41,883] - [task:164] INFO -  {'uptime': '1550', 'memoryTotal': 15466930176, 'memoryFree': 13437325312, 'mcdMemoryReserved': 11800, 'mcdMemoryAllocated': 11800, 'status': 'healthy', 'hostname': '127.0.0.1:9003', 'clusterCompatibility': 458754, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.2.0-1948-rel-EE-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7906, 'memcached': 12006, 'id': 'n_3@cb.local', 'ip': '127.0.0.1', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '9003', 'services': ['kv'], 'storageTotalRam': 14750, 'curr_items': 0}
[2022-09-02 09:23:41,909] - [on_prem_rest_client:1258] INFO - pools/default params : memoryQuota=7906
[2022-09-02 09:23:41,938] - [on_prem_rest_client:1152] INFO - --> in init_cluster...Administrator,asdasd,9003
[2022-09-02 09:23:41,938] - [on_prem_rest_client:1157] INFO - settings/web params on 127.0.0.1:9003:port=9003&username=Administrator&password=asdasd
[2022-09-02 09:23:41,990] - [on_prem_rest_client:1159] INFO - --> status:True
[2022-09-02 09:23:41,994] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:41,994] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:42,730] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:42,731] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9003/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:23:42,871] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:23:42,872] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:23:42,907] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:42,937] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9003: True content: [7,2] command: cluster_compat_mode:get_compat_version().
[2022-09-02 09:23:42,968] - [on_prem_rest_client:1295] INFO - settings/indexes params : storageMode=plasma
[2022-09-02 09:23:43,067] - [basetestcase:2396] INFO - **** add built-in 'cbadminbucket' user to node 127.0.0.1 ****
[2022-09-02 09:23:43,500] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:23:48,505] - [basetestcase:2401] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2022-09-02 09:23:48,590] - [basetestcase:262] INFO - done initializing cluster
[2022-09-02 09:23:48,603] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:23:48,603] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:23:49,293] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:23:49,294] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: curl --silent --show-error http://Administrator:asdasd@localhost:9000/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2022-09-02 09:23:49,429] - [remote_util:3397] INFO - command executed with Administrator but got an error b'curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)\n' ...
[2022-09-02 09:23:49,430] - [remote_util:5249] INFO - b'ok'
[2022-09-02 09:23:49,431] - [basetestcase:2992] INFO - Enabled diag/eval for non-local hosts from 127.0.0.1
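The curl invocations above enable /diag/eval for non-local callers by posting a one-line Erlang expression to ns_server; the libcurl "no version information available" message is only a linker warning on stderr, and the call still returns b'ok'. The same request without shelling out to curl, as a sketch (credentials and port from this log):

```python
import requests

r = requests.post('http://127.0.0.1:9000/diag/eval',
                  data='ns_config:set(allow_nonlocal_eval, true).',
                  auth=('Administrator', 'asdasd'))
assert r.ok and r.text == 'ok'  # ns_server echoes the evaluated result
```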
[2022-09-02 09:23:50,189] - [on_prem_rest_client:3022] INFO - http://127.0.0.1:9000/pools/default/buckets with param: name=default&ramQuotaMB=7650&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore
[2022-09-02 09:23:50,248] - [on_prem_rest_client:3047] INFO - 0.06 seconds to create bucket default
[2022-09-02 09:23:50,248] - [bucket_helper:340] INFO - waiting for memcached bucket : default in 127.0.0.1 to accept set ops
[2022-09-02 09:24:18,479] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:24:18,806] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:24:19,083] - [task:382] INFO - bucket 'default' was created with per node RAM quota: 7650
[2022-09-02 09:24:19,086] - [basetestcase:435] INFO - ==============  basetestcase setup was finished for test #11 suite_tearDown ==============
[2022-09-02 09:24:19,150] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:24:19,151] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:24:19,865] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:24:19,870] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:24:19,870] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:24:20,940] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:24:20,948] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:24:20,948] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:24:22,201] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:24:22,210] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:24:22,210] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:24:23,466] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:24:30,803] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:24:30,804] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 34.14147958743391, 'mem_free': 13442445312, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:24:30,804] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:24:30,804] - [basetestcase:467] INFO - Time to execute basesetup : 53.737107038497925
[2022-09-02 09:24:30,862] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:24:30,863] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:24:30,918] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:24:30,919] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:24:30,976] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:24:30,976] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:24:31,034] - [newtuq:23] INFO - Initial status of 127.0.0.1 cluster is healthy
[2022-09-02 09:24:31,034] - [newtuq:28] INFO - current status of 127.0.0.1  is healthy
[2022-09-02 09:24:31,088] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:24:31,152] - [on_prem_rest_client:2093] INFO - {'indexer.settings.storage_mode': 'plasma'} set
[2022-09-02 09:24:31,152] - [newtuq:39] INFO - Allowing the indexer to complete restart after setting the internal settings
[2022-09-02 09:24:31,153] - [basetestcase:847] INFO - sleep for 5 secs.  ...
[2022-09-02 09:24:36,160] - [on_prem_rest_client:2093] INFO - {'indexer.api.enableTestServer': True} set
[2022-09-02 09:24:36,165] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:24:36,165] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:24:36,897] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:24:37,998] - [basetestcase:2772] INFO - create 2016.0 to default documents...
[2022-09-02 09:24:38,166] - [data_helper:312] INFO - creating direct client 127.0.0.1:12000 default
[2022-09-02 09:24:41,216] - [basetestcase:2785] INFO - LOAD IS FINISHED
[2022-09-02 09:24:41,302] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:24:41,302] - [newtuq:97] INFO - ip:127.0.0.1 port:9000 ssh_username:Administrator
[2022-09-02 09:24:41,302] - [basetestcase:847] INFO - sleep for 30 secs.  ...
[2022-09-02 09:25:11,314] - [tuq_helper:755] INFO - Check if index existed in default on server 127.0.0.1
[2022-09-02 09:25:11,344] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:25:11,369] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:25:11,436] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 64.50587ms
[2022-09-02 09:25:11,436] - [tuq_helper:919] ERROR - Fail to get index list.  List output: {'requestID': '43d49a80-15ff-46ab-bb34-962681e0558a', 'signature': {'*': '*'}, 'results': [], 'status': 'success', 'metrics': {'elapsedTime': '64.50587ms', 'executionTime': '64.428621ms', 'resultCount': 0, 'resultSize': 0, 'serviceLoad': 6}}
[2022-09-02 09:25:11,436] - [tuq_helper:758] INFO - Create primary index
[2022-09-02 09:25:11,462] - [tuq_helper:320] INFO - RUN QUERY CREATE PRIMARY INDEX ON default 
[2022-09-02 09:25:11,487] - [on_prem_rest_client:4201] INFO - query params : statement=CREATE+PRIMARY+INDEX+ON+default+
[2022-09-02 09:25:12,228] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 738.951483ms
[2022-09-02 09:25:12,229] - [tuq_helper:760] INFO - Check if index is online
[2022-09-02 09:25:12,297] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:25:12,331] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:25:12,338] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 6.019238ms
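After CREATE PRIMARY INDEX completes, the helper re-queries system:indexes to confirm the index reports state 'online' before the test proceeds. A polling sketch of that check (query-service URL is an assumption, as above):

```python
import time
import requests

def wait_for_index_online(name='#primary', timeout=120,
                          query_url='http://127.0.0.1:8093/query/service',
                          auth=('Administrator', 'asdasd')):
    stmt = "SELECT state FROM system:indexes where name = '%s'" % name
    deadline = time.time() + timeout
    while time.time() < deadline:
        rows = requests.post(query_url, data={'statement': stmt},
                             auth=auth).json().get('results', [])
        if rows and rows[0].get('state') == 'online':
            return True
        time.sleep(2)
    return False
```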
[2022-09-02 09:25:12,573] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:25:12,635] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"queryport.client.waitForScheduledIndex": false} client_cert=None verify=False
[2022-09-02 09:25:12,649] - [on_prem_rest_client:2080] INFO - {'queryport.client.waitForScheduledIndex': False} set
[2022-09-02 09:25:12,650] - [on_prem_rest_client:1116] INFO - Making a rest request api=http://127.0.0.1:9102/settings verb=POST params={"indexer.allowScheduleCreateRebal": true} client_cert=None verify=False
[2022-09-02 09:25:12,660] - [on_prem_rest_client:2080] INFO - {'indexer.allowScheduleCreateRebal': True} set
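Unlike the ns_server endpoints above, these two settings go straight to the indexer's admin port (9102 in this cluster_run) as a JSON body, exactly as the "Making a rest request" lines show. A minimal sketch:

```python
import requests

for setting in ({'queryport.client.waitForScheduledIndex': False},
                {'indexer.allowScheduleCreateRebal': True}):
    r = requests.post('http://127.0.0.1:9102/settings', json=setting,
                      auth=('Administrator', 'asdasd'), verify=False)
    r.raise_for_status()
```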
[2022-09-02 09:25:12,719] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:25:12,850] - [basetestcase:2663] INFO - list of index nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:25:12,853] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:25:12,853] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:25:13,645] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:25:13,699] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:25:13,700] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/indexer.log* | wc -l
[2022-09-02 09:25:13,822] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:25:13,880] - [basetestcase:2663] INFO - list of kv nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:25:13,883] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:25:13,883] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:25:14,647] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:25:14,704] - [on_prem_rest_client:1880] INFO - /diag/eval status on 127.0.0.1:9000: True content: "/opt/build/ns_server/logs/n_0" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
[2022-09-02 09:25:14,704] - [remote_util:3350] INFO - running command.raw on 127.0.0.1: zgrep "panic" "/opt/build/ns_server/logs/n_0"/projector.log* | wc -l
[2022-09-02 09:25:14,829] - [remote_util:3399] INFO - command executed successfully with Administrator
[2022-09-02 09:25:14,887] - [basetestcase:2663] INFO - list of n1ql nodes in cluster: [ip:127.0.0.1 port:9000 ssh_username:Administrator]
[2022-09-02 09:25:14,887] - [tuq_helper:731] INFO - CHECK FOR PRIMARY INDEXES
[2022-09-02 09:25:14,887] - [tuq_helper:738] INFO - DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:25:14,912] - [tuq_helper:320] INFO - RUN QUERY SELECT * FROM system:indexes where name = '#primary'
[2022-09-02 09:25:14,940] - [on_prem_rest_client:4201] INFO - query params : statement=SELECT+%2A+FROM+system%3Aindexes+where+name+%3D+%27%23primary%27
[2022-09-02 09:25:14,947] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 5.432971ms
[2022-09-02 09:25:14,972] - [tuq_helper:320] INFO - RUN QUERY DROP PRIMARY INDEX ON default USING GSI
[2022-09-02 09:25:15,000] - [on_prem_rest_client:4201] INFO - query params : statement=DROP+PRIMARY+INDEX+ON+default+USING+GSI
[2022-09-02 09:25:15,056] - [tuq_helper:349] INFO - TOTAL ELAPSED TIME: 52.169608ms
[2022-09-02 09:25:15,114] - [basetestcase:601] INFO - ------- Cluster statistics -------
[2022-09-02 09:25:15,114] - [basetestcase:603] INFO - 127.0.0.1:9000 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 5.354582186807966, 'mem_free': 13266853888, 'mem_total': 15466930176, 'swap_mem_used': 0, 'swap_mem_total': 0}
[2022-09-02 09:25:15,114] - [basetestcase:604] INFO - --- End of cluster statistics ---
[2022-09-02 09:25:15,119] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:25:15,119] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:25:15,879] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:25:15,884] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:25:15,884] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:25:16,665] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:25:16,672] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:25:16,673] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:25:17,891] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:25:17,899] - [remote_util:318] INFO - SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
[2022-09-02 09:25:17,899] - [remote_util:354] INFO - SSH Connected to 127.0.0.1 as Administrator
[2022-09-02 09:25:19,220] - [remote_util:3674] INFO - extract_remote_info-->distribution_type: linux, distribution_version: default
[2022-09-02 09:25:28,021] - [basetestcase:719] INFO - ==============  basetestcase cleanup was started for test #11 suite_tearDown ==============
[2022-09-02 09:25:28,169] - [bucket_helper:130] INFO - deleting existing buckets ['default'] on 127.0.0.1
[2022-09-02 09:25:29,140] - [bucket_helper:221] INFO - waiting for bucket deletion to complete....
[2022-09-02 09:25:29,169] - [on_prem_rest_client:141] INFO - node 127.0.0.1 existing buckets : []
[2022-09-02 09:25:29,170] - [bucket_helper:153] INFO - deleted bucket : default from 127.0.0.1
[2022-09-02 09:25:29,225] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:25:29,284] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:25:29,346] - [bucket_helper:155] INFO - Could not find any buckets for node 127.0.0.1, nothing to delete
[2022-09-02 09:25:29,347] - [basetestcase:738] INFO - Removing user 'clientuser'...
[2022-09-02 09:25:29,429] - [on_prem_rest_client:1080] ERROR - DELETE http://127.0.0.1:9000/settings/rbac/users/local/clientuser body:  headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:asdasd
[2022-09-02 09:25:29,430] - [basetestcase:742] INFO - b'"User was not found."'
[2022-09-02 09:25:29,457] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:25:29,587] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9000
[2022-09-02 09:25:29,588] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:25:29,614] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9000 is running
[2022-09-02 09:25:29,641] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9001
[2022-09-02 09:25:29,641] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:25:29,668] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9001 is running
[2022-09-02 09:25:29,694] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9002
[2022-09-02 09:25:29,695] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:25:29,721] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9002 is running
[2022-09-02 09:25:29,747] - [cluster_helper:90] INFO - waiting for ns_server @ 127.0.0.1:9003
[2022-09-02 09:25:29,747] - [on_prem_rest_client:45] INFO - -->is_ns_server_running?
[2022-09-02 09:25:29,774] - [cluster_helper:94] INFO - ns_server @ 127.0.0.1:9003 is running
[2022-09-02 09:25:29,774] - [basetestcase:761] INFO - ==============  basetestcase cleanup was finished for test #11 suite_tearDown ==============
[2022-09-02 09:25:29,774] - [basetestcase:773] INFO - closing all ssh connections
[2022-09-02 09:25:29,775] - [basetestcase:778] INFO - closing all memcached connections
ok

----------------------------------------------------------------------
Ran 1 test in 112.788s

OK
Cluster instance shutdown with force
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('Thread', , 'was not properly terminated, will be terminated now.')
Shutting down the thread...
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index', ' pass')
('gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index', ' pass')
('gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests.test_multi_create_drop_index', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_remove_bucket_and_query', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_change_bucket_properties', ' pass')
('gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests.test_delete_create_bucket_and_query', ' pass')
*** TestRunner ***
scripts/start_cluster_and_run_tests.sh: line 91:  7399 Terminated              COUCHBASE_NUM_VBUCKETS=64 python3 ./cluster_run --nodes=$servers_count &> $wd/cluster_run.log  (wd: /opt/build/ns_server)

Testing Failed: Required test failed

FAIL	github.com/couchbase/indexing/secondary/tests/functionaltests	80.636s
FAIL	github.com/couchbase/indexing/secondary/tests/largedatatests	0.137s
panic: Error while initialising cluster: AddNodeAndRebalance: Error during rebalance, err: Rebalance failed
panic: Error in ChangeIndexerSettings: Post "http://:2/internal/settings": dial tcp :2: connect: connection refused
Version: versions-02.09.2022-06.21.cfg
Build Log: make-02.09.2022-06.21.log
Server Log: logs-02.09.2022-06.21.tar.gz

Finished