Building

Started building at 2023/01/06 19:20:29
Using pegged server, 3451 build
Calculating base
Updating mirror
Basing run on 7.5.0-3451 6ee824f3d6
Updating tree for run 06.01.2023-19.20
query is at bd6cae0, changes since last good build: none
gometa is at 4f94ea5, changes since last good build: none
ns_server is at 7fb2ffc, changes since last good build: none
couchstore is at d75745b, changes since last good build: none
forestdb is at acba458, changes since last good build: none
kv_engine is at f0f330f, changes since last good build: none
Switching indexing to unstable
indexing is at a5d8a80, changes since last good build: none
Switching plasma to unstable
plasma is at 353bd02, changes since last good build: 
fatal: Invalid revision range 46f5be1bce014306817c10fbf78dcc3510ca57ae..HEAD
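The `fatal: Invalid revision range` above is git refusing to diff against a commit that no longer exists in the mirror, which typically happens when the unstable branch was rebased after the last good build recorded its SHA. A minimal sketch of a guard, assuming a helper like the following (`changes_since` and its "none" fallback are illustrative, not the actual build script):

```python
import subprocess

def changes_since(repo, last_good):
    """Return a one-line log of commits in last_good..HEAD, or "none"
    when the last-good commit has been rebased away (the
    "fatal: Invalid revision range" case)."""
    try:
        # Verify the commit still exists before building a range.
        subprocess.run(
            ["git", "-C", repo, "cat-file", "-e", f"{last_good}^{{commit}}"],
            check=True, capture_output=True)
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "none"
    out = subprocess.run(
        ["git", "-C", repo, "log", "--oneline", f"{last_good}..HEAD"],
        capture_output=True, text=True)
    return out.stdout.strip() or "none"
```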

Switching nitro to unstable
nitro is at d5f5610, changes since last good build: none
Switching gometa to master
gometa is at 4f94ea5, changes since last good build: none
Switching testrunner to master
Submodule 'gauntlet' (https://github.com/pavithra-mahamani/gauntlet) registered for path 'gauntlet'
Submodule 'java_sdk_client' (https://github.com/couchbaselabs/java_sdk_client) registered for path 'java_sdk_client'
Submodule 'lib/capellaAPI' (https://github.com/couchbaselabs/CapellaRESTAPIs) registered for path 'lib/capellaAPI'
Submodule 'magma_loader/DocLoader' (https://github.com/couchbaselabs/DocLoader.git) registered for path 'magma_loader/DocLoader'
Cloning into '/opt/build/testrunner/gauntlet'...
Cloning into '/opt/build/testrunner/java_sdk_client'...
Cloning into '/opt/build/testrunner/lib/capellaAPI'...
Cloning into '/opt/build/testrunner/magma_loader/DocLoader'...
Submodule path 'gauntlet': checked out '4e2424851a59c6f4b4edfdb7e36fa6a0874d6300'
Submodule path 'java_sdk_client': checked out '5dd338995c16ac2f5b187729e549b28862060732'
Submodule path 'lib/capellaAPI': checked out '2d99cb11c4006ad5738fe0b3062c4a2bb7f6ee7d'
Submodule path 'magma_loader/DocLoader': checked out '997f514b43e9beef241d0b7e61c8171042d22543'
testrunner is at 423e6fd, changes since last good build: none
Pulling in uncommitted change 184353 at refs/changes/53/184353/7
Total 5 (delta 3), reused 4 (delta 3)
[unstable 8e532679] MB-54922: Add PauseUploadToken
 Author: akhilmd 
 Date: Thu Dec 22 22:22:26 2022 +0530
 1 file changed, 93 insertions(+)
Pulling in uncommitted change 184354 at refs/changes/54/184354/7
Total 10 (delta 6), reused 8 (delta 6)
[unstable 3b3af427] MB-54922: Add skeleton for Pauser
 Author: akhilmd 
 Date: Thu Dec 22 22:04:19 2022 +0530
 1 file changed, 34 insertions(+)
Pulling in uncommitted change 184355 at refs/changes/55/184355/7
Total 15 (delta 11), reused 14 (delta 11)
[unstable 5ba5e330] MB-54922: Generate PauseUploadTokens
 Author: akhilmd 
 Date: Thu Dec 22 22:37:31 2022 +0530
 1 file changed, 87 insertions(+), 1 deletion(-)
Pulling in uncommitted change 184356 at refs/changes/56/184356/7
Total 20 (delta 15), reused 19 (delta 15)
[unstable 01459cca] MB-54922: Publish PauseUploadTokens to metaKV
 Author: akhilmd 
 Date: Thu Dec 22 22:46:29 2022 +0530
 1 file changed, 30 insertions(+), 1 deletion(-)
Pulling in uncommitted change 184357 at refs/changes/57/184357/7
Total 26 (delta 18), reused 24 (delta 18)
[unstable 8ac1d821] MB-54923: Add skeleton for pause observer
 Author: akhilmd 
 Date: Thu Dec 22 23:16:34 2022 +0530
 2 files changed, 118 insertions(+), 4 deletions(-)
Pulling in uncommitted change 184363 at refs/changes/63/184363/5
Total 31 (delta 23), reused 30 (delta 23)
[unstable ad75fab6] MB-54923: Add pause upload token in-mem bookkeeping
 Author: akhilmd 
 Date: Fri Dec 23 14:36:24 2022 +0530
 1 file changed, 79 insertions(+)
Pulling in uncommitted change 184364 at refs/changes/64/184364/5
Total 36 (delta 27), reused 35 (delta 27)
[unstable 9429b61b] MB-54923: Implement master pause state handler
 Author: akhilmd 
 Date: Fri Dec 23 15:10:07 2022 +0530
 1 file changed, 101 insertions(+), 5 deletions(-)
Pulling in uncommitted change 184365 at refs/changes/65/184365/6
Total 41 (delta 30), reused 39 (delta 30)
[unstable 45d14119] MB-54923: Implement follower pause state handler
 Author: akhilmd 
 Date: Fri Dec 23 16:04:15 2022 +0530
 1 file changed, 69 insertions(+), 1 deletion(-)
Pulling in uncommitted change 184366 at refs/changes/66/184366/6
Total 46 (delta 35), reused 46 (delta 35)
[unstable 249bd47e] MB-54923: Allow pause master to work in InProgess
 Author: akhilmd 
 Date: Fri Dec 23 18:10:16 2022 +0530
 1 file changed, 5 insertions(+)
Pulling in uncommitted change 184458 at refs/changes/58/184458/3
Total 6 (delta 5), reused 6 (delta 5)
[unstable c5b1200e] skip pause resume tests
 Author: Dhruvil Shah 
 Date: Wed Jan 4 15:32:59 2023 +0530
 1 file changed, 2 insertions(+)
Pulling in uncommitted change 183988 at refs/changes/88/183988/9
Total 4 (delta 1), reused 2 (delta 1)
[unstable 3058325] MB-54418: Add Copier Upload/Download Bytes
 Author: Saptarshi Sen 
 Date: Mon Dec 12 13:07:57 2022 -0800
 2 files changed, 221 insertions(+), 29 deletions(-)
Pulling in uncommitted change 184358 at refs/changes/58/184358/5
Total 8 (delta 5), reused 7 (delta 5)
[unstable 2a09a21] MB-54418: Add/Update Copier RestoreFile
 Author: Saptarshi Sen 
 Date: Thu Dec 22 18:51:39 2022 -0800
 2 files changed, 244 insertions(+), 31 deletions(-)
Pulling in uncommitted change 184359 at refs/changes/59/184359/5
Total 29 (delta 15), reused 20 (delta 15)
[unstable e19ce80]  MB-54418: Use KeyPrefix in copier cleanup
 Author: Saptarshi Sen 
 Date: Thu Dec 22 12:42:32 2022 -0800
 2 files changed, 9 insertions(+), 6 deletions(-)
Pulling in uncommitted change 184388 at refs/changes/88/184388/4
[unstable 088a02d] MB-54418: Add context to Copier APIs
 Author: Saptarshi Sen 
 Date: Mon Dec 26 16:21:48 2022 -0800
 7 files changed, 208 insertions(+), 193 deletions(-)
Pulling in uncommitted change 184389 at refs/changes/89/184389/4
[unstable 59fc392] MB-54418: Export Copier Path encoding
 Author: Saptarshi Sen 
 Date: Thu Dec 22 18:35:38 2022 -0800
 6 files changed, 69 insertions(+), 8 deletions(-)
Building community edition
Building cmakefiles and deps [CE]
Building main product [CE]
Build CE finished
BUILD_ENTERPRISE empty. Building enterprise edition
Building Enterprise Edition
Building cmakefiles and deps [EE]
Building main product [EE]
Build EE finished

Testing

Started testing at 2023/01/06 20:16:15
Testing mode: sanity,unit,functional,serverless,integration
Using storage type: memdb
Setting ulimit to 200000

Simple Test

Jan 06 20:17:33     .format(failure_dict))
Jan 06 20:17:33 FAILED (failures=1)
Jan 06 20:21:53 rebalance_in_with_ops (rebalance.rebalancein.RebalanceInTests) ... ok
Jan 06 20:22:42 do_warmup_100k (memcapable.WarmUpMemcachedTest) ... ok
Jan 06 20:24:16 test_view_ops (view.createdeleteview.CreateDeleteViewTests) ... ok
Jan 06 20:25:08 b" 'stop_on_failure': 'True'}"
Jan 06 20:25:08 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops,nodes_in=3,replicas=1,items=50000,get-logs-cluster-run=True,doc_ops=create;update;delete'
Jan 06 20:25:08 b"{'nodes_in': '3', 'replicas': '1', 'items': '50000', 'get-logs-cluster-run': 'True', 'doc_ops': 'create;update;delete', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_1'}"
Jan 06 20:25:08 b'-->result: '
Jan 06 20:25:08 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 0 , fail 1'
Jan 06 20:25:08 b'failures so far...'
Jan 06 20:25:08 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops,nodes_in=3,bucket_type=ephemeral,replicas=1,items=50000,get-logs-cluster-run=True,doc_ops=create;update;delete'
Jan 06 20:25:08 b"{'nodes_in': '3', 'bucket_type': 'ephemeral', 'replicas': '1', 'items': '50000', 'get-logs-cluster-run': 'True', 'doc_ops': 'create;update;delete', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 2, 'total_testcases': 8, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_2'}"
Jan 06 20:25:08 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:25:08 b'failures so far...'
Jan 06 20:25:08 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t memcapable.WarmUpMemcachedTest.do_warmup_100k,get-logs-cluster-run=True'
Jan 06 20:25:08 b"{'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 3, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_3'}"
Jan 06 20:25:08 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:25:08 b'failures so far...'
Jan 06 20:25:08 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Jan 06 20:25:08 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.createdeleteview.CreateDeleteViewTests.test_view_ops,ddoc_ops=create,test_with_view=True,num_ddocs=1,num_views_per_ddoc=10,items=1000,skip_cleanup=False,get-logs-cluster-run=True'
Jan 06 20:25:08 b"{'ddoc_ops': 'create', 'test_with_view': 'True', 'num_ddocs': '1', 'num_views_per_ddoc': '10', 'items': '1000', 'skip_cleanup': 'False', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 4, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_4'}"
Jan 06 20:25:08 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:25:08 b'failures so far...'
Jan 06 20:25:08 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Jan 06 20:25:08 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Jan 06 20:25:08 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in,num_nodes_to_add=1,skip_rebalance=true,docs-per-day=1,timeout=1200,get-logs-cluster-run=True'
Jan 06 20:35:18 b"{'num_nodes_to_add': '1', 'skip_rebalance': 'true', 'docs-per-day': '1', 'timeout': '1200', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 5, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_5'}"
Jan 06 20:35:18 test_employee_dataset_startkey_endkey_queries_rebalance_in (view.viewquerytests.ViewQueryTests) ... ok
Jan 06 20:36:06 test_simple_dataset_stale_queries_data_modification (view.viewquerytests.ViewQueryTests) ... ok
Jan 06 20:39:56 load_with_ops (xdcr.uniXDCR.unidirectional) ... ok
Jan 06 20:43:54 load_with_failover (xdcr.uniXDCR.unidirectional) ... ok
Jan 06 20:46:39 suite_tearDown (xdcr.uniXDCR.unidirectional) ... ok
Jan 06 20:46:39 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:46:39 b'failures so far...'
Jan 06 20:46:39 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 1 , fail 0'
Jan 06 20:46:39 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification,num-docs=1000,skip_rebalance=true,timeout=1200,get-logs-cluster-run=True'
Jan 06 20:46:39 b"{'num-docs': '1000', 'skip_rebalance': 'true', 'timeout': '1200', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 6, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_6'}"
Jan 06 20:46:39 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:46:39 b'failures so far...'
Jan 06 20:46:39 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Jan 06 20:46:39 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_ops,replicas=1,items=10000,value_size=128,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,get-logs-cluster-run=True'
Jan 06 20:46:39 b"{'replicas': '1', 'items': '10000', 'value_size': '128', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 7, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_7'}"
Jan 06 20:46:39 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:46:39 b'failures so far...'
Jan 06 20:46:39 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Jan 06 20:46:39 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 1 , fail 0'
Jan 06 20:46:39 b'./testrunner -i b/resources/dev-4-nodes-xdcr.ini -p makefile=True,stop_on_failure=True,log_level=CRITICAL -t xdcr.uniXDCR.unidirectional.load_with_failover,replicas=1,items=10000,ctopology=chain,rdirection=unidirection,doc-ops=update-delete,failover=source,get-logs-cluster-run=True'
Jan 06 20:46:39 b"{'replicas': '1', 'items': '10000', 'ctopology': 'chain', 'rdirection': 'unidirection', 'doc-ops': 'update-delete', 'failover': 'source', 'get-logs-cluster-run': 'True', 'ini': 'b/resources/dev-4-nodes-xdcr.ini', 'cluster_name': 'dev-4-nodes-xdcr', 'spec': 'simple', 'conf_file': 'conf/simple.conf', 'makefile': 'True', 'stop_on_failure': 'True', 'log_level': 'CRITICAL', 'num_nodes': 4, 'case_number': 8, 'total_testcases': 8, 'last_case_fail': 'False', 'teardown_run': 'True', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-06_20-16-39/test_8'}"
Jan 06 20:46:39 b'summary so far suite rebalance.rebalancein.RebalanceInTests , pass 1 , fail 1'
Jan 06 20:46:39 b'failures so far...'
Jan 06 20:46:39 b'summary so far suite memcapable.WarmUpMemcachedTest , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.createdeleteview.CreateDeleteViewTests , pass 1 , fail 0'
Jan 06 20:46:39 b'summary so far suite view.viewquerytests.ViewQueryTests , pass 2 , fail 0'
Jan 06 20:46:39 b'summary so far suite xdcr.uniXDCR.unidirectional , pass 2 , fail 0'
Jan 06 20:46:39 b'Run after suite setup for xdcr.uniXDCR.unidirectional.load_with_failover'
Jan 06 20:46:40 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' fail ')"
Jan 06 20:46:40 b"('rebalance.rebalancein.RebalanceInTests.rebalance_in_with_ops', ' pass')"
Jan 06 20:46:40 b"('memcapable.WarmUpMemcachedTest.do_warmup_100k', ' pass')"
Jan 06 20:46:40 b"('view.createdeleteview.CreateDeleteViewTests.test_view_ops', ' pass')"
Jan 06 20:46:40 b"('view.viewquerytests.ViewQueryTests.test_employee_dataset_startkey_endkey_queries_rebalance_in', ' pass')"
Jan 06 20:46:40 b"('view.viewquerytests.ViewQueryTests.test_simple_dataset_stale_queries_data_modification', ' pass')"
Jan 06 20:46:40 b"('xdcr.uniXDCR.unidirectional.load_with_ops', ' pass')"
Jan 06 20:46:40 b"('xdcr.uniXDCR.unidirectional.load_with_failover', ' pass')"
Jan 06 20:46:41 Makefile:30: recipe for target 'simple-test' failed

Unit tests

=== RUN   TestMerger
--- PASS: TestMerger (0.01s)
=== RUN   TestInsert
--- PASS: TestInsert (0.00s)
=== RUN   TestInsertPerf
16000000 items took 27.558532243s -> 580582.4439022539 items/s conflicts 33
--- PASS: TestInsertPerf (27.56s)
=== RUN   TestGetPerf
16000000 items took 10.153076011s -> 1.5758771019408652e+06 items/s
--- PASS: TestGetPerf (10.75s)
=== RUN   TestGetRangeSplitItems
{
"node_count":             1000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3351,
"memory_used":            37360992,
"node_allocs":            1000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 749144,
"level1": 187776,
"level2": 47222,
"level3": 11903,
"level4": 2974,
"level5": 735,
"level6": 182,
"level7": 48,
"level8": 12,
"level9": 2,
"level10": 2,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Split range keys [35911 108875 357987 462344 598654 671258 781160]
No of items in each range [35911 72964 249112 104357 136310 72604 109902 218840]
--- PASS: TestGetRangeSplitItems (1.17s)
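The split keys above partition the sorted key space into 8 ranges, and the per-range counts sum back to the node count (35911 + 72964 + ... + 218840 = 1000000); with items 0..999999 the cumulative counts line up exactly with the split keys. The counting can be sketched as below, assuming a sorted item list; `range_counts` is illustrative, not the skiplist's actual API:

```python
from bisect import bisect_left

def range_counts(sorted_items, split_keys):
    """Count items in each of the len(split_keys)+1 ranges delimited
    by split_keys, as in the "No of items in each range" line above."""
    edges = [0] + [bisect_left(sorted_items, k) for k in split_keys] + [len(sorted_items)]
    return [hi - lo for lo, hi in zip(edges, edges[1:])]
```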
=== RUN   TestBuilder
{
"node_count":             50000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3333,
"memory_used":            1866650912,
"node_allocs":            50000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 37500395,
"level1": 9374070,
"level2": 2344403,
"level3": 586130,
"level4": 146617,
"level5": 36338,
"level6": 9033,
"level7": 2299,
"level8": 539,
"level9": 124,
"level10": 38,
"level11": 9,
"level12": 5,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Took 7.08097153s to build 50000000 items, 7.061178e+06 items/sec
Took 464.262335ms to iterate 50000000 items
--- PASS: TestBuilder (7.55s)
=== RUN   TestNodeDCAS
--- PASS: TestNodeDCAS (0.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/memdb/skiplist	47.292s
=== RUN   TestInsert
Set IO Concurrency: 716
--- PASS: TestInsert (0.01s)
=== RUN   TestInsertPerf
20000000 items took 53.634626761s -> 372893.43112466374 items/s snapshots_created 210 live_snapshots 1
--- PASS: TestInsertPerf (68.28s)
=== RUN   TestInsertDuplicates
--- PASS: TestInsertDuplicates (0.01s)
=== RUN   TestGetPerf
16000000 items took 22.797534752s -> 701830.2713014326 items/s
--- PASS: TestGetPerf (24.50s)
=== RUN   TestLoadStoreDisk
Inserting 1000000 items took 1.654049581s
{
"node_count":             1000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       21,
"next_pointers_per_node": 1.3323,
"memory_used":            57316448,
"node_allocs":            1000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 750462,
"level1": 187397,
"level2": 46711,
"level3": 11552,
"level4": 2921,
"level5": 718,
"level6": 171,
"level7": 52,
"level8": 8,
"level9": 5,
"level10": 3,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
snap count = 1000000
2023-01-06T20:49:24.952+05:30 [Info] MemDB::StoreToDisk: Done dir [db.dump] disk snapshot - count per shard [[58428 45069 21471 95017 47703 30804 101619 37396 26210 84956 74434 70407 38318 87467 82492 98209]] total count [1000000] snap count [1000048] size per shard [[467424 360552 171768 760136 381624 246432 812952 299168 209680 679648 595472 563256 306544 699736 659936 785672]] total size [8000000] memoryInUse [57316448]
Storing to disk took 286.241786ms
Loading from disk took 651.822432ms
{
"node_count":             1000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3336,
"memory_used":            57336960,
"node_allocs":            1000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 749678,
"level1": 187793,
"level2": 47012,
"level3": 11630,
"level4": 2901,
"level5": 740,
"level6": 190,
"level7": 43,
"level8": 9,
"level9": 4,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
--- PASS: TestLoadStoreDisk (4.02s)
=== RUN   TestConcurrentLoadCloseFragmentation
Done Inserting 2000000 items
frag = NaN%
2023-01-06T20:49:31.220+05:30 [Info] MemDB::StoreToDisk: Done dir [db.dump] disk snapshot - count per shard [[9042 178510 41534 170195 34708 21631 138611 123299 215000 170015 141198 102898 104957 236292 137849 174261]] total count [2000000] snap count [2000000] size per shard [[72336 1428080 332272 1361560 277664 173048 1108888 986392 1720000 1360120 1129584 823184 839656 1890336 1102792 1394088]] total size [16000000] memoryInUse [114675600]
Done Storing to disk
frag = NaN%
Done Closing
Done Loading from disk
frag = NaN%
--- PASS: TestConcurrentLoadCloseFragmentation (6.44s)
=== RUN   TestConcurrentLoadCloseFragmentationSmall
Done Inserting 10 items
frag = NaN%
2023-01-06T20:49:33.341+05:30 [Info] MemDB::StoreToDisk: Done dir [db.dump] disk snapshot - count per shard [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] total count [0] snap count [0] size per shard [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] total size [0] memoryInUse [0]
Done Storing to disk
frag = NaN%
Done Closing
Done Loading from disk
frag = NaN%
--- PASS: TestConcurrentLoadCloseFragmentationSmall (0.47s)
=== RUN   TestConcurrentLoadCloseFragmentationEmpty
Done Inserting 0 items
frag = NaN%
2023-01-06T20:49:33.807+05:30 [Info] MemDB::StoreToDisk: Done dir [db.dump] disk snapshot - count per shard [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] total count [0] snap count [0] size per shard [[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]] total size [0] memoryInUse [0]
Done Storing to disk
frag = NaN%
Done Closing
Done Loading from disk
frag = NaN%
--- PASS: TestConcurrentLoadCloseFragmentationEmpty (0.10s)
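The `frag = NaN%` lines above come from computing a fragmentation percentage over an empty store: 0/0 in floating point yields NaN rather than an error. A guarded version of the metric (the exact formula the test uses is an assumption here):

```python
def fragmentation_pct(memory_used, data_size):
    """Percent of allocated memory not holding live data.
    Returns 0.0 for an empty store instead of the NaN that a
    bare 0/0 division would produce."""
    if memory_used == 0:
        return 0.0
    return 100.0 * (memory_used - data_size) / memory_used
```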
=== RUN   TestConcurrentCloseSingleNode
A
--- PASS: TestConcurrentCloseSingleNode (0.00s)
=== RUN   TestStoreDiskShutdown
Inserting 1000000 items took 1.532848093s
{
"node_count":             1000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       17,
"next_pointers_per_node": 1.3334,
"memory_used":            57333712,
"node_allocs":            1000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 749813,
"level1": 187803,
"level2": 46835,
"level3": 11661,
"level4": 2887,
"level5": 737,
"level6": 196,
"level7": 55,
"level8": 11,
"level9": 1,
"level10": 1,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
snap count = 1000000
Prepare returns error:  MemDB instance has been shutdown
--- PASS: TestStoreDiskShutdown (2.36s)
=== RUN   TestDelete
{
"node_count":             10,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.5000,
"memory_used":            620,
"node_allocs":            10,
"node_frees":             0,
"level_node_distribution":{
"level0": 6,
"level1": 3,
"level2": 1,
"level3": 0,
"level4": 0,
"level5": 0,
"level6": 0,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
{
"node_count":             10,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.1000,
"memory_used":            556,
"node_allocs":            20,
"node_frees":             10,
"level_node_distribution":{
"level0": 9,
"level1": 1,
"level2": 0,
"level3": 0,
"level4": 0,
"level5": 0,
"level6": 0,
"level7": 0,
"level8": 0,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
--- PASS: TestDelete (1.00s)
=== RUN   TestGCPerf
final_node_count = 16000, average_live_node_count = 68071, wait_time_for_collection = 23ms
--- PASS: TestGCPerf (36.64s)
=== RUN   TestMemoryInUse
ItemsCount: 5000, MemoryInUse: 295648, NodesCount: 5000
ItemsCount: 0, MemoryInUse: 64, NodesCount: 0
--- PASS: TestMemoryInUse (1.02s)
=== RUN   TestFullScan
Inserting 1600000 items took 2.520722921s
{
"node_count":             1599999,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       14,
"next_pointers_per_node": 1.3336,
"memory_used":            91739552,
"node_allocs":            1600000,
"node_frees":             0,
"level_node_distribution":{
"level0": 1199452,
"level1": 300585,
"level2": 75122,
"level3": 18586,
"level4": 4673,
"level5": 1170,
"level6": 310,
"level7": 79,
"level8": 18,
"level9": 4,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Full iteration of 1600000 items took 282.734207ms
--- PASS: TestFullScan (4.01s)
=== RUN   TestVisitor
{
"node_count":             1000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3327,
"memory_used":            57322864,
"node_allocs":            1000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 750465,
"level1": 187142,
"level2": 46763,
"level3": 11741,
"level4": 2950,
"level5": 716,
"level6": 173,
"level7": 37,
"level8": 8,
"level9": 3,
"level10": 2,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Took 42.007056ms to iterate 1000000 items, 2.3805524e+07 items/s
shard - 0 count = 8279, range: 0-8278
shard - 1 count = 14971, range: 8279-23249
shard - 2 count = 11227, range: 23250-34476
shard - 3 count = 22992, range: 34477-57468
shard - 4 count = 8480, range: 57469-65948
shard - 5 count = 50361, range: 65949-116309
shard - 6 count = 95601, range: 116310-211910
shard - 7 count = 74860, range: 211911-286770
shard - 8 count = 54301, range: 286771-341071
shard - 9 count = 33106, range: 341072-374177
shard - 10 count = 16145, range: 374178-390322
shard - 11 count = 16162, range: 390323-406484
shard - 12 count = 47779, range: 406485-454263
shard - 13 count = 26439, range: 454264-480702
shard - 14 count = 14602, range: 480703-495304
shard - 15 count = 45450, range: 495305-540754
shard - 16 count = 84341, range: 540755-625095
shard - 17 count = 13675, range: 625096-638770
shard - 18 count = 73093, range: 638771-711863
shard - 19 count = 25658, range: 711864-737521
shard - 20 count = 46697, range: 737522-784218
shard - 21 count = 18950, range: 784219-803168
shard - 22 count = 86467, range: 803169-889635
shard - 23 count = 26732, range: 889636-916367
shard - 24 count = 76986, range: 916368-993353
shard - 25 count = 6646, range: 993354-999999
shard - 26 count = 0, range: 0-0
shard - 27 count = 0, range: 0-0
shard - 28 count = 0, range: 0-0
shard - 29 count = 0, range: 0-0
shard - 30 count = 0, range: 0-0
shard - 31 count = 0, range: 0-0
--- PASS: TestVisitor (1.45s)
=== RUN   TestVisitorError
--- PASS: TestVisitorError (0.50s)
=== RUN   TestLoadDeltaStoreDisk
snap count = 1000000
2023-01-06T20:50:24.966+05:30 [Info] MemDB::StoreToDisk: Done dir [db.dump] disk snapshot - count per shard [[0 0 0 0 0 0 199748 82832 122198 113693 70806 14459 142701 126472 64591 37432]] total count [974932] snap count [1000000] size per shard [[0 0 0 0 0 0 1597984 662656 977584 909544 566448 115672 1141608 1011776 516728 299456]] total size [7799456] memoryInUse [128893116]
Storing to disk took 15.640644105s
Loading from disk took 903.890149ms
{
"node_count":             1000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       0,
"next_pointers_per_node": 1.3349,
"memory_used":            57358592,
"node_allocs":            1000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 748655,
"level1": 188514,
"level2": 47196,
"level3": 11761,
"level4": 2943,
"level5": 709,
"level6": 170,
"level7": 39,
"level8": 6,
"level9": 5,
"level10": 2,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Restored 25068
RestoredFailed 549156
--- PASS: TestLoadDeltaStoreDisk (18.66s)
=== RUN   TestExecuteConcurrGCWorkers
--- PASS: TestExecuteConcurrGCWorkers (0.67s)
=== RUN   TestCloseWithActiveIterators
--- PASS: TestCloseWithActiveIterators (12.31s)
=== RUN   TestDiskCorruption
Inserting 100000 items took 97.343687ms
{
"node_count":             100811,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       5,
"next_pointers_per_node": 1.3291,
"memory_used":            5730784,
"node_allocs":            99999,
"node_frees":             0,
"level_node_distribution":{
"level0": 75748,
"level1": 18937,
"level2": 4615,
"level3": 1154,
"level4": 273,
"level5": 56,
"level6": 24,
"level7": 3,
"level8": 1,
"level9": 0,
"level10": 0,
"level11": 0,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
2023-01-06T20:50:52.668+05:30 [Info] MemDB::StoreToDisk: Done dir [db.dump] disk snapshot - count per shard [[120 13486 1870 1374 2251 6400 10274 1128 5337 7923 8213 8195 8274 14005 11150 0]] total count [100000] snap count [100000] size per shard [[960 107888 14960 10992 18008 51200 82192 9024 42696 63384 65704 65560 66192 112040 89200 0]] total size [800000] memoryInUse [5730784]
Storing to disk took 84.92099ms
Loading from disk took 432.982091ms
--- PASS: TestDiskCorruption (0.68s)
=== RUN   TestSnapshotStats
--- PASS: TestSnapshotStats (0.00s)
=== RUN   TestInsertDeleteConcurrent
total items in snapshot: 7987858  items deleted: 2012142
{
"node_count":             7987858,
"soft_deletes":           0,
"read_conflicts":         7235,
"insert_conflicts":       105,
"next_pointers_per_node": 1.3334,
"memory_used":            457972984,
"node_allocs":            10000000,
"node_frees":             10000002,
"level_node_distribution":{
"level0": 5991032,
"level1": 1497587,
"level2": 374326,
"level3": 93591,
"level4": 23500,
"level5": 5819,
"level6": 1507,
"level7": 376,
"level8": 97,
"level9": 15,
"level10": 7,
"level11": 1,
"level12": 0,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
allocs: 20000002 frees: 20000002
--- PASS: TestInsertDeleteConcurrent (20.10s)
=== RUN   TestNodeList
--- PASS: TestNodeList (0.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/memdb	203.528s
=== RUN   TestInteger
--- PASS: TestInteger (0.00s)
=== RUN   TestSmallDecimal
--- PASS: TestSmallDecimal (0.00s)
=== RUN   TestLargeDecimal
--- PASS: TestLargeDecimal (0.00s)
=== RUN   TestFloat
--- PASS: TestFloat (0.00s)
=== RUN   TestSuffixCoding
--- PASS: TestSuffixCoding (0.00s)
=== RUN   TestCodecLength
--- PASS: TestCodecLength (0.00s)
=== RUN   TestSpecialString
--- PASS: TestSpecialString (0.00s)
=== RUN   TestCodecNoLength
--- PASS: TestCodecNoLength (0.00s)
=== RUN   TestCodecJSON
--- PASS: TestCodecJSON (0.00s)
=== RUN   TestReference
--- PASS: TestReference (0.00s)
=== RUN   TestN1QLEncode
--- PASS: TestN1QLEncode (0.00s)
=== RUN   TestArrayExplodeJoin
--- PASS: TestArrayExplodeJoin (0.00s)
=== RUN   TestN1QLDecode
--- PASS: TestN1QLDecode (0.00s)
=== RUN   TestN1QLDecode2
--- PASS: TestN1QLDecode2 (0.00s)
=== RUN   TestArrayExplodeJoin2
--- PASS: TestArrayExplodeJoin2 (0.00s)
=== RUN   TestMB28956
--- PASS: TestMB28956 (0.00s)
=== RUN   TestFixEncodedInt
--- PASS: TestFixEncodedInt (0.00s)
=== RUN   TestN1QLDecodeLargeInt64
--- PASS: TestN1QLDecodeLargeInt64 (0.00s)
=== RUN   TestMixedModeFixEncodedInt
TESTING [4111686018427387900, -8223372036854775808, 822337203685477618] 
PASS 
TESTING [0] 
PASS 
TESTING [0.0] 
PASS 
TESTING [0.0000] 
PASS 
TESTING [0.0000000] 
PASS 
TESTING [-0] 
PASS 
TESTING [-0.0] 
PASS 
TESTING [-0.0000] 
PASS 
TESTING [-0.0000000] 
PASS 
TESTING [1] 
PASS 
TESTING [20] 
PASS 
TESTING [3456] 
PASS 
TESTING [7645000] 
PASS 
TESTING [9223372036854775807] 
PASS 
TESTING [9223372036854775806] 
PASS 
TESTING [9223372036854775808] 
PASS 
TESTING [92233720368547758071234000] 
PASS 
TESTING [92233720368547758071234987437653] 
PASS 
TESTING [12300000000000000000000000000000056] 
PASS 
TESTING [12300000000000000000000000000000000] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [12300000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [210690] 
PASS 
TESTING [90000] 
PASS 
TESTING [123000000] 
PASS 
TESTING [3.60e2] 
PASS 
TESTING [36e2] 
PASS 
TESTING [1.9999999999e10] 
PASS 
TESTING [1.99999e10] 
PASS 
TESTING [1.99999e5] 
PASS 
TESTING [0.00000000000012e15] 
PASS 
TESTING [7.64507352e8] 
PASS 
TESTING [9.2233720368547758071234987437653e31] 
PASS 
TESTING [2650e-1] 
PASS 
TESTING [26500e-1] 
PASS 
TESTING [-1] 
PASS 
TESTING [-20] 
PASS 
TESTING [-3456] 
PASS 
TESTING [-7645000] 
PASS 
TESTING [-9223372036854775808] 
PASS 
TESTING [-9223372036854775807] 
PASS 
TESTING [-9223372036854775806] 
PASS 
TESTING [-9223372036854775809] 
PASS 
TESTING [-92233720368547758071234000] 
PASS 
TESTING [-92233720368547758071234987437653] 
PASS 
TESTING [-12300000000000000000000000000000056] 
PASS 
TESTING [-12300000000000000000000000000000000] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000056] 
PASS 
TESTING [-123000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000] 
PASS 
TESTING [-210690] 
PASS 
TESTING [-90000] 
PASS 
TESTING [-123000000] 
PASS 
TESTING [-3.60e2] 
PASS 
TESTING [-36e2] 
PASS 
TESTING [-1.9999999999e10] 
PASS 
TESTING [-1.99999e10] 
PASS 
TESTING [-1.99999e5] 
PASS 
TESTING [-0.00000000000012e15] 
PASS 
TESTING [-2650e-1] 
PASS 
TESTING [-26500e-1] 
PASS 
TESTING [0.03] 
PASS 
TESTING [198.60] 
PASS 
TESTING [2000045.178] 
PASS 
TESTING [1.7976931348623157e+308] 
PASS 
TESTING [0.000000000000000000890] 
PASS 
TESTING [257953786.9864236576] 
PASS 
TESTING [257953786.9864236576e8] 
PASS 
TESTING [36.912e3] 
PASS 
TESTING [2761.67e0] 
PASS 
TESTING [2761.67e00] 
PASS 
TESTING [2761.67e000] 
PASS 
TESTING [7676546.67e-3] 
PASS 
TESTING [-0.03] 
PASS 
TESTING [-198.60] 
PASS 
TESTING [-2000045.178] 
PASS 
TESTING [-1.7976931348623157e+308] 
PASS 
TESTING [-0.000000000000000000890] 
PASS 
TESTING [-257953786.9864236576] 
PASS 
TESTING [-257953786.9864236576e8] 
PASS 
TESTING [-36.912e3] 
PASS 
TESTING [-2761.67e0] 
PASS 
TESTING [-2761.67e00] 
PASS 
TESTING [-2761.67e000] 
PASS 
TESTING [-7676546.67e-3] 
PASS 
--- PASS: TestMixedModeFixEncodedInt (0.01s)
=== RUN   TestCodecDesc
--- PASS: TestCodecDesc (0.00s)
=== RUN   TestCodecDescPropLen
--- PASS: TestCodecDescPropLen (0.00s)
=== RUN   TestCodecDescSplChar
--- PASS: TestCodecDescSplChar (0.00s)
PASS
ok  	github.com/couchbase/indexing/secondary/collatejson	0.035s
Initializing write barrier = 8000
=== RUN   TestForestDBIterator
2023-01-06T20:51:19.435+05:30 [INFO][FDB] Forestdb blockcache size 134217728 initialized in 5023 us

2023-01-06T20:51:19.436+05:30 [INFO][FDB] Forestdb opened database file test
2023-01-06T20:51:19.443+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIterator (0.01s)
=== RUN   TestForestDBIteratorSeek
2023-01-06T20:51:19.444+05:30 [INFO][FDB] Forestdb opened database file test
2023-01-06T20:51:19.453+05:30 [INFO][FDB] Forestdb closed database file test
--- PASS: TestForestDBIteratorSeek (0.01s)
=== RUN   TestPrimaryIndexEntry
--- PASS: TestPrimaryIndexEntry (0.00s)
=== RUN   TestSecondaryIndexEntry
--- PASS: TestSecondaryIndexEntry (0.00s)
=== RUN   TestPrimaryIndexEntryMatch
--- PASS: TestPrimaryIndexEntryMatch (0.00s)
=== RUN   TestSecondaryIndexEntryMatch
--- PASS: TestSecondaryIndexEntryMatch (0.00s)
=== RUN   TestLongDocIdEntry
--- PASS: TestLongDocIdEntry (0.00s)
=== RUN   TestMemDBInsertionPerf
Maximum number of file descriptors = 200000
Set IO Concurrency: 7200
Initial build: 10000000 items took 1m56.013653215s -> 86196.75118296377 items/s
Incr build: 10000000 items took 1m56.296567358s -> 85987.06072911534 items/s
Main Index: {
"node_count":             18000000,
"soft_deletes":           0,
"read_conflicts":         0,
"insert_conflicts":       5,
"next_pointers_per_node": 1.3333,
"memory_used":            1695887676,
"node_allocs":            18000000,
"node_frees":             0,
"level_node_distribution":{
"level0": 13500190,
"level1": 3375550,
"level2": 843073,
"level3": 210794,
"level4": 52735,
"level5": 13347,
"level6": 3246,
"level7": 782,
"level8": 220,
"level9": 42,
"level10": 17,
"level11": 3,
"level12": 1,
"level13": 0,
"level14": 0,
"level15": 0,
"level16": 0,
"level17": 0,
"level18": 0,
"level19": 0,
"level20": 0,
"level21": 0,
"level22": 0,
"level23": 0,
"level24": 0,
"level25": 0,
"level26": 0,
"level27": 0,
"level28": 0,
"level29": 0,
"level30": 0,
"level31": 0,
"level32": 0
}
}
Back Index 0 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 1 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 2 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 3 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 4 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 5 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 6 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 7 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 8 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 9 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 10 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 11 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 12 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 13 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 14 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
Back Index 15 : {
"FastHTCount":  625000,
"SlowHTCount":  0,
"Conflicts":   0,
"MemoryInUse": 26250000
}
--- PASS: TestMemDBInsertionPerf (232.33s)
=== RUN   TestBasicsA
--- PASS: TestBasicsA (0.00s)
=== RUN   TestSizeA
--- PASS: TestSizeA (0.00s)
=== RUN   TestSizeWithFreelistA
--- PASS: TestSizeWithFreelistA (0.00s)
=== RUN   TestDequeueUptoSeqnoA
--- PASS: TestDequeueUptoSeqnoA (0.10s)
=== RUN   TestDequeueA
--- PASS: TestDequeueA (1.21s)
=== RUN   TestMultipleVbucketsA
--- PASS: TestMultipleVbucketsA (0.00s)
=== RUN   TestDequeueUptoFreelistA
--- PASS: TestDequeueUptoFreelistA (0.00s)
=== RUN   TestDequeueUptoFreelistMultVbA
--- PASS: TestDequeueUptoFreelistMultVbA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA
--- PASS: TestConcurrentEnqueueDequeueA (0.00s)
=== RUN   TestConcurrentEnqueueDequeueA1
--- PASS: TestConcurrentEnqueueDequeueA1 (10.01s)
=== RUN   TestEnqueueAppCh
--- PASS: TestEnqueueAppCh (2.00s)
=== RUN   TestDequeueN
--- PASS: TestDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN
--- PASS: TestConcurrentEnqueueDequeueN (0.00s)
=== RUN   TestConcurrentEnqueueDequeueN1
--- PASS: TestConcurrentEnqueueDequeueN1 (10.01s)
PASS
ok  	github.com/couchbase/indexing/secondary/indexer	256.452s
=== RUN   TestConnPoolBasicSanity
2023-01-06T20:55:39.230+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 3 overflow 6 low WM 3 relConn batch size 1 ...
2023-01-06T20:55:39.440+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T20:55:40.231+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnPoolBasicSanity (5.00s)
=== RUN   TestConnRelease
2023-01-06T20:55:44.233+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Waiting for connections to get released
Waiting for more connections to get released
Waiting for further more connections to get released
2023-01-06T20:56:23.988+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T20:56:24.252+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestConnRelease (43.76s)
=== RUN   TestLongevity
2023-01-06T20:56:27.990+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Releasing 1 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 4 conns.
Getting 1 conns.
Releasing 2 conns.
Getting 4 conns.
Releasing 3 conns.
Getting 4 conns.
Releasing 1 conns.
Getting 0 conns.
Releasing 2 conns.
Getting 1 conns.
Releasing 0 conns.
Getting 1 conns.
Releasing 3 conns.
Getting 3 conns.
Releasing 2 conns.
Getting 2 conns.
Releasing 2 conns.
Getting 3 conns.
Releasing 0 conns.
Getting 0 conns.
2023-01-06T20:57:06.572+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T20:57:07.011+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLongevity (42.58s)
=== RUN   TestSustainedHighConns
2023-01-06T20:57:10.573+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 500 overflow 10 low WM 40 relConn batch size 10 ...
Allocating 16 Connections
cp.curActConns = 0
Returning 3 Connections
cp.curActConns = 11
Returning 2 Connections
cp.curActConns = 11
Allocating 6 Connections
Returning 4 Connections
cp.curActConns = 13
Returning 1 Connections
Allocating 12 Connections
cp.curActConns = 22
Returning 1 Connections
cp.curActConns = 23
Allocating 10 Connections
Returning 1 Connections
cp.curActConns = 32
Returning 3 Connections
Allocating 15 Connections
cp.curActConns = 33
Returning 4 Connections
cp.curActConns = 40
Returning 3 Connections
Allocating 8 Connections
cp.curActConns = 44
Returning 2 Connections
cp.curActConns = 43
Allocating 3 Connections
Returning 4 Connections
cp.curActConns = 42
Allocating 9 Connections
Returning 3 Connections
cp.curActConns = 48
Returning 2 Connections
Allocating 21 Connections
cp.curActConns = 55
Returning 4 Connections
cp.curActConns = 63
Returning 4 Connections
Allocating 0 Connections
cp.curActConns = 59
Returning 0 Connections
Allocating 13 Connections
cp.curActConns = 67
Returning 3 Connections
cp.curActConns = 69
Returning 3 Connections
Allocating 5 Connections
cp.curActConns = 71
Returning 1 Connections
Allocating 10 Connections
cp.curActConns = 79
Returning 0 Connections
cp.curActConns = 80
Allocating 6 Connections
Returning 1 Connections
cp.curActConns = 85
Returning 3 Connections
Allocating 11 Connections
cp.curActConns = 91
Returning 2 Connections
cp.curActConns = 91
Allocating 8 Connections
Returning 1 Connections
cp.curActConns = 98
Returning 3 Connections
Allocating 1 Connections
cp.curActConns = 96
Returning 2 Connections
Allocating 18 Connections
cp.curActConns = 103
Returning 2 Connections
cp.curActConns = 110
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 108
Returning 3 Connections
Allocating 21 Connections
cp.curActConns = 115
Returning 0 Connections
cp.curActConns = 126
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 126
Returning 2 Connections
Allocating 8 Connections
cp.curActConns = 127
Returning 1 Connections
cp.curActConns = 131
Returning 4 Connections
Allocating 8 Connections
cp.curActConns = 135
Returning 3 Connections
Allocating 16 Connections
cp.curActConns = 139
Returning 2 Connections
cp.curActConns = 146
Returning 3 Connections
Allocating 11 Connections
cp.curActConns = 151
Returning 1 Connections
cp.curActConns = 153
Returning 2 Connections
Allocating 15 Connections
cp.curActConns = 163
Returning 3 Connections
cp.curActConns = 163
Allocating 2 Connections
Returning 3 Connections
cp.curActConns = 162
Allocating 5 Connections
Returning 3 Connections
cp.curActConns = 164
Allocating 0 Connections
Returning 1 Connections
cp.curActConns = 163
Returning 0 Connections
Allocating 7 Connections
cp.curActConns = 170
Returning 2 Connections
Allocating 10 Connections
cp.curActConns = 175
Returning 0 Connections
cp.curActConns = 178
Returning 0 Connections
Allocating 15 Connections
cp.curActConns = 187
Returning 2 Connections
cp.curActConns = 191
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 191
Returning 1 Connections
Allocating 7 Connections
cp.curActConns = 197
Returning 4 Connections
Allocating 22 Connections
cp.curActConns = 199
Returning 2 Connections
cp.curActConns = 211
Returning 2 Connections
cp.curActConns = 211
Allocating 1 Connections
Returning 4 Connections
cp.curActConns = 208
Allocating 1 Connections
Returning 2 Connections
cp.curActConns = 207
Allocating 1 Connections
Returning 4 Connections
cp.curActConns = 204
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 203
Allocating 3 Connections
Returning 1 Connections
cp.curActConns = 205
Allocating 4 Connections
Returning 1 Connections
cp.curActConns = 208
Returning 2 Connections
Allocating 0 Connections
cp.curActConns = 206
Allocating 0 Connections
Returning 0 Connections
cp.curActConns = 206
Returning 2 Connections
Allocating 4 Connections
cp.curActConns = 208
Returning 3 Connections
Allocating 3 Connections
cp.curActConns = 208
Allocating 4 Connections
Returning 3 Connections
cp.curActConns = 209
Returning 4 Connections
Allocating 2 Connections
cp.curActConns = 207
Allocating 2 Connections
Returning 2 Connections
cp.curActConns = 207
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 210
Allocating 0 Connections
Returning 1 Connections
cp.curActConns = 209
Returning 1 Connections
Allocating 1 Connections
cp.curActConns = 209
Allocating 0 Connections
Returning 1 Connections
cp.curActConns = 208
Returning 4 Connections
Allocating 4 Connections
cp.curActConns = 208
Allocating 0 Connections
Returning 3 Connections
cp.curActConns = 205
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 204
Allocating 2 Connections
Returning 3 Connections
cp.curActConns = 203
Allocating 2 Connections
Returning 1 Connections
cp.curActConns = 204
Returning 3 Connections
Allocating 4 Connections
cp.curActConns = 205
Allocating 2 Connections
Returning 3 Connections
cp.curActConns = 204
Returning 2 Connections
Allocating 3 Connections
cp.curActConns = 205
Allocating 3 Connections
Returning 3 Connections
cp.curActConns = 205
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 204
Returning 0 Connections
Allocating 0 Connections
cp.curActConns = 204
Allocating 4 Connections
Returning 2 Connections
cp.curActConns = 206
Allocating 2 Connections
Returning 0 Connections
cp.curActConns = 208
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 211
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 214
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 218
Returning 0 Connections
Allocating 3 Connections
cp.curActConns = 221
Returning 0 Connections
Allocating 4 Connections
cp.curActConns = 222
Returning 2 Connections
cp.curActConns = 223
Allocating 0 Connections
Returning 0 Connections
cp.curActConns = 223
Allocating 2 Connections
Returning 4 Connections
cp.curActConns = 221
Allocating 0 Connections
Returning 3 Connections
cp.curActConns = 218
Returning 1 Connections
Allocating 0 Connections
cp.curActConns = 217
Allocating 1 Connections
Returning 0 Connections
cp.curActConns = 218
Returning 1 Connections
Allocating 4 Connections
cp.curActConns = 221
Allocating 2 Connections
Returning 1 Connections
cp.curActConns = 222
Returning 2 Connections
Allocating 1 Connections
cp.curActConns = 221
Allocating 0 Connections
Returning 4 Connections
cp.curActConns = 217
Returning 2 Connections
Allocating 2 Connections
cp.curActConns = 217
Returning from startAllocatorRoutine
Returning from startDeallocatorRoutine
2023-01-06T20:58:05.652+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T20:58:06.602+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestSustainedHighConns (59.08s)
=== RUN   TestLowWM
2023-01-06T20:58:09.654+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 20 overflow 5 low WM 10 relConn batch size 2 ...
2023-01-06T20:59:09.670+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2023-01-06T21:00:09.686+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] active conns 0, free conns 10
2023-01-06T21:00:15.149+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T21:00:15.688+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestLowWM (129.50s)
=== RUN   TestTotalConns
2023-01-06T21:00:19.151+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 120 overflow 5 low WM 10 relConn batch size 10 ...
2023-01-06T21:00:33.324+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T21:00:34.159+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestTotalConns (18.17s)
=== RUN   TestUpdateTickRate
2023-01-06T21:00:37.326+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] started poolsize 40 overflow 5 low WM 2 relConn batch size 2 ...
2023-01-06T21:00:58.173+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] ... stopped
2023-01-06T21:00:58.336+05:30 [Info] [Queryport-connpool:127.0.0.1:15151] Stopping releaseConnsRoutine
--- PASS: TestUpdateTickRate (24.85s)
PASS
ok  	github.com/couchbase/indexing/secondary/queryport/client	323.001s
Starting server: attempt 1

Functional tests

2023/01/06 21:03:14 In TestMain()
2023/01/06 21:03:14 otp node fetch error: json: cannot unmarshal string into Go value of type couchbase.Pool
2023/01/06 21:03:14 Initialising services with role: kv,n1ql on node: 127.0.0.1:9000
2023/01/06 21:03:15 Initialising web UI on node: 127.0.0.1:9000
2023/01/06 21:03:15 InitWebCreds, response is: {"newBaseUri":"http://127.0.0.1:9000/"}
2023/01/06 21:03:16 Setting data quota of 1500M and Index quota of 1500M
2023/01/06 21:03:17 Adding node: https://127.0.0.1:19001 with role: kv,index to the cluster
2023/01/06 21:03:31 AddNode: Successfully added node: 127.0.0.1:9001 (role kv,index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 21:03:37 Rebalance progress: 0
2023/01/06 21:03:42 Rebalance progress: 0
2023/01/06 21:03:47 Rebalance progress: 100
2023/01/06 21:03:52 Created bucket default, responseBody: 
2023/01/06 21:03:57 Cluster status: map[127.0.0.1:9001:[index kv] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 21:03:57 Successfully initialised cluster
2023/01/06 21:03:57 Cluster status: map[127.0.0.1:9001:[index kv] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 21:03:57 Changing config key queryport.client.settings.backfillLimit to value 0
2023/01/06 21:03:57 Changing config key queryport.client.log_level to value Warn
2023/01/06 21:03:57 Changing config key indexer.api.enableTestServer to value true
2023/01/06 21:03:57 Changing config key indexer.settings.persisted_snapshot_init_build.moi.interval to value 60000
2023/01/06 21:03:57 Changing config key indexer.settings.persisted_snapshot.moi.interval to value 60000
2023/01/06 21:03:58 Changing config key indexer.settings.log_level to value info
2023/01/06 21:03:58 Changing config key indexer.settings.storage_mode.disable_upgrade to value true
2023/01/06 21:03:58 Using memory_optimized for creating indexes
2023/01/06 21:03:58 Changing config key indexer.settings.storage_mode to value memory_optimized
2023/01/06 21:04:03 Data file exists. Skipping download
2023/01/06 21:04:03 Data file exists. Skipping download
2023/01/06 21:04:04 In DropAllSecondaryIndexes()
2023/01/06 21:04:04 Emptying the default bucket
2023/01/06 21:04:07 Flush Enabled on bucket default, responseBody: 
2023/01/06 21:04:46 Flushed the bucket default, Response body: 
2023/01/06 21:04:46 Create Index On the empty default Bucket()
2023/01/06 21:04:49 Created the secondary index index_eyeColor. Waiting for it become active
2023/01/06 21:04:49 Index is 12189826602627268026 now active
2023/01/06 21:04:49 Populating the default bucket
=== RUN   TestScanAfterBucketPopulate
2023/01/06 21:04:58 In TestScanAfterBucketPopulate()
2023/01/06 21:04:58 Create an index on empty bucket, populate the bucket and Run a scan on the index
2023/01/06 21:04:59 Using n1ql client
2023-01-06T21:04:59.009+05:30 [Info] creating GsiClient for 127.0.0.1:9000
2023/01/06 21:04:59 Expected and Actual scan responses are the same
--- PASS: TestScanAfterBucketPopulate (0.11s)
=== RUN   TestRestartNilSnapshot
2023/01/06 21:04:59 In TestRestartNilSnapshot()
2023/01/06 21:05:03 Created the secondary index idx_age. Waiting for it become active
2023/01/06 21:05:03 Index is 10697514288270459730 now active
2023/01/06 21:05:03 Restarting indexer process ...
2023/01/06 21:05:03 []
2023-01-06T21:05:03.331+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T21:05:03.331+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T21:05:03.337+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T21:05:03.337+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 21:08:23 Using n1ql client
2023-01-06T21:08:23.319+05:30 [Error] transport error between 127.0.0.1:50440->127.0.0.1:9107: write tcp 127.0.0.1:50440->127.0.0.1:9107: write: broken pipe
2023-01-06T21:08:23.319+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 5764037828965832209 request transport failed `write tcp 127.0.0.1:50440->127.0.0.1:9107: write: broken pipe`
2023-01-06T21:08:23.319+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T21:08:23.319+05:30 [Error] metadataClient:PickRandom: Replicas - [5992897510621413391], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 21:08:23 Expected and Actual scan responses are the same
--- PASS: TestRestartNilSnapshot (204.26s)
=== RUN   TestThreeIndexCreates
2023/01/06 21:08:23 In TestThreeIndexCreates()
2023/01/06 21:08:27 Created the secondary index index_balance. Waiting for it become active
2023/01/06 21:08:27 Index is 14551276939385747802 now active
2023/01/06 21:08:27 Create docs mutations
2023/01/06 21:08:27 Using n1ql client
2023/01/06 21:08:27 Expected and Actual scan responses are the same
2023/01/06 21:08:33 Created the secondary index index_email. Waiting for it become active
2023/01/06 21:08:33 Index is 17643497253962348978 now active
2023/01/06 21:08:33 Create docs mutations
2023/01/06 21:08:34 Using n1ql client
2023/01/06 21:08:34 Expected and Actual scan responses are the same
2023/01/06 21:08:40 Created the secondary index index_pin. Waiting for it become active
2023/01/06 21:08:40 Index is 10795397649579180027 now active
2023/01/06 21:08:40 Delete docs mutations
2023/01/06 21:08:40 Using n1ql client
2023/01/06 21:08:40 Expected and Actual scan responses are the same
--- PASS: TestThreeIndexCreates (17.37s)
=== RUN   TestMultipleIndexCreatesDropsWithMutations
2023/01/06 21:08:40 In TestThreeIndexCreates()
2023/01/06 21:08:46 Created the secondary index index_state. Waiting for it become active
2023/01/06 21:08:46 Index is 9164601617027771969 now active
2023/01/06 21:08:46 Create docs mutations
2023/01/06 21:08:47 Using n1ql client
2023/01/06 21:08:47 Expected and Actual scan responses are the same
2023/01/06 21:08:53 Created the secondary index index_registered. Waiting for it become active
2023/01/06 21:08:53 Index is 7847168281098603305 now active
2023/01/06 21:08:53 Create docs mutations
2023/01/06 21:08:53 Using n1ql client
2023/01/06 21:08:53 Expected and Actual scan responses are the same
2023/01/06 21:08:59 Created the secondary index index_gender. Waiting for it become active
2023/01/06 21:08:59 Index is 1069832335400188054 now active
2023/01/06 21:08:59 Create docs mutations
2023/01/06 21:09:00 Using n1ql client
2023/01/06 21:09:00 Expected and Actual scan responses are the same
2023/01/06 21:09:00 Dropping the secondary index index_registered
2023/01/06 21:09:00 Index dropped
2023/01/06 21:09:00 Create docs mutations
2023/01/06 21:09:00 Delete docs mutations
2023/01/06 21:09:00 Using n1ql client
2023/01/06 21:09:00 Expected and Actual scan responses are the same
2023/01/06 21:09:06 Created the secondary index index_longitude. Waiting for it become active
2023/01/06 21:09:06 Index is 8029755138789889423 now active
2023/01/06 21:09:06 Create docs mutations
2023/01/06 21:09:07 Using n1ql client
2023/01/06 21:09:07 Expected and Actual scan responses are the same
--- PASS: TestMultipleIndexCreatesDropsWithMutations (26.39s)
=== RUN   TestCreateDropScan
2023/01/06 21:09:07 In TestCreateDropScan()
2023/01/06 21:09:13 Created the secondary index index_cd. Waiting for it become active
2023/01/06 21:09:13 Index is 17023214466570125265 now active
2023/01/06 21:09:13 Using n1ql client
2023/01/06 21:09:13 Expected and Actual scan responses are the same
2023/01/06 21:09:13 Dropping the secondary index index_cd
2023/01/06 21:09:13 Index dropped
2023/01/06 21:09:13 Using n1ql client
2023/01/06 21:09:13 Scan failed as expected with error: Index Not Found - cause: GSI index index_cd not found.
--- PASS: TestCreateDropScan (6.31s)
=== RUN   TestCreateDropCreate
2023/01/06 21:09:13 In TestCreateDropCreate()
2023/01/06 21:09:19 Created the secondary index index_cdc. Waiting for it become active
2023/01/06 21:09:19 Index is 1836710466774107014 now active
2023/01/06 21:09:19 Using n1ql client
2023/01/06 21:09:19 Expected and Actual scan responses are the same
2023/01/06 21:09:19 Dropping the secondary index index_cdc
2023/01/06 21:09:19 Index dropped
2023/01/06 21:09:19 Using n1ql client
2023/01/06 21:09:19 Scan 2 failed as expected with error: Index Not Found - cause: GSI index index_cdc not found.
2023/01/06 21:09:25 Created the secondary index index_cdc. Waiting for it become active
2023/01/06 21:09:25 Index is 16772065726646301487 now active
2023/01/06 21:09:25 Using n1ql client
2023/01/06 21:09:25 Expected and Actual scan responses are the same
2023/01/06 21:09:25 (Inclusion 1) Lengths of expected and actual scan results are 5035 and 5035. Num of docs in bucket = 10402
2023/01/06 21:09:25 Using n1ql client
2023/01/06 21:09:25 Expected and Actual scan responses are the same
2023/01/06 21:09:25 (Inclusion 3) Lengths of expected and actual scan results are 5035 and 5035. Num of docs in bucket = 10402
--- PASS: TestCreateDropCreate (12.54s)
=== RUN   TestCreate2Drop1Scan2
2023/01/06 21:09:25 In TestCreate2Drop1Scan2()
2023/01/06 21:09:31 Created the secondary index index_i1. Waiting for it become active
2023/01/06 21:09:31 Index is 241750979367326273 now active
2023/01/06 21:09:38 Created the secondary index index_i2. Waiting for it become active
2023/01/06 21:09:38 Index is 9192713496018645628 now active
2023/01/06 21:09:38 Using n1ql client
2023/01/06 21:09:38 Expected and Actual scan responses are the same
2023/01/06 21:09:38 Using n1ql client
2023/01/06 21:09:38 Expected and Actual scan responses are the same
2023/01/06 21:09:38 Dropping the secondary index index_i1
2023/01/06 21:09:38 Index dropped
2023/01/06 21:09:38 Using n1ql client
2023/01/06 21:09:38 Expected and Actual scan responses are the same
--- PASS: TestCreate2Drop1Scan2 (12.49s)
=== RUN   TestIndexNameCaseSensitivity
2023/01/06 21:09:38 In TestIndexNameCaseSensitivity()
2023/01/06 21:09:44 Created the secondary index index_age. Waiting for it to become active
2023/01/06 21:09:44 Index is 8302717505433624523 now active
2023/01/06 21:09:44 Using n1ql client
2023/01/06 21:09:44 Expected and Actual scan responses are the same
2023/01/06 21:09:44 Using n1ql client
2023/01/06 21:09:44 Scan failed as expected with error: Index Not Found - cause: GSI index index_Age not found.
--- PASS: TestIndexNameCaseSensitivity (6.12s)
=== RUN   TestCreateDuplicateIndex
2023/01/06 21:09:44 In TestCreateDuplicateIndex()
2023/01/06 21:09:50 Created the secondary index index_di1. Waiting for it to become active
2023/01/06 21:09:50 Index is 9278851385750526119 now active
2023/01/06 21:09:50 Index found:  index_di1
2023/01/06 21:09:50 Create failed as expected with error: Index index_di1 already exists.
--- PASS: TestCreateDuplicateIndex (6.13s)
=== RUN   TestDropNonExistingIndex
2023/01/06 21:09:50 In TestDropNonExistingIndex()
2023/01/06 21:09:50 Dropping the secondary index 123456
2023/01/06 21:09:50 Index drop failed as expected with error: Index does not exist.
--- PASS: TestDropNonExistingIndex (0.05s)
=== RUN   TestCreateIndexNonExistentBucket
2023/01/06 21:09:50 In TestCreateIndexNonExistentBucket()
2023-01-06T21:09:51.283+05:30 [Error] Encountered error during create index.  Error: [Bucket Not Found] Bucket BlahBucket does not exist or temporarily unavailable for creating new index. Please retry the operation at a later time.
2023-01-06T21:10:02.286+05:30 [Error] Fail to create index: [Bucket Not Found] Bucket BlahBucket does not exist or temporarily unavailable for creating new index. Please retry the operation at a later time.
2023/01/06 21:10:02 Index create failed as expected with error: [Bucket Not Found] Bucket BlahBucket does not exist or temporarily unavailable for creating new index. Please retry the operation at a later time.
--- PASS: TestCreateIndexNonExistentBucket (11.53s)
=== RUN   TestScanWithNoTimeout
2023/01/06 21:10:02 Create an index on an empty bucket, populate the bucket and run a scan on the index
2023/01/06 21:10:02 Changing config key indexer.settings.scan_timeout to value 0
2023/01/06 21:10:02 Using n1ql client
2023/01/06 21:10:02 Expected and Actual scan responses are the same
--- PASS: TestScanWithNoTimeout (0.41s)
=== RUN   TestIndexingOnBinaryBucketMeta
2023/01/06 21:10:02 In TestIndexingOnBinaryBucketMeta()
2023/01/06 21:10:02 	 1. Populate a bucket with binary docs and create indexes on the `id`, `cas` and `expiration` fields of Metadata
2023/01/06 21:10:02 	 2. Validate the test by comparing the items_count of indexes and the number of docs in the bucket for each of the fields
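The validation in step 2 above reduces to a per-index equality check between the indexer's `items_count` stat and the bucket's document count. A minimal schematic sketch of that check (illustrative stand-ins only; the real test reads the stat from a live Couchbase cluster, and the dict below is not the testrunner's data structure):

```python
def validate_meta_indexes(num_docs, items_count_by_index):
    """Each index on a metadata field should cover every binary doc."""
    for index_name, items_count in items_count_by_index.items():
        assert items_count == num_docs, (
            f"{index_name}: items_count {items_count} != doc count {num_docs}")
    return True

# Mirrors the run below: 10 binary docs, items_count stat is 10 per index.
stats = {
    "index_binary_meta_id": 10,
    "index_binary_meta_cas": 10,
    "index_binary_meta_expiration": 10,
}
validate_meta_indexes(10, stats)
```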
2023/01/06 21:10:05 Modified parameters of bucket default, responseBody: 
2023/01/06 21:10:05 Created bucket binaryBucket, responseBody: 
2023/01/06 21:10:24 Created the secondary index index_binary_meta_id. Waiting for it to become active
2023/01/06 21:10:24 Index is 16597639902651810259 now active
2023/01/06 21:10:29 items_count stat is 10 for index index_binary_meta_id
2023/01/06 21:10:29 Dropping the secondary index index_binary_meta_id
2023/01/06 21:10:29 Index dropped
2023/01/06 21:10:32 Created the secondary index index_binary_meta_cas. Waiting for it to become active
2023/01/06 21:10:32 Index is 10030079630555319037 now active
2023/01/06 21:10:37 items_count stat is 10 for index index_binary_meta_cas
2023/01/06 21:10:37 Dropping the secondary index index_binary_meta_cas
2023/01/06 21:10:37 Index dropped
2023/01/06 21:10:41 Created the secondary index index_binary_meta_expiration. Waiting for it to become active
2023/01/06 21:10:41 Index is 817250850756289813 now active
2023/01/06 21:10:46 items_count stat is 10 for index index_binary_meta_expiration
2023/01/06 21:10:46 Dropping the secondary index index_binary_meta_expiration
2023/01/06 21:10:46 Index dropped
2023/01/06 21:10:47 Deleted bucket binaryBucket, responseBody: 
2023/01/06 21:10:50 Modified parameters of bucket default, responseBody: 
--- PASS: TestIndexingOnBinaryBucketMeta (63.12s)
=== RUN   TestRetainDeleteXATTRBinaryDocs
2023/01/06 21:11:05 In TestRetainDeleteXATTRBinaryDocs()
2023/01/06 21:11:05 	 1. Populate a bucket with binary docs having system XATTRS
2023/01/06 21:11:05 	 2. Create index on the system XATTRS with "retain_deleted_xattr" attribute set to true
2023/01/06 21:11:05 	 3. Delete the documents in the bucket
2023/01/06 21:11:05 	 4. Query for the meta() information in the source bucket. The total number of results should be equivalent to the number of documents in the bucket before deletion of documents
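Steps 3-4 above rely on one property: with `retain_deleted_xattr` set to true, deleting a document leaves a tombstone whose system XATTR stays indexed, so a meta() scan still returns one row per original document. A toy model of that property (illustrative only, not the testrunner code or the indexer's storage format; `_system_xattr` and the dict layout are invented stand-ins):

```python
def delete_all(bucket):
    # Deletion keeps a tombstone carrying the system XATTR of every doc.
    return {k: {"_system_xattr": v["_system_xattr"], "deleted": True}
            for k, v in bucket.items()}

def xattr_index_scan(bucket):
    # An index built with retain_deleted_xattr=true indexes tombstones too.
    return [k for k, v in bucket.items() if "_system_xattr" in v]

bucket = {f"doc{i}": {"_system_xattr": {"rev": i}, "deleted": False}
          for i in range(10)}
before = len(bucket)
bucket = delete_all(bucket)
assert len(xattr_index_scan(bucket)) == before  # step 4's expectation
```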
2023/01/06 21:11:08 Modified parameters of bucket default, responseBody: 
2023/01/06 21:11:08 Created bucket binaryBucket, responseBody: 
2023/01/06 21:11:27 Created the secondary index index_system_xattr. Waiting for it to become active
2023/01/06 21:11:27 Index is 4360235554924688756 now active
2023/01/06 21:11:32 Deleted all the documents in the bucket: binaryBucket successfully
2023/01/06 21:11:35 Deleted bucket binaryBucket, responseBody: 
2023/01/06 21:11:38 Modified parameters of bucket default, responseBody: 
--- PASS: TestRetainDeleteXATTRBinaryDocs (47.41s)
=== RUN   TestIndexingOnXATTRs
2023/01/06 21:11:53 In TestIndexingOnXATTRs()
2023/01/06 21:11:56 Modified parameters of bucket default, responseBody: 
2023/01/06 21:11:56 Created bucket bucket_xattrs, responseBody: 
2023/01/06 21:12:15 Created the secondary index index_sync_rev. Waiting for it to become active
2023/01/06 21:12:15 Index is 11893057939773186400 now active
2023/01/06 21:12:22 Created the secondary index index_sync_channels. Waiting for it to become active
2023/01/06 21:12:22 Index is 16572208727867096499 now active
2023/01/06 21:12:28 Created the secondary index index_sync_sequence. Waiting for it to become active
2023/01/06 21:12:28 Index is 6636278120296066331 now active
2023/01/06 21:12:33 items_count stat is 100 for index index_sync_rev
2023/01/06 21:12:33 items_count stat is 100 for index index_sync_channels
2023/01/06 21:12:33 items_count stat is 100 for index index_sync_sequence
2023/01/06 21:12:33 Using n1ql client
2023-01-06T21:12:33.879+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:12:33.882+05:30 [Info] GSIC[default/bucket_xattrs-_default-_default-1673019753877851193] started ...
2023/01/06 21:12:33 Dropping the secondary index index_sync_rev
2023/01/06 21:12:33 Index dropped
2023/01/06 21:12:33 Using n1ql client
2023/01/06 21:12:33 Dropping the secondary index index_sync_channels
2023/01/06 21:12:34 Index dropped
2023/01/06 21:12:34 Using n1ql client
2023/01/06 21:12:34 Dropping the secondary index index_sync_sequence
2023/01/06 21:12:34 Index dropped
2023/01/06 21:12:36 Deleted bucket bucket_xattrs, responseBody: 
2023/01/06 21:12:39 Modified parameters of bucket default, responseBody: 
--- PASS: TestIndexingOnXATTRs (61.05s)
=== RUN   TestSimpleIndex_FloatDataType
2023/01/06 21:12:54 In TestSimpleIndex_FloatDataType()
2023/01/06 21:12:54 Index found:  index_age
2023/01/06 21:12:54 Using n1ql client
2023/01/06 21:12:54 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_FloatDataType (0.02s)
=== RUN   TestSimpleIndex_StringDataType
2023/01/06 21:12:54 In TestSimpleIndex_StringDataType()
2023/01/06 21:12:58 Created the secondary index index_company. Waiting for it to become active
2023/01/06 21:12:58 Index is 5126315396515339957 now active
2023/01/06 21:12:58 Using n1ql client
2023/01/06 21:12:58 Expected and Actual scan responses are the same
2023/01/06 21:12:58 Using n1ql client
2023/01/06 21:12:58 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_StringDataType (4.65s)
=== RUN   TestSimpleIndex_FieldValueCaseSensitivity
2023/01/06 21:12:58 In TestSimpleIndex_StringCaseSensitivity()
2023/01/06 21:12:58 Index found:  index_company
2023/01/06 21:12:58 Using n1ql client
2023/01/06 21:12:58 Expected and Actual scan responses are the same
2023/01/06 21:12:58 Using n1ql client
2023/01/06 21:12:59 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_FieldValueCaseSensitivity (0.07s)
=== RUN   TestSimpleIndex_BoolDataType
2023/01/06 21:12:59 In TestSimpleIndex_BoolDataType()
2023/01/06 21:13:05 Created the secondary index index_isActive. Waiting for it to become active
2023/01/06 21:13:05 Index is 7534259065515551131 now active
2023/01/06 21:13:05 Using n1ql client
2023/01/06 21:13:05 Expected and Actual scan responses are the same
--- PASS: TestSimpleIndex_BoolDataType (6.71s)
=== RUN   TestBasicLookup
2023/01/06 21:13:05 In TestBasicLookup()
2023/01/06 21:13:05 Index found:  index_company
2023/01/06 21:13:05 Using n1ql client
2023/01/06 21:13:05 Expected and Actual scan responses are the same
--- PASS: TestBasicLookup (0.01s)
=== RUN   TestIndexOnNonExistentField
2023/01/06 21:13:05 In TestIndexOnNonExistentField()
2023/01/06 21:13:12 Created the secondary index index_height. Waiting for it to become active
2023/01/06 21:13:12 Index is 772363789629456577 now active
2023/01/06 21:13:12 Using n1ql client
2023/01/06 21:13:12 Expected and Actual scan responses are the same
--- PASS: TestIndexOnNonExistentField (6.39s)
=== RUN   TestIndexPartiallyMissingField
2023/01/06 21:13:12 In TestIndexPartiallyMissingField()
2023/01/06 21:13:18 Created the secondary index index_nationality. Waiting for it to become active
2023/01/06 21:13:18 Index is 2529520712916944780 now active
2023/01/06 21:13:18 Using n1ql client
2023/01/06 21:13:18 Expected and Actual scan responses are the same
--- PASS: TestIndexPartiallyMissingField (6.51s)
=== RUN   TestScanNonMatchingDatatype
2023/01/06 21:13:18 In TestScanNonMatchingDatatype()
2023/01/06 21:13:18 Index found:  index_age
2023/01/06 21:13:18 Using n1ql client
2023/01/06 21:13:18 Expected and Actual scan responses are the same
--- PASS: TestScanNonMatchingDatatype (0.02s)
=== RUN   TestInclusionNeither
2023/01/06 21:13:18 In TestInclusionNeither()
2023/01/06 21:13:18 Index found:  index_age
2023/01/06 21:13:18 Using n1ql client
2023/01/06 21:13:18 Expected and Actual scan responses are the same
--- PASS: TestInclusionNeither (0.03s)
=== RUN   TestInclusionLow
2023/01/06 21:13:18 In TestInclusionLow()
2023/01/06 21:13:18 Index found:  index_age
2023/01/06 21:13:18 Using n1ql client
2023/01/06 21:13:18 Expected and Actual scan responses are the same
--- PASS: TestInclusionLow (0.03s)
=== RUN   TestInclusionHigh
2023/01/06 21:13:18 In TestInclusionHigh()
2023/01/06 21:13:18 Index found:  index_age
2023/01/06 21:13:18 Using n1ql client
2023/01/06 21:13:18 Expected and Actual scan responses are the same
--- PASS: TestInclusionHigh (0.03s)
=== RUN   TestInclusionBoth
2023/01/06 21:13:18 In TestInclusionBoth()
2023/01/06 21:13:18 Index found:  index_age
2023/01/06 21:13:18 Using n1ql client
2023/01/06 21:13:18 Expected and Actual scan responses are the same
--- PASS: TestInclusionBoth (0.02s)
=== RUN   TestNestedIndex_String
2023/01/06 21:13:18 In TestNestedIndex_String()
2023/01/06 21:13:25 Created the secondary index index_streetname. Waiting for it to become active
2023/01/06 21:13:25 Index is 3390608192586333704 now active
2023/01/06 21:13:25 Using n1ql client
2023/01/06 21:13:26 Expected and Actual scan responses are the same
--- PASS: TestNestedIndex_String (7.49s)
=== RUN   TestNestedIndex_Float
2023/01/06 21:13:26 In TestNestedIndex_Float()
2023/01/06 21:13:31 Created the secondary index index_floor. Waiting for it to become active
2023/01/06 21:13:31 Index is 6752489311835381998 now active
2023/01/06 21:13:31 Using n1ql client
2023/01/06 21:13:31 Expected and Actual scan responses are the same
--- PASS: TestNestedIndex_Float (5.41s)
=== RUN   TestNestedIndex_Bool
2023/01/06 21:13:31 In TestNestedIndex_Bool()
2023/01/06 21:13:38 Created the secondary index index_isresidential. Waiting for it to become active
2023/01/06 21:13:38 Index is 2586307090067135225 now active
2023/01/06 21:13:38 Using n1ql client
2023/01/06 21:13:38 Expected and Actual scan responses are the same
--- PASS: TestNestedIndex_Bool (6.56s)
=== RUN   TestLookupJsonObject
2023/01/06 21:13:38 In TestLookupJsonObject()
2023/01/06 21:13:44 Created the secondary index index_streetaddress. Waiting for it to become active
2023/01/06 21:13:44 Index is 15564723020190754570 now active
2023/01/06 21:13:44 Using n1ql client
2023/01/06 21:13:44 Count of docScanResults is 1
2023/01/06 21:13:44 Key: User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: [map[buildingname:Sterling Heights doornumber:12B floor:5 streetname:Hill Street]]
2023/01/06 21:13:44 Count of scanResults is 1
2023/01/06 21:13:44 Key: string User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: value.Values [{"buildingname":"Sterling Heights","doornumber":"12B","floor":5,"streetname":"Hill Street"}] false
2023/01/06 21:13:44 Expected and Actual scan responses are the same
--- PASS: TestLookupJsonObject (6.64s)
=== RUN   TestLookupObjDifferentOrdering
2023/01/06 21:13:44 In TestLookupObjDifferentOrdering()
2023/01/06 21:13:44 Index found:  index_streetaddress
2023/01/06 21:13:44 Using n1ql client
2023/01/06 21:13:44 Count of docScanResults is 1
2023/01/06 21:13:44 Key: User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: [map[buildingname:Sterling Heights doornumber:12B floor:5 streetname:Hill Street]]
2023/01/06 21:13:44 Count of scanResults is 1
2023/01/06 21:13:44 Key: string User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: value.Values [{"buildingname":"Sterling Heights","doornumber":"12B","floor":5,"streetname":"Hill Street"}] false
2023/01/06 21:13:44 Expected and Actual scan responses are the same
--- PASS: TestLookupObjDifferentOrdering (0.02s)
=== RUN   TestRangeJsonObject
2023/01/06 21:13:44 In TestRangeJsonObject()
2023/01/06 21:13:44 Index found:  index_streetaddress
2023/01/06 21:13:44 Using n1ql client
2023/01/06 21:13:44 Count of scanResults is 2
2023/01/06 21:13:44 Key: string Userbb48952f-f8d1-4e04-a0e1-96b9019706fb  Value: value.Values [{"buildingname":"Rosewood Gardens","doornumber":"514","floor":2,"streetname":"Karweg Place"}] false
2023/01/06 21:13:44 Key: string User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: value.Values [{"buildingname":"Sterling Heights","doornumber":"12B","floor":5,"streetname":"Hill Street"}] false
2023/01/06 21:13:44 Count of docScanResults is 2
2023/01/06 21:13:44 Key: User3bf51f08-0bac-4c03-bcec-5c255cbdde2c  Value: [map[buildingname:Sterling Heights doornumber:12B floor:5 streetname:Hill Street]]
2023/01/06 21:13:44 Key: Userbb48952f-f8d1-4e04-a0e1-96b9019706fb  Value: [map[buildingname:Rosewood Gardens doornumber:514 floor:2 streetname:Karweg Place]]
2023/01/06 21:13:44 Expected and Actual scan responses are the same
--- PASS: TestRangeJsonObject (0.00s)
=== RUN   TestLookupFloatDiffForms
2023/01/06 21:13:44 In TestLookupFloatDiffForms()
2023/01/06 21:13:51 Created the secondary index index_latitude. Waiting for it to become active
2023/01/06 21:13:51 Index is 5136732142558365507 now active
2023/01/06 21:13:51 Scan 1
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 2
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 3
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 4
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 5
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 6
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
--- PASS: TestLookupFloatDiffForms (6.59s)
=== RUN   TestRangeFloatInclVariations
2023/01/06 21:13:51 In TestRangeFloatInclVariations()
2023/01/06 21:13:51 Index found:  index_latitude
2023/01/06 21:13:51 Scan 1
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 2
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 3
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 4
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 5
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
2023/01/06 21:13:51 Scan 6
2023/01/06 21:13:51 Using n1ql client
2023/01/06 21:13:51 Expected and Actual scan responses are the same
--- PASS: TestRangeFloatInclVariations (0.02s)
=== RUN   TestScanAll
2023/01/06 21:13:51 In TestScanAll()
2023/01/06 21:13:57 Created the secondary index index_name. Waiting for it to become active
2023/01/06 21:13:57 Index is 40268889032397347 now active
2023/01/06 21:13:57 Length of docScanResults = 10502
2023/01/06 21:13:57 Using n1ql client
2023/01/06 21:13:58 Length of scanResults = 10502
2023/01/06 21:13:58 Expected and Actual scan responses are the same
--- PASS: TestScanAll (6.58s)
=== RUN   TestScanAllNestedField
2023/01/06 21:13:58 In TestScanAllNestedField()
2023/01/06 21:13:58 Index found:  index_streetname
2023/01/06 21:13:58 Length of docScanResults = 2
2023/01/06 21:13:58 Using n1ql client
2023/01/06 21:13:58 Length of scanResults = 2
2023/01/06 21:13:58 Expected and Actual scan responses are the same
--- PASS: TestScanAllNestedField (0.01s)
=== RUN   TestBasicPrimaryIndex
2023/01/06 21:13:58 In TestBasicPrimaryIndex()
2023/01/06 21:14:05 Created the secondary index index_p1. Waiting for it to become active
2023/01/06 21:14:05 Index is 13713866112512522233 now active
2023-01-06T21:14:05.169+05:30 [Error] transport error between 127.0.0.1:49746->127.0.0.1:9107: write tcp 127.0.0.1:49746->127.0.0.1:9107: write: broken pipe
2023-01-06T21:14:05.169+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:49746->127.0.0.1:9107: write: broken pipe`
2023-01-06T21:14:05.169+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T21:14:05.169+05:30 [Error] metadataClient:PickRandom: Replicas - [13616195842569333487], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 21:14:05 Expected and Actual scan responses are the same
2023/01/06 21:14:05 CountRange() expected and actual are:  1900 and 1900
2023/01/06 21:14:05 lookupkey for CountLookup() = Userf18fdd2b-d116-48dc-a195-1b0d617779f9
2023/01/06 21:14:05 CountLookup() = 1
--- PASS: TestBasicPrimaryIndex (7.18s)
=== RUN   TestBasicNullDataType
2023/01/06 21:14:05 In TestBasicNullDataType()
2023/01/06 21:14:05 Index found:  index_email
2023/01/06 21:14:05 Using n1ql client
2023/01/06 21:14:05 Expected and Actual scan responses are the same
--- PASS: TestBasicNullDataType (0.02s)
=== RUN   TestBasicArrayDataType_ScanAll
2023/01/06 21:14:05 In TestBasicArrayDataType_ScanAll()
2023/01/06 21:14:11 Created the secondary index index_tags. Waiting for it to become active
2023/01/06 21:14:11 Index is 2408923426412623160 now active
2023/01/06 21:14:11 Using n1ql client
2023/01/06 21:14:12 Expected and Actual scan responses are the same
--- PASS: TestBasicArrayDataType_ScanAll (6.86s)
=== RUN   TestBasicArrayDataType_Lookup
2023/01/06 21:14:12 In TestBasicArrayDataType_Lookup()
2023/01/06 21:14:14 Index found:  index_tags
2023/01/06 21:14:14 Count of scanResults is 1
2023/01/06 21:14:14 Key: string Usere46cea01-38f6-4e7b-92e5-69d64668ae75  Value: value.Values [["reprehenderit","tempor","officia","exercitation","labore","sunt","tempor"]] false
--- PASS: TestBasicArrayDataType_Lookup (2.00s)
=== RUN   TestArrayDataType_LookupMissingArrayValue
2023/01/06 21:14:14 In TestArrayDataType_LookupMissingArrayValue()
2023/01/06 21:14:14 Index found:  index_tags
2023/01/06 21:14:14 Count of scanResults is 0
--- PASS: TestArrayDataType_LookupMissingArrayValue (0.00s)
=== RUN   TestArrayDataType_LookupWrongOrder
2023/01/06 21:14:14 In TestArrayDataType_LookupWrongOrder()
2023/01/06 21:14:14 Index found:  index_tags
2023/01/06 21:14:14 Count of scanResults is 0
--- PASS: TestArrayDataType_LookupWrongOrder (0.00s)
=== RUN   TestArrayDataType_LookupSubset
2023/01/06 21:14:14 In TestArrayDataType_LookupSubset()
2023/01/06 21:14:14 Index found:  index_tags
2023/01/06 21:14:14 Count of scanResults is 0
--- PASS: TestArrayDataType_LookupSubset (0.00s)
=== RUN   TestScanLimitParameter
2023/01/06 21:14:14 In TestScanLimitParameter()
2023/01/06 21:14:14 Index found:  index_age
2023/01/06 21:14:14 Using n1ql client
2023/01/06 21:14:14 Using n1ql client
--- PASS: TestScanLimitParameter (0.01s)
=== RUN   TestCountRange
2023/01/06 21:14:14 In TestRangeCount()
2023/01/06 21:14:14 Index found:  index_age
2023/01/06 21:14:14 Count of expected and actual Range are:  2389 and 2389
2023/01/06 21:14:14 Count of expected and actual Range are: 10002 and 10002
2023/01/06 21:14:14 Count of expected and actual Range are: 0 and 0
2023/01/06 21:14:14 Count of expected and actual Range are: 500 and 500
2023/01/06 21:14:14 Testing CountRange() for key <= val
2023/01/06 21:14:14 Count of expected and actual CountRange for key <= 30 are: 5221 and 5221
2023/01/06 21:14:14 Testing CountRange() for key >= val
2023/01/06 21:14:14 Count of expected and actual CountRange for key >= 25 are: 7720 and 7720
2023/01/06 21:14:14 Testing CountRange() for null < key <= val
2023/01/06 21:14:14 Count of expected and actual CountRange for key > null && key <= 30 are: 5221 and 5221
2023/01/06 21:14:14 Testing CountRange() for val <= key < null 
2023/01/06 21:14:14 Count of expected and actual CountRange for key >= 25 && key < null are: 0 and 0
2023/01/06 21:14:14 Count of expected and actual Range are: 0 and 0
--- PASS: TestCountRange (0.05s)
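The CountRange() checks above exercise the four inclusion variants of a range scan (neither bound, low only, high only, both). The semantics can be sketched over a sorted key list with `bisect` (an assumed model, matching the common GSI encoding 0=Neither, 1=Low, 2=High, 3=Both; not the indexer's implementation):

```python
import bisect

def count_range(sorted_keys, low, high, inclusion):
    """Count keys in (low, high) with the given inclusion bitmask."""
    lo = (bisect.bisect_left if inclusion & 1 else bisect.bisect_right)(sorted_keys, low)
    hi = (bisect.bisect_right if inclusion & 2 else bisect.bisect_left)(sorted_keys, high)
    return max(0, hi - lo)

ages = [20, 25, 25, 30, 35, 40]
assert count_range(ages, 25, 30, 3) == 3  # both inclusive: 25, 25, 30
assert count_range(ages, 25, 30, 0) == 0  # neither inclusive: nothing strictly between
```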
=== RUN   TestCountLookup
2023/01/06 21:14:14 In TestCountLookup()
2023/01/06 21:14:14 Index found:  index_age
2023/01/06 21:14:14 Count of expected and actual Range are: 513 and 513
2023/01/06 21:14:14 Count of expected and actual Range are: 0 and 0
--- PASS: TestCountLookup (0.01s)
=== RUN   TestRangeStatistics
2023/01/06 21:14:14 In TestRangeCount()
2023/01/06 21:14:14 Index found:  index_age
--- PASS: TestRangeStatistics (0.00s)
=== RUN   TestIndexCreateWithWhere
2023/01/06 21:14:14 In TestIndexCreateWithWhere()
2023/01/06 21:14:18 Created the secondary index index_ageabove30. Waiting for it to become active
2023/01/06 21:14:18 Index is 4872785282042595286 now active
2023/01/06 21:14:18 Using n1ql client
2023/01/06 21:14:18 Expected and Actual scan responses are the same
2023/01/06 21:14:18 Lengths of expected and actual scanResults are:  4281 and 4281
2023/01/06 21:14:25 Created the secondary index index_ageteens. Waiting for it to become active
2023/01/06 21:14:25 Index is 5667713641734999843 now active
2023/01/06 21:14:25 Using n1ql client
2023/01/06 21:14:25 Expected and Actual scan responses are the same
2023/01/06 21:14:25 Lengths of expected and actual scanResults are:  0 and 0
2023/01/06 21:14:31 Created the secondary index index_age35to45. Waiting for it to become active
2023/01/06 21:14:31 Index is 16217575912952141587 now active
2023/01/06 21:14:31 Using n1ql client
2023/01/06 21:14:31 Expected and Actual scan responses are the same
2023/01/06 21:14:31 Lengths of expected and actual scanResults are:  2889 and 2889
--- PASS: TestIndexCreateWithWhere (17.63s)
=== RUN   TestDeferredIndexCreate
2023/01/06 21:14:31 In TestDeferredIndexCreate()
2023/01/06 21:14:31 Created the index index_deferred in deferred mode. Index state is INDEX_STATE_READY
2023/01/06 21:14:33 Build the deferred index index_deferred. Waiting for the index to become active
2023/01/06 21:14:33 Waiting for index 10404696342860353539 to go active ...
2023/01/06 21:14:34 Waiting for index 10404696342860353539 to go active ...
2023/01/06 21:14:35 Waiting for index 10404696342860353539 to go active ...
2023/01/06 21:14:36 Waiting for index 10404696342860353539 to go active ...
2023/01/06 21:14:37 Waiting for index 10404696342860353539 to go active ...
2023/01/06 21:14:38 Index is 10404696342860353539 now active
2023/01/06 21:14:38 Using n1ql client
2023/01/06 21:14:38 Expected and Actual scan responses are the same
--- PASS: TestDeferredIndexCreate (7.14s)
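The deferred-create flow above is create in INDEX_STATE_READY, trigger a build, then poll once a second until the index reports active. A sketch of that poll loop (illustrative; `get_state` is a stand-in for fetching the index state from the cluster, not a real SDK call):

```python
import time

def wait_for_active(get_state, timeout_s=60.0, interval_s=1.0):
    """Poll get_state() until it reports INDEX_STATE_ACTIVE or we time out."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_state() == "INDEX_STATE_ACTIVE":
            return True
        time.sleep(interval_s)
    return False

# Simulate an index that becomes active on the third poll.
states = iter(["INDEX_STATE_READY", "INDEX_STATE_CATCHUP", "INDEX_STATE_ACTIVE"])
assert wait_for_active(lambda: next(states), timeout_s=5.0, interval_s=0.0)
```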
=== RUN   TestCompositeIndex_NumAndString
2023/01/06 21:14:38 In TestCompositeIndex()
2023/01/06 21:14:45 Created the secondary index index_composite1. Waiting for it to become active
2023/01/06 21:14:45 Index is 10047732884066383613 now active
2023/01/06 21:14:45 Using n1ql client
2023/01/06 21:14:45 Using n1ql client
2023/01/06 21:14:45 Using n1ql client
2023/01/06 21:14:45 Expected and Actual scan responses are the same
--- PASS: TestCompositeIndex_NumAndString (6.71s)
=== RUN   TestCompositeIndex_TwoNumberFields
2023/01/06 21:14:45 In TestCompositeIndex()
2023/01/06 21:14:52 Created the secondary index index_composite2. Waiting for it to become active
2023/01/06 21:14:52 Index is 15535879687608599544 now active
2023/01/06 21:14:52 Using n1ql client
--- PASS: TestCompositeIndex_TwoNumberFields (7.66s)
=== RUN   TestNumbers_Int64_Float64
2023/01/06 21:14:53 In TestNumbers_Int64_Float64()
2023/01/06 21:14:58 Created the secondary index idx_numbertest. Waiting for it to become active
2023/01/06 21:14:58 Index is 2180793420769353831 now active
2023/01/06 21:14:58 
 ==== Int64 test #0
2023/01/06 21:14:58 Using n1ql client
2023/01/06 21:14:58 Expected and Actual scan responses are the same
2023/01/06 21:14:58 
 ==== Int64 test #1
2023/01/06 21:14:58 Using n1ql client
2023/01/06 21:14:58 Expected and Actual scan responses are the same
2023/01/06 21:14:58 
 ==== Int64 test #2
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Int64 test #3
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Int64 test #4
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Int64 test #5
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Int64 test #6
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Int64 test #7
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Int64 test #8
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Float64 test #0
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Float64 test #1
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Float64 test #2
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
2023/01/06 21:14:59 
 ==== Float64 test #3
2023/01/06 21:14:59 Using n1ql client
2023/01/06 21:14:59 Expected and Actual scan responses are the same
--- PASS: TestNumbers_Int64_Float64 (6.27s)
=== RUN   TestRestartIndexer
2023/01/06 21:14:59 In TestRestartIndexer()
2023/01/06 21:14:59 []
2023-01-06T21:14:59.662+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T21:14:59.662+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T21:14:59.663+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T21:14:59.663+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 21:15:19 Using n1ql client
2023-01-06T21:15:19.646+05:30 [Error] transport error between 127.0.0.1:52202->127.0.0.1:9107: write tcp 127.0.0.1:52202->127.0.0.1:9107: write: broken pipe
2023-01-06T21:15:19.646+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 6477734271476427619 request transport failed `write tcp 127.0.0.1:52202->127.0.0.1:9107: write: broken pipe`
2023/01/06 21:15:19 Len of expected and actual scan results are :  10002 and 10002
2023/01/06 21:15:19 Expected and Actual scan responses are the same
--- PASS: TestRestartIndexer (20.09s)
=== RUN   TestCreateDocsMutation
2023/01/06 21:15:19 In TestCreateDocsMutation()
2023/01/06 21:15:19 Index found:  index_age
2023/01/06 21:15:19 Using n1ql client
2023/01/06 21:15:19 Len of expected and actual scan results are :  10002 and 10002
2023/01/06 21:15:19 Expected and Actual scan responses are the same
2023/01/06 21:15:19 Using n1ql client
2023/01/06 21:15:20 Index Scan after mutations took 46.766661ms
2023/01/06 21:15:20 Len of expected and actual scan results are :  10102 and 10102
2023/01/06 21:15:20 Expected and Actual scan responses are the same
--- PASS: TestCreateDocsMutation (0.36s)
=== RUN   TestRestartProjector
2023/01/06 21:15:20 In TestRestartProjector()
2023/01/06 21:15:20 []
2023/01/06 21:15:40 Using n1ql client
2023/01/06 21:15:40 Len of expected and actual scan results are :  10102 and 10102
2023/01/06 21:15:40 Expected and Actual scan responses are the same
--- PASS: TestRestartProjector (20.09s)
=== RUN   TestDeleteDocsMutation
2023/01/06 21:15:40 In TestDeleteDocsMutation()
2023/01/06 21:15:40 Index found:  index_age
2023/01/06 21:15:40 Using n1ql client
2023/01/06 21:15:40 Len of expected and actual scan results are :  10102 and 10102
2023/01/06 21:15:40 Expected and Actual scan responses are the same
2023/01/06 21:15:40 Using n1ql client
2023/01/06 21:15:40 Index Scan after mutations took 34.712409ms
2023/01/06 21:15:40 Len of expected and actual scan results are :  9902 and 9902
2023/01/06 21:15:40 Expected and Actual scan responses are the same
--- PASS: TestDeleteDocsMutation (0.35s)
=== RUN   TestUpdateDocsMutation
2023/01/06 21:15:40 In TestUpdateDocsMutation()
2023/01/06 21:15:40 Index found:  index_age
2023/01/06 21:15:40 Using n1ql client
2023/01/06 21:15:40 Len of expected and actual scan results are :  9445 and 9445
2023/01/06 21:15:40 Expected and Actual scan responses are the same
2023/01/06 21:15:40 Num of keysFromMutDocs: 100
2023/01/06 21:15:40 Updating number of documents: 100
2023/01/06 21:15:40 Using n1ql client
2023/01/06 21:15:40 Index Scan after mutations took 50.447747ms
2023/01/06 21:15:40 Len of expected and actual scan results are :  9445 and 9445
2023/01/06 21:15:40 Expected and Actual scan responses are the same
--- PASS: TestUpdateDocsMutation (0.40s)
=== RUN   TestLargeMutations
2023/01/06 21:15:40 In TestLargeMutations()
2023/01/06 21:15:40 In DropAllSecondaryIndexes()
2023/01/06 21:15:40 Index found:  index_ageabove30
2023/01/06 21:15:41 Dropped index index_ageabove30
2023/01/06 21:15:41 Index found:  index_di1
2023/01/06 21:15:41 Dropped index index_di1
2023/01/06 21:15:41 Index found:  index_pin
2023/01/06 21:15:41 Dropped index index_pin
2023/01/06 21:15:41 Index found:  index_age
2023/01/06 21:15:41 Dropped index index_age
2023/01/06 21:15:41 Index found:  idx_age
2023/01/06 21:15:41 Dropped index idx_age
2023/01/06 21:15:41 Index found:  index_state
2023/01/06 21:15:41 Dropped index index_state
2023/01/06 21:15:41 Index found:  index_latitude
2023/01/06 21:15:41 Dropped index index_latitude
2023/01/06 21:15:41 Index found:  index_streetaddress
2023/01/06 21:15:41 Dropped index index_streetaddress
2023/01/06 21:15:41 Index found:  index_deferred
2023/01/06 21:15:41 Dropped index index_deferred
2023/01/06 21:15:41 Index found:  index_company
2023/01/06 21:15:41 Dropped index index_company
2023/01/06 21:15:41 Index found:  index_nationality
2023/01/06 21:15:41 Dropped index index_nationality
2023/01/06 21:15:41 Index found:  index_ageteens
2023/01/06 21:15:42 Dropped index index_ageteens
2023/01/06 21:15:42 Index found:  index_balance
2023/01/06 21:15:42 Dropped index index_balance
2023/01/06 21:15:42 Index found:  index_composite1
2023/01/06 21:15:42 Dropped index index_composite1
2023/01/06 21:15:42 Index found:  index_isresidential
2023/01/06 21:15:42 Dropped index index_isresidential
2023/01/06 21:15:42 Index found:  index_email
2023/01/06 21:15:42 Dropped index index_email
2023/01/06 21:15:42 Index found:  index_tags
2023/01/06 21:15:42 Dropped index index_tags
2023/01/06 21:15:42 Index found:  index_composite2
2023/01/06 21:15:42 Dropped index index_composite2
2023/01/06 21:15:42 Index found:  index_name
2023/01/06 21:15:42 Dropped index index_name
2023/01/06 21:15:42 Index found:  idx_numbertest
2023/01/06 21:15:42 Dropped index idx_numbertest
2023/01/06 21:15:42 Index found:  index_isActive
2023/01/06 21:15:42 Dropped index index_isActive
2023/01/06 21:15:42 Index found:  index_streetname
2023/01/06 21:15:42 Dropped index index_streetname
2023/01/06 21:15:42 Index found:  index_i2
2023/01/06 21:15:42 Dropped index index_i2
2023/01/06 21:15:42 Index found:  index_cdc
2023/01/06 21:15:42 Dropped index index_cdc
2023/01/06 21:15:42 Index found:  index_gender
2023/01/06 21:15:42 Dropped index index_gender
2023/01/06 21:15:42 Index found:  index_p1
2023/01/06 21:15:43 Dropped index index_p1
2023/01/06 21:15:43 Index found:  index_floor
2023/01/06 21:15:43 Dropped index index_floor
2023/01/06 21:15:43 Index found:  index_longitude
2023/01/06 21:15:43 Dropped index index_longitude
2023/01/06 21:15:43 Index found:  index_height
2023/01/06 21:15:43 Dropped index index_height
2023/01/06 21:15:43 Index found:  index_eyeColor
2023/01/06 21:15:43 Dropped index index_eyeColor
2023/01/06 21:15:43 Index found:  index_age35to45
2023/01/06 21:15:43 Dropped index index_age35to45
2023/01/06 21:16:03 Created the secondary index indexmut_1. Waiting for it become active
2023/01/06 21:16:03 Index is 4604978879759970921 now active
2023/01/06 21:16:03 Using n1ql client
2023/01/06 21:16:04 Expected and Actual scan responses are the same
2023/01/06 21:16:04 Len of expected and actual scan results are :  29902 and 29902
2023/01/06 21:16:04 ITERATION 0
2023/01/06 21:16:22 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:16:22 Index is 3269424535493598188 now active
2023/01/06 21:16:22 Using n1ql client
2023/01/06 21:16:22 Expected and Actual scan responses are the same
2023/01/06 21:16:22 Len of expected and actual scan results are :  39902 and 39902
2023/01/06 21:16:22 Using n1ql client
2023/01/06 21:16:23 Expected and Actual scan responses are the same
2023/01/06 21:16:23 Len of expected and actual scan results are :  39902 and 39902
2023/01/06 21:16:23 Dropping the secondary index indexmut_2
2023/01/06 21:16:23 Index dropped
2023/01/06 21:16:23 ITERATION 1
2023/01/06 21:16:40 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:16:40 Index is 14409868158294156637 now active
2023/01/06 21:16:40 Using n1ql client
2023/01/06 21:16:41 Expected and Actual scan responses are the same
2023/01/06 21:16:41 Len of expected and actual scan results are :  49902 and 49902
2023/01/06 21:16:41 Using n1ql client
2023/01/06 21:16:41 Expected and Actual scan responses are the same
2023/01/06 21:16:41 Len of expected and actual scan results are :  49902 and 49902
2023/01/06 21:16:41 Dropping the secondary index indexmut_2
2023/01/06 21:16:41 Index dropped
2023/01/06 21:16:41 ITERATION 2
2023/01/06 21:16:59 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:16:59 Index is 5154883373313096774 now active
2023/01/06 21:16:59 Using n1ql client
2023/01/06 21:17:00 Expected and Actual scan responses are the same
2023/01/06 21:17:00 Len of expected and actual scan results are :  59902 and 59902
2023/01/06 21:17:00 Using n1ql client
2023/01/06 21:17:00 Expected and Actual scan responses are the same
2023/01/06 21:17:00 Len of expected and actual scan results are :  59902 and 59902
2023/01/06 21:17:00 Dropping the secondary index indexmut_2
2023/01/06 21:17:00 Index dropped
2023/01/06 21:17:00 ITERATION 3
2023/01/06 21:17:19 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:17:19 Index is 1824733360377898577 now active
2023/01/06 21:17:19 Using n1ql client
2023/01/06 21:17:20 Expected and Actual scan responses are the same
2023/01/06 21:17:20 Len of expected and actual scan results are :  69902 and 69902
2023/01/06 21:17:20 Using n1ql client
2023/01/06 21:17:21 Expected and Actual scan responses are the same
2023/01/06 21:17:21 Len of expected and actual scan results are :  69902 and 69902
2023/01/06 21:17:21 Dropping the secondary index indexmut_2
2023/01/06 21:17:21 Index dropped
2023/01/06 21:17:21 ITERATION 4
2023/01/06 21:17:42 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:17:42 Index is 8738897010167725819 now active
2023/01/06 21:17:42 Using n1ql client
2023/01/06 21:17:43 Expected and Actual scan responses are the same
2023/01/06 21:17:43 Len of expected and actual scan results are :  79902 and 79902
2023/01/06 21:17:44 Using n1ql client
2023/01/06 21:17:44 Expected and Actual scan responses are the same
2023/01/06 21:17:44 Len of expected and actual scan results are :  79902 and 79902
2023/01/06 21:17:44 Dropping the secondary index indexmut_2
2023/01/06 21:17:44 Index dropped
2023/01/06 21:17:44 ITERATION 5
2023/01/06 21:18:05 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:18:05 Index is 2251967565230344805 now active
2023/01/06 21:18:05 Using n1ql client
2023/01/06 21:18:06 Expected and Actual scan responses are the same
2023/01/06 21:18:06 Len of expected and actual scan results are :  89902 and 89902
2023/01/06 21:18:06 Using n1ql client
2023/01/06 21:18:06 Expected and Actual scan responses are the same
2023/01/06 21:18:06 Len of expected and actual scan results are :  89902 and 89902
2023/01/06 21:18:06 Dropping the secondary index indexmut_2
2023/01/06 21:18:06 Index dropped
2023/01/06 21:18:06 ITERATION 6
2023/01/06 21:18:27 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:18:27 Index is 11118612615643813103 now active
2023/01/06 21:18:27 Using n1ql client
2023/01/06 21:18:28 Expected and Actual scan responses are the same
2023/01/06 21:18:28 Len of expected and actual scan results are :  99902 and 99902
2023/01/06 21:18:28 Using n1ql client
2023/01/06 21:18:28 Expected and Actual scan responses are the same
2023/01/06 21:18:28 Len of expected and actual scan results are :  99902 and 99902
2023/01/06 21:18:28 Dropping the secondary index indexmut_2
2023/01/06 21:18:28 Index dropped
2023/01/06 21:18:28 ITERATION 7
2023/01/06 21:18:49 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:18:49 Index is 7247292249212141551 now active
2023/01/06 21:18:49 Using n1ql client
2023/01/06 21:18:50 Expected and Actual scan responses are the same
2023/01/06 21:18:50 Len of expected and actual scan results are :  109902 and 109902
2023/01/06 21:18:50 Using n1ql client
2023/01/06 21:18:51 Expected and Actual scan responses are the same
2023/01/06 21:18:51 Len of expected and actual scan results are :  109902 and 109902
2023/01/06 21:18:51 Dropping the secondary index indexmut_2
2023/01/06 21:18:51 Index dropped
2023/01/06 21:18:51 ITERATION 8
2023/01/06 21:19:17 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:19:17 Index is 11254783333258906947 now active
2023/01/06 21:19:17 Using n1ql client
2023/01/06 21:19:18 Expected and Actual scan responses are the same
2023/01/06 21:19:18 Len of expected and actual scan results are :  119902 and 119902
2023/01/06 21:19:18 Using n1ql client
2023/01/06 21:19:19 Expected and Actual scan responses are the same
2023/01/06 21:19:19 Len of expected and actual scan results are :  119902 and 119902
2023/01/06 21:19:19 Dropping the secondary index indexmut_2
2023/01/06 21:19:20 Index dropped
2023/01/06 21:19:20 ITERATION 9
2023/01/06 21:19:41 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:19:41 Index is 12309726299407706699 now active
2023/01/06 21:19:41 Using n1ql client
2023/01/06 21:19:42 Expected and Actual scan responses are the same
2023/01/06 21:19:42 Len of expected and actual scan results are :  129902 and 129902
2023/01/06 21:19:42 Using n1ql client
2023/01/06 21:19:43 Expected and Actual scan responses are the same
2023/01/06 21:19:43 Len of expected and actual scan results are :  129902 and 129902
2023/01/06 21:19:43 Dropping the secondary index indexmut_2
2023/01/06 21:19:43 Index dropped
2023/01/06 21:19:43 ITERATION 10
2023/01/06 21:20:09 Created the secondary index indexmut_2. Waiting for it become active
2023/01/06 21:20:09 Index is 15358293162006059618 now active
2023/01/06 21:20:10 Using n1ql client
2023/01/06 21:20:11 Expected and Actual scan responses are the same
2023/01/06 21:20:11 Len of expected and actual scan results are :  139902 and 139902
2023/01/06 21:20:11 Using n1ql client
2023/01/06 21:20:12 Expected and Actual scan responses are the same
2023/01/06 21:20:12 Len of expected and actual scan results are :  139902 and 139902
2023/01/06 21:20:12 Dropping the secondary index indexmut_2
2023/01/06 21:20:12 Index dropped
--- PASS: TestLargeMutations (271.37s)
=== RUN   TestPlanner
2023/01/06 21:20:12 In TestPlanner()
2023/01/06 21:20:12 -------------------------------------------
2023/01/06 21:20:12 initial placement - 20-50M, 10 index, 3 replica, 2x
2023-01-06T21:20:12.272+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:12.282+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T21:20:12.282+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T21:20:12.284+05:30 [Info] switched currmeta from 476 -> 478 force true 
2023-01-06T21:20:12.287+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T21:20:12.287+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T21:20:12.289+05:30 [Info] switched currmeta from 482 -> 482 force true 
2023-01-06T21:20:12.293+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T21:20:12.293+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T21:20:12.300+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T21:20:12.300+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T21:20:12.305+05:30 [Info] switched currmeta from 482 -> 482 force true 
2023-01-06T21:20:12.308+05:30 [Info] switched currmeta from 478 -> 478 force true 
2023-01-06T21:20:12.358+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:12.359+05:30 [Info] Score: 0.03946715010265177
2023-01-06T21:20:12.359+05:30 [Info] Memory Quota: 63140022504 (58.8037G)
2023-01-06T21:20:12.359+05:30 [Info] CPU Quota: 12
2023-01-06T21:20:12.359+05:30 [Info] Indexer Memory Mean 37749137512 (35.1566G)
2023-01-06T21:20:12.359+05:30 [Info] Indexer Memory Deviation 2979701752 (2.77506G) (7.89%)
2023-01-06T21:20:12.359+05:30 [Info] Indexer Memory Utilization 0.5979
2023-01-06T21:20:12.359+05:30 [Info] Indexer CPU Mean 11.9194
2023-01-06T21:20:12.359+05:30 [Info] Indexer CPU Deviation 1.86 (15.58%)
2023-01-06T21:20:12.359+05:30 [Info] Indexer CPU Utilization 0.9933
2023-01-06T21:20:12.359+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:12.359+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:12.359+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:12.359+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:12.359+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:12.359+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:12.359+05:30 [Info] Indexer Data Size Mean 37749137512 (35.1566G)
2023-01-06T21:20:12.359+05:30 [Info] Indexer Data Size Deviation 2979701752 (2.77506G) (7.89%)
2023-01-06T21:20:12.359+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:12.359+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:12.359+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:12.359+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:12 -------------------------------------------
2023/01/06 21:20:12 initial placement - 20-50M, 30 index, 3 replica, 2x
2023-01-06T21:20:12.359+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:13.182+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:13.182+05:30 [Info] Score: 0.026773023047403678
2023-01-06T21:20:13.183+05:30 [Info] Memory Quota: 70296111570 (65.4684G)
2023-01-06T21:20:13.183+05:30 [Info] CPU Quota: 12
2023-01-06T21:20:13.183+05:30 [Info] Indexer Memory Mean 46284586296 (43.1059G)
2023-01-06T21:20:13.183+05:30 [Info] Indexer Memory Deviation 2478356591 (2.30815G) (5.35%)
2023-01-06T21:20:13.183+05:30 [Info] Indexer Memory Utilization 0.6584
2023-01-06T21:20:13.183+05:30 [Info] Indexer CPU Mean 12.5644
2023-01-06T21:20:13.183+05:30 [Info] Indexer CPU Deviation 2.59 (20.58%)
2023-01-06T21:20:13.183+05:30 [Info] Indexer CPU Utilization 1.0470
2023-01-06T21:20:13.183+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:13.183+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:13.183+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:13.183+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:13.183+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:13.183+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:13.183+05:30 [Info] Indexer Data Size Mean 46284586296 (43.1059G)
2023-01-06T21:20:13.183+05:30 [Info] Indexer Data Size Deviation 2478356591 (2.30815G) (5.35%)
2023-01-06T21:20:13.183+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:13.183+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:13.183+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:13.183+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:13 -------------------------------------------
2023/01/06 21:20:13 initial placement - 20-50M, 30 index, 3 replica, 4x
2023-01-06T21:20:13.183+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:13.425+05:30 [Info] Score: 0.012407542778862616
2023-01-06T21:20:13.425+05:30 [Info] Memory Quota: 118513315160 (110.374G)
2023-01-06T21:20:13.426+05:30 [Info] CPU Quota: 24
2023-01-06T21:20:13.426+05:30 [Info] Indexer Memory Mean 77479460584 (72.1584G)
2023-01-06T21:20:13.426+05:30 [Info] Indexer Memory Deviation 1922659443 (1.79062G) (2.48%)
2023-01-06T21:20:13.426+05:30 [Info] Indexer Memory Utilization 0.6538
2023-01-06T21:20:13.426+05:30 [Info] Indexer CPU Mean 21.7651
2023-01-06T21:20:13.426+05:30 [Info] Indexer CPU Deviation 3.97 (18.25%)
2023-01-06T21:20:13.426+05:30 [Info] Indexer CPU Utilization 0.9069
2023-01-06T21:20:13.426+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:13.426+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:13.426+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:13.426+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:13.426+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:13.426+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:13.426+05:30 [Info] Indexer Data Size Mean 77479460584 (72.1584G)
2023-01-06T21:20:13.426+05:30 [Info] Indexer Data Size Deviation 1922659443 (1.79062G) (2.48%)
2023-01-06T21:20:13.426+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:13.426+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:13.426+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:13.426+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:13 -------------------------------------------
2023/01/06 21:20:13 initial placement - 200-500M, 10 index, 3 replica, 2x
2023-01-06T21:20:13.426+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:13.498+05:30 [Info] Score: 0.010026755107907934
2023-01-06T21:20:13.498+05:30 [Info] Memory Quota: 507885239060 (473.005G)
2023-01-06T21:20:13.498+05:30 [Info] CPU Quota: 12
2023-01-06T21:20:13.498+05:30 [Info] Indexer Memory Mean 402595981257 (374.947G)
2023-01-06T21:20:13.498+05:30 [Info] Indexer Memory Deviation 8073462622 (7.519G) (2.01%)
2023-01-06T21:20:13.498+05:30 [Info] Indexer Memory Utilization 0.7927
2023-01-06T21:20:13.498+05:30 [Info] Indexer CPU Mean 12.4574
2023-01-06T21:20:13.498+05:30 [Info] Indexer CPU Deviation 4.29 (34.47%)
2023-01-06T21:20:13.498+05:30 [Info] Indexer CPU Utilization 1.0381
2023-01-06T21:20:13.498+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:13.498+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:13.498+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:13.498+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:13.498+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:13.498+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:13.498+05:30 [Info] Indexer Data Size Mean 402595981257 (374.947G)
2023-01-06T21:20:13.498+05:30 [Info] Indexer Data Size Deviation 8073462622 (7.519G) (2.01%)
2023-01-06T21:20:13.498+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:13.498+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:13.498+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:13.498+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:13 -------------------------------------------
2023/01/06 21:20:13 initial placement - 200-500M, 30 index, 3 replica, 2x
2023-01-06T21:20:13.499+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:13.926+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2023-01-06T21:20:13.980+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T21:20:13.980+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T21:20:13.981+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T21:20:13.981+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T21:20:13.992+05:30 [Info] switched currmeta from 478 -> 478 force true 
2023-01-06T21:20:13.998+05:30 [Info] switched currmeta from 482 -> 482 force true 
2023-01-06T21:20:14.827+05:30 [Info] Score: 0.02850604744517909
2023-01-06T21:20:14.827+05:30 [Info] Memory Quota: 507786139466 (472.913G)
2023-01-06T21:20:14.827+05:30 [Info] CPU Quota: 12
2023-01-06T21:20:14.827+05:30 [Info] Indexer Memory Mean 399276225040 (371.855G)
2023-01-06T21:20:14.827+05:30 [Info] Indexer Memory Deviation 22763574029 (21.2002G) (5.70%)
2023-01-06T21:20:14.827+05:30 [Info] Indexer Memory Utilization 0.7863
2023-01-06T21:20:14.827+05:30 [Info] Indexer CPU Mean 11.1938
2023-01-06T21:20:14.827+05:30 [Info] Indexer CPU Deviation 2.83 (25.31%)
2023-01-06T21:20:14.827+05:30 [Info] Indexer CPU Utilization 0.9328
2023-01-06T21:20:14.827+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:14.827+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:14.827+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:14.827+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:14.827+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:14.827+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:14.827+05:30 [Info] Indexer Data Size Mean 399276225040 (371.855G)
2023-01-06T21:20:14.827+05:30 [Info] Indexer Data Size Deviation 22763574029 (21.2002G) (5.70%)
2023-01-06T21:20:14.827+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:14.827+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:14.827+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:14.827+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:14 -------------------------------------------
2023/01/06 21:20:14 initial placement - mixed small/medium, 30 index, 3 replica, 1.5/4x
2023-01-06T21:20:14.828+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:15.086+05:30 [Info] Score: 0.0065697908112598
2023-01-06T21:20:15.086+05:30 [Info] Memory Quota: 370404632779 (344.966G)
2023-01-06T21:20:15.086+05:30 [Info] CPU Quota: 12
2023-01-06T21:20:15.086+05:30 [Info] Indexer Memory Mean 304134763342 (283.248G)
2023-01-06T21:20:15.086+05:30 [Info] Indexer Memory Deviation 3996203547 (3.72175G) (1.31%)
2023-01-06T21:20:15.086+05:30 [Info] Indexer Memory Utilization 0.8211
2023-01-06T21:20:15.086+05:30 [Info] Indexer CPU Mean 10.8821
2023-01-06T21:20:15.086+05:30 [Info] Indexer CPU Deviation 6.11 (56.13%)
2023-01-06T21:20:15.086+05:30 [Info] Indexer CPU Utilization 0.9068
2023-01-06T21:20:15.086+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.086+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.086+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.086+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.086+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.086+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.086+05:30 [Info] Indexer Data Size Mean 304134763342 (283.248G)
2023-01-06T21:20:15.086+05:30 [Info] Indexer Data Size Deviation 3996203547 (3.72175G) (1.31%)
2023-01-06T21:20:15.086+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.086+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.086+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.086+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 initial placement - mixed all, 30 index, 3 replica, 1.5/4x
2023-01-06T21:20:15.087+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:15.570+05:30 [Info] Score: 0.01549957046756453
2023-01-06T21:20:15.570+05:30 [Info] Memory Quota: 414965384334 (386.467G)
2023-01-06T21:20:15.570+05:30 [Info] CPU Quota: 20
2023-01-06T21:20:15.570+05:30 [Info] Indexer Memory Mean 330499425012 (307.802G)
2023-01-06T21:20:15.570+05:30 [Info] Indexer Memory Deviation 10245198254 (9.54158G) (3.10%)
2023-01-06T21:20:15.570+05:30 [Info] Indexer Memory Utilization 0.7965
2023-01-06T21:20:15.570+05:30 [Info] Indexer CPU Mean 9.8412
2023-01-06T21:20:15.570+05:30 [Info] Indexer CPU Deviation 4.63 (47.03%)
2023-01-06T21:20:15.570+05:30 [Info] Indexer CPU Utilization 0.4921
2023-01-06T21:20:15.570+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.570+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.570+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.570+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.570+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.570+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.570+05:30 [Info] Indexer Data Size Mean 330499425012 (307.802G)
2023-01-06T21:20:15.570+05:30 [Info] Indexer Data Size Deviation 10245198254 (9.54158G) (3.10%)
2023-01-06T21:20:15.570+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.570+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.570+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.570+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 initial placement - 6 2M index, 1 replica, 2x
2023-01-06T21:20:15.581+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:15.596+05:30 [Info] Score: 0
2023-01-06T21:20:15.596+05:30 [Info] Memory Quota: 4848128000 (4.51517G)
2023-01-06T21:20:15.596+05:30 [Info] CPU Quota: 2
2023-01-06T21:20:15.596+05:30 [Info] Indexer Memory Mean 2080000000 (1.93715G)
2023-01-06T21:20:15.596+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] Indexer Memory Utilization 0.4290
2023-01-06T21:20:15.596+05:30 [Info] Indexer CPU Mean 1.2000
2023-01-06T21:20:15.596+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] Indexer CPU Utilization 0.6000
2023-01-06T21:20:15.596+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.596+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.596+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.596+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] Indexer Data Size Mean 2080000000 (1.93715G)
2023-01-06T21:20:15.596+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.596+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.596+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.596+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 initial placement - 5 20M primary index, 2 replica, 2x
2023-01-06T21:20:15.606+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:15.646+05:30 [Info] Score: 0
2023-01-06T21:20:15.646+05:30 [Info] Memory Quota: 14310128000 (13.3273G)
2023-01-06T21:20:15.646+05:30 [Info] CPU Quota: 2
2023-01-06T21:20:15.646+05:30 [Info] Indexer Memory Mean 10960000000 (10.2073G)
2023-01-06T21:20:15.646+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] Indexer Memory Utilization 0.7659
2023-01-06T21:20:15.646+05:30 [Info] Indexer CPU Mean 1.2000
2023-01-06T21:20:15.646+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] Indexer CPU Utilization 0.6000
2023-01-06T21:20:15.646+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.646+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.646+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.646+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] Indexer Data Size Mean 10960000000 (10.2073G)
2023-01-06T21:20:15.646+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.646+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.646+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.646+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 initial placement - 5 20M array index, 2 replica, 2x
2023-01-06T21:20:15.657+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:15.699+05:30 [Info] Score: 0
2023-01-06T21:20:15.699+05:30 [Info] Memory Quota: 237416768000 (221.112G)
2023-01-06T21:20:15.699+05:30 [Info] CPU Quota: 2
2023-01-06T21:20:15.699+05:30 [Info] Indexer Memory Mean 191440000000 (178.292G)
2023-01-06T21:20:15.699+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] Indexer Memory Utilization 0.8063
2023-01-06T21:20:15.699+05:30 [Info] Indexer CPU Mean 1.2000
2023-01-06T21:20:15.699+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] Indexer CPU Utilization 0.6000
2023-01-06T21:20:15.699+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.699+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.699+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.699+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] Indexer Data Size Mean 191440000000 (178.292G)
2023-01-06T21:20:15.699+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.699+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.699+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.699+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 initial placement - 3 replica constraint, 2 index, 2x
2023-01-06T21:20:15.703+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:15.741+05:30 [Info] Score: 0
2023-01-06T21:20:15.741+05:30 [Info] Memory Quota: 530294000 (505.728M)
2023-01-06T21:20:15.741+05:30 [Info] CPU Quota: 2
2023-01-06T21:20:15.741+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2023-01-06T21:20:15.741+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] Indexer Memory Utilization 0.0049
2023-01-06T21:20:15.741+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:15.741+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:15.741+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.741+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.741+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.741+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2023-01-06T21:20:15.741+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.741+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.741+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.741+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 incr placement - 20-50M, 5 2M index, 1 replica, 1x
2023-01-06T21:20:15.752+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.02890031405333445
2023-01-06T21:20:15.894+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:15.894+05:30 [Info] Score: 0.007189938568471697
2023-01-06T21:20:15.894+05:30 [Info] Memory Quota: 125233041042 (116.632G)
2023-01-06T21:20:15.894+05:30 [Info] CPU Quota: 27
2023-01-06T21:20:15.894+05:30 [Info] Indexer Memory Mean 71117238485 (66.2331G)
2023-01-06T21:20:15.894+05:30 [Info] Indexer Memory Deviation 1022657151 (975.282M) (1.44%)
2023-01-06T21:20:15.894+05:30 [Info] Indexer Memory Utilization 0.5679
2023-01-06T21:20:15.894+05:30 [Info] Indexer CPU Mean 20.1734
2023-01-06T21:20:15.894+05:30 [Info] Indexer CPU Deviation 1.87 (9.29%)
2023-01-06T21:20:15.894+05:30 [Info] Indexer CPU Utilization 0.7472
2023-01-06T21:20:15.894+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:15.894+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:15.894+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:15.894+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.894+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:15.894+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:15.894+05:30 [Info] Indexer Data Size Mean 71117238485 (66.2331G)
2023-01-06T21:20:15.894+05:30 [Info] Indexer Data Size Deviation 1022657151 (975.282M) (1.44%)
2023-01-06T21:20:15.894+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:15.894+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:15.894+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:15.894+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:15 -------------------------------------------
2023/01/06 21:20:15 incr placement - mixed small/medium, 6 2M index, 1 replica, 1x
2023-01-06T21:20:15.906+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.0014379395448970375
2023-01-06T21:20:16.138+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.138+05:30 [Info] Score: 0.00043718596495524715
2023-01-06T21:20:16.138+05:30 [Info] Memory Quota: 536870912000 (500G)
2023-01-06T21:20:16.138+05:30 [Info] CPU Quota: 20
2023-01-06T21:20:16.138+05:30 [Info] Indexer Memory Mean 393025602195 (366.034G)
2023-01-06T21:20:16.138+05:30 [Info] Indexer Memory Deviation 343650554 (327.731M) (0.09%)
2023-01-06T21:20:16.138+05:30 [Info] Indexer Memory Utilization 0.7321
2023-01-06T21:20:16.138+05:30 [Info] Indexer CPU Mean 14.2305
2023-01-06T21:20:16.138+05:30 [Info] Indexer CPU Deviation 0.95 (6.65%)
2023-01-06T21:20:16.138+05:30 [Info] Indexer CPU Utilization 0.7115
2023-01-06T21:20:16.138+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.138+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.138+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.138+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.138+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.138+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.138+05:30 [Info] Indexer Data Size Mean 393025602195 (366.034G)
2023-01-06T21:20:16.138+05:30 [Info] Indexer Data Size Deviation 343650554 (327.731M) (0.09%)
2023-01-06T21:20:16.138+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:16.138+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:16.138+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:16.138+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 incr placement - 3 server group, 3 replica, 1x
2023-01-06T21:20:16.141+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:16.154+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.154+05:30 [Info] Score: 0
2023-01-06T21:20:16.154+05:30 [Info] Memory Quota: 530294000 (505.728M)
2023-01-06T21:20:16.155+05:30 [Info] CPU Quota: 16
2023-01-06T21:20:16.155+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2023-01-06T21:20:16.155+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] Indexer Memory Utilization 0.0049
2023-01-06T21:20:16.155+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:16.155+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:16.155+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.155+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.155+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.155+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2023-01-06T21:20:16.155+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:16.155+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:16.155+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:16.155+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 incr placement - 2 server group, 3 replica, 1x
2023-01-06T21:20:16.157+05:30 [Warn] Index has more replica than server group. Index=index1 0 Bucket=bucket2 Scope= Collection=
2023-01-06T21:20:16.157+05:30 [Warn] Index has more replica than server group. Index=index1 0 (replica 1) Bucket=bucket2 Scope= Collection=
2023-01-06T21:20:16.157+05:30 [Warn] Index has more replica than server group. Index=index1 0 (replica 2) Bucket=bucket2 Scope= Collection=
2023-01-06T21:20:16.157+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:16.170+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.170+05:30 [Info] Score: 0
2023-01-06T21:20:16.170+05:30 [Info] Memory Quota: 530294000 (505.728M)
2023-01-06T21:20:16.170+05:30 [Info] CPU Quota: 16
2023-01-06T21:20:16.170+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2023-01-06T21:20:16.170+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:16.170+05:30 [Info] Indexer Memory Utilization 0.0049
2023-01-06T21:20:16.170+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:16.170+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:16.170+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:16.171+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.171+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.171+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.171+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.171+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.171+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.171+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2023-01-06T21:20:16.171+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:16.171+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:16.171+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:16.171+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:16.171+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 rebalance - 20-50M, 90 index, 20% shuffle, 1x, utilization 90%+
2023-01-06T21:20:16.175+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.1602006279485891
2023-01-06T21:20:16.175+05:30 [Info] Planner::initial temperature: initial resource variation 0.1602006279485891 temp 0.016020062794858913
2023-01-06T21:20:16.355+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.355+05:30 [Info] Score: 0.1942048001855833
2023-01-06T21:20:16.355+05:30 [Info] Memory Quota: 139586437120 (130G)
2023-01-06T21:20:16.355+05:30 [Info] CPU Quota: 30
2023-01-06T21:20:16.355+05:30 [Info] Indexer Memory Mean 88876568718 (82.7728G)
2023-01-06T21:20:16.355+05:30 [Info] Indexer Memory Deviation 2229516061 (2.0764G) (2.51%)
2023-01-06T21:20:16.355+05:30 [Info] Indexer Memory Utilization 0.6367
2023-01-06T21:20:16.355+05:30 [Info] Indexer CPU Mean 24.0538
2023-01-06T21:20:16.355+05:30 [Info] Indexer CPU Deviation 3.14 (13.05%)
2023-01-06T21:20:16.355+05:30 [Info] Indexer CPU Utilization 0.8018
2023-01-06T21:20:16.355+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.355+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.355+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.355+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.355+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.355+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.355+05:30 [Info] Indexer Data Size Mean 88876568718 (82.7728G)
2023-01-06T21:20:16.355+05:30 [Info] Indexer Data Size Deviation 2229516061 (2.0764G) (2.51%)
2023-01-06T21:20:16.355+05:30 [Info] Total Index Data (from non-deleted node) 993.273G
2023-01-06T21:20:16.355+05:30 [Info] Index Data Moved (exclude new node) 387.168G (38.98%)
2023-01-06T21:20:16.355+05:30 [Info] No. Index (from non-deleted node) 90
2023-01-06T21:20:16.355+05:30 [Info] No. Index Moved (exclude new node) 32 (35.56%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 rebalance - mixed small/medium, 90 index, 20% shuffle, 1x
2023-01-06T21:20:16.357+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.21549759676904523
2023-01-06T21:20:16.357+05:30 [Info] Planner::initial temperature: initial resource variation 0.21549759676904523 temp 0.021549759676904524
2023-01-06T21:20:16.591+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.591+05:30 [Info] Score: 0.19768468990568752
2023-01-06T21:20:16.591+05:30 [Info] Memory Quota: 536870912000 (500G)
2023-01-06T21:20:16.591+05:30 [Info] CPU Quota: 20
2023-01-06T21:20:16.591+05:30 [Info] Indexer Memory Mean 392505602195 (365.549G)
2023-01-06T21:20:16.591+05:30 [Info] Indexer Memory Deviation 23237756670 (21.6418G) (5.92%)
2023-01-06T21:20:16.591+05:30 [Info] Indexer Memory Utilization 0.7311
2023-01-06T21:20:16.591+05:30 [Info] Indexer CPU Mean 13.9305
2023-01-06T21:20:16.591+05:30 [Info] Indexer CPU Deviation 4.04 (29.04%)
2023-01-06T21:20:16.591+05:30 [Info] Indexer CPU Utilization 0.6965
2023-01-06T21:20:16.591+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.591+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.591+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.591+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.591+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.591+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.591+05:30 [Info] Indexer Data Size Mean 392505602195 (365.549G)
2023-01-06T21:20:16.591+05:30 [Info] Indexer Data Size Deviation 23237756670 (21.6418G) (5.92%)
2023-01-06T21:20:16.591+05:30 [Info] Total Index Data (from non-deleted node) 4.28378T
2023-01-06T21:20:16.591+05:30 [Info] Index Data Moved (exclude new node) 2.06182T (48.13%)
2023-01-06T21:20:16.591+05:30 [Info] No. Index (from non-deleted node) 90
2023-01-06T21:20:16.591+05:30 [Info] No. Index Moved (exclude new node) 21 (23.33%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 rebalance - travel sample, 10% shuffle, 1x
2023-01-06T21:20:16.592+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.6293300198704364
2023-01-06T21:20:16.592+05:30 [Info] Planner::initial temperature: initial resource variation 0.6293300198704364 temp 0.06293300198704364
2023-01-06T21:20:16.624+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.624+05:30 [Info] Score: 0.07695313793294842
2023-01-06T21:20:16.624+05:30 [Info] Memory Quota: 536870912 (512M)
2023-01-06T21:20:16.624+05:30 [Info] CPU Quota: 8
2023-01-06T21:20:16.624+05:30 [Info] Indexer Memory Mean 17503138 (16.6923M)
2023-01-06T21:20:16.624+05:30 [Info] Indexer Memory Deviation 160875 (157.104K) (0.92%)
2023-01-06T21:20:16.624+05:30 [Info] Indexer Memory Utilization 0.0326
2023-01-06T21:20:16.624+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:16.624+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:16.624+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:16.624+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.624+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.624+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.624+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.624+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.624+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.625+05:30 [Info] Indexer Data Size Mean 0 (0)
2023-01-06T21:20:16.625+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:16.625+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:16.625+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:16.625+05:30 [Info] No. Index (from non-deleted node) 10
2023-01-06T21:20:16.625+05:30 [Info] No. Index Moved (exclude new node) 3 (30.00%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 rebalance - 20-50M, 90 index, swap 2, 1x
2023-01-06T21:20:16.626+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.4095843409677649
2023-01-06T21:20:16.626+05:30 [Info] Planner::initial temperature: initial resource variation 0.4095843409677649 temp 0.04095843409677649
2023-01-06T21:20:16.889+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:16.889+05:30 [Info] Score: 0.016432207609294088
2023-01-06T21:20:16.889+05:30 [Info] Memory Quota: 139586437120 (130G)
2023-01-06T21:20:16.889+05:30 [Info] CPU Quota: 30
2023-01-06T21:20:16.889+05:30 [Info] Indexer Memory Mean 88876568718 (82.7728G)
2023-01-06T21:20:16.889+05:30 [Info] Indexer Memory Deviation 2920876457 (2.72028G) (3.29%)
2023-01-06T21:20:16.889+05:30 [Info] Indexer Memory Utilization 0.6367
2023-01-06T21:20:16.889+05:30 [Info] Indexer CPU Mean 24.0538
2023-01-06T21:20:16.889+05:30 [Info] Indexer CPU Deviation 1.40 (5.83%)
2023-01-06T21:20:16.889+05:30 [Info] Indexer CPU Utilization 0.8018
2023-01-06T21:20:16.889+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:16.889+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:16.889+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:16.889+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.889+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:16.889+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:16.889+05:30 [Info] Indexer Data Size Mean 88876568718 (82.7728G)
2023-01-06T21:20:16.889+05:30 [Info] Indexer Data Size Deviation 2920876457 (2.72028G) (3.29%)
2023-01-06T21:20:16.889+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:16.889+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:16.889+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:16.889+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:16 -------------------------------------------
2023/01/06 21:20:16 rebalance - mixed small/medium, 90 index, swap 2, 1x
2023-01-06T21:20:16.891+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.4082556623939647
2023-01-06T21:20:16.891+05:30 [Info] Planner::initial temperature: initial resource variation 0.4082556623939647 temp 0.040825566239396475
2023-01-06T21:20:17.065+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:17.065+05:30 [Info] Score: 0.0020377054775790903
2023-01-06T21:20:17.065+05:30 [Info] Memory Quota: 536870912000 (500G)
2023-01-06T21:20:17.065+05:30 [Info] CPU Quota: 20
2023-01-06T21:20:17.065+05:30 [Info] Indexer Memory Mean 392505602195 (365.549G)
2023-01-06T21:20:17.065+05:30 [Info] Indexer Memory Deviation 1599621631 (1.48976G) (0.41%)
2023-01-06T21:20:17.065+05:30 [Info] Indexer Memory Utilization 0.7311
2023-01-06T21:20:17.065+05:30 [Info] Indexer CPU Mean 13.9305
2023-01-06T21:20:17.065+05:30 [Info] Indexer CPU Deviation 0.62 (4.46%)
2023-01-06T21:20:17.065+05:30 [Info] Indexer CPU Utilization 0.6965
2023-01-06T21:20:17.065+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:17.065+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:17.065+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:17.065+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.065+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:17.065+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.065+05:30 [Info] Indexer Data Size Mean 392505602195 (365.549G)
2023-01-06T21:20:17.065+05:30 [Info] Indexer Data Size Deviation 1599621631 (1.48976G) (0.41%)
2023-01-06T21:20:17.065+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:17.065+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:17.065+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:17.065+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 rebalance - travel sample, swap 2, 1x
2023-01-06T21:20:17.065+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 1.0086622738872069
2023-01-06T21:20:17.065+05:30 [Info] Planner::initial temperature: initial resource variation 1.0086622738872069 temp 0.1008662273887207
2023-01-06T21:20:17.193+05:30 [Info] Score: 0.0003181929343700274
2023-01-06T21:20:17.193+05:30 [Info] Memory Quota: 536870912 (512M)
2023-01-06T21:20:17.193+05:30 [Info] CPU Quota: 8
2023-01-06T21:20:17.193+05:30 [Info] Indexer Memory Mean 17503138 (16.6923M)
2023-01-06T21:20:17.193+05:30 [Info] Indexer Memory Deviation 22277 (21.7549K) (0.13%)
2023-01-06T21:20:17.193+05:30 [Info] Indexer Memory Utilization 0.0326
2023-01-06T21:20:17.193+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:17.193+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:17.193+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:17.193+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:17.193+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:17.193+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:17.193+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.193+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:17.193+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.193+05:30 [Info] Indexer Data Size Mean 0 (0)
2023-01-06T21:20:17.193+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.193+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:17.193+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:17.193+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:17.193+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 rebalance - 8 identical index, add 4, 1x
2023-01-06T21:20:17.194+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 1
2023-01-06T21:20:17.194+05:30 [Info] Planner::initial temperature: initial resource variation 1 temp 0.1
2023-01-06T21:20:17.211+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:17.211+05:30 [Info] Score: 0.125
2023-01-06T21:20:17.211+05:30 [Info] Memory Quota: 530294000 (505.728M)
2023-01-06T21:20:17.211+05:30 [Info] CPU Quota: 2
2023-01-06T21:20:17.211+05:30 [Info] Indexer Memory Mean 2600000 (2.47955M)
2023-01-06T21:20:17.211+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.211+05:30 [Info] Indexer Memory Utilization 0.0049
2023-01-06T21:20:17.211+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:17.211+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:17.211+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:17.211+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:17.211+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:17.211+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:17.211+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.211+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:17.211+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.211+05:30 [Info] Indexer Data Size Mean 2600000 (2.47955M)
2023-01-06T21:20:17.211+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.211+05:30 [Info] Total Index Data (from non-deleted node) 19.8364M
2023-01-06T21:20:17.211+05:30 [Info] Index Data Moved (exclude new node) 4.95911M (25.00%)
2023-01-06T21:20:17.211+05:30 [Info] No. Index (from non-deleted node) 8
2023-01-06T21:20:17.211+05:30 [Info] No. Index Moved (exclude new node) 2 (25.00%)
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 rebalance - 8 identical index, delete 2, 2x
2023-01-06T21:20:17.212+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:17.212+05:30 [Info] Planner::initial temperature: initial resource variation 0 temp 1e-05
2023-01-06T21:20:17.259+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:17.259+05:30 [Info] Score: 0
2023-01-06T21:20:17.259+05:30 [Info] Memory Quota: 1060588000 (1011.46M)
2023-01-06T21:20:17.259+05:30 [Info] CPU Quota: 4
2023-01-06T21:20:17.259+05:30 [Info] Indexer Memory Mean 10400000 (9.91821M)
2023-01-06T21:20:17.259+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] Indexer Memory Utilization 0.0098
2023-01-06T21:20:17.259+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:17.259+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:17.259+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:17.259+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:17.259+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:17.259+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] Indexer Data Size Mean 10400000 (9.91821M)
2023-01-06T21:20:17.259+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:17.259+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:17.259+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:17.259+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 rebalance - drop replica - 3 replica, 3 zone, delete 1, 2x
2023-01-06T21:20:17.260+05:30 [Warn] There is more replica than available nodes.  Will not move index replica (default,,,country) from ejected node 127.0.0.1:9001
2023-01-06T21:20:17.260+05:30 [Info] Planner::planSingleRun Initial variance of the solution: NaN
2023-01-06T21:20:17.260+05:30 [Info] Planner::initial temperature: initial resource variation NaN temp NaN
2023-01-06T21:20:17.260+05:30 [Info] Score: 0
2023-01-06T21:20:17.260+05:30 [Info] Memory Quota: 536870912 (512M)
2023-01-06T21:20:17.260+05:30 [Info] CPU Quota: 16
2023-01-06T21:20:17.260+05:30 [Info] Indexer Memory Mean 0 (0)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Memory Utilization 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Data Size Mean 0 (0)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:17.260+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:17.260+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 rebalance - rebuild replica - 3 replica, 3 zone, add 1, delete 1, 1x
2023-01-06T21:20:17.260+05:30 [Info] Planner::planSingleRun Initial variance of the solution: NaN
2023-01-06T21:20:17.260+05:30 [Info] Planner::initial temperature: initial resource variation NaN temp NaN
2023-01-06T21:20:17.260+05:30 [Info] Planner::finalizing the solution as final solution is found.
2023-01-06T21:20:17.260+05:30 [Info] Score: 0
2023-01-06T21:20:17.260+05:30 [Info] Memory Quota: 268435456 (256M)
2023-01-06T21:20:17.260+05:30 [Info] CPU Quota: 8
2023-01-06T21:20:17.260+05:30 [Info] Indexer Memory Mean 0 (0)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Memory Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Memory Utilization 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer CPU Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer CPU Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer CPU Utilization 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer IO Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer IO Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Drain Rate Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer Drain Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Scan Rate Mean 0.0000
2023-01-06T21:20:17.260+05:30 [Info] Indexer Scan Rate Deviation 0.00 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Data Size Mean 0 (0)
2023-01-06T21:20:17.260+05:30 [Info] Indexer Data Size Deviation 0 (0) (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] Total Index Data (from non-deleted node) 0
2023-01-06T21:20:17.260+05:30 [Info] Index Data Moved (exclude new node) 0 (0.00%)
2023-01-06T21:20:17.260+05:30 [Info] No. Index (from non-deleted node) 0
2023-01-06T21:20:17.260+05:30 [Info] No. Index Moved (exclude new node) 0 (0.00%)
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 Minimum memory test 1: min memory = 0
2023-01-06T21:20:17.261+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.25
2023-01-06T21:20:17.261+05:30 [Info] Planner::initial temperature: initial resource variation 0.25 temp 0.025
2023-01-06T21:20:17.263+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023/01/06 21:20:17 -------------------------------------------
2023/01/06 21:20:17 Minimum memory test 2: min memory > quota
2023-01-06T21:20:17.264+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=5us
2023-01-06T21:20:17.264+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=5us
2023-01-06T21:20:17.264+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=5us
2023-01-06T21:20:17.264+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.25
2023-01-06T21:20:17.264+05:30 [Info] Planner::initial temperature: initial resource variation 0.25 temp 0.025
2023/01/06 21:20:18 -------------------------------------------
2023/01/06 21:20:18 Minimum memory test 3: min memory < quota
2023-01-06T21:20:18.196+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.25
2023-01-06T21:20:18.196+05:30 [Info] Planner::initial temperature: initial resource variation 0.25 temp 0.025
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 4: replica repair with min memory > quota
2023-01-06T21:20:19.125+05:30 [Info] Rebuilding lost replica for (default,,,country,0)
2023-01-06T21:20:19.125+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=6us
2023-01-06T21:20:19.125+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=5us
2023-01-06T21:20:19.125+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=4us
2023-01-06T21:20:19.125+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.3535533905932738
2023-01-06T21:20:19.126+05:30 [Info] Planner::initial temperature: initial resource variation 0.3535533905932738 temp 0.03535533905932738
2023-01-06T21:20:19.126+05:30 [Info] Planner::finalizing the solution as final solution is found.
2023-01-06T21:20:19.126+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.126+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=206us
2023-01-06T21:20:19.126+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.3535533905932738
2023-01-06T21:20:19.126+05:30 [Info] Planner::initial temperature: initial resource variation 0.3535533905932738 temp 0.03535533905932738
2023-01-06T21:20:19.126+05:30 [Info] Planner::finalizing the solution as final solution is found.
2023-01-06T21:20:19.126+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.126+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=266us
2023-01-06T21:20:19.126+05:30 [Info] Cannot rebuild lost replica due to resource constraint in cluster.  Will not rebuild lost replica.
2023-01-06T21:20:19.126+05:30 [Warn] 
MemoryQuota: 200
CpuQuota: 8
--- Violations for index  (mem 130, cpu 0) at node 127.0.0.1:9003 
	Cannot move to 127.0.0.1:9001: ReplicaViolation (free mem 1.67772e+07T, free cpu 8)
	Cannot move to 127.0.0.1:9002: ReplicaViolation (free mem 1.67772e+07T, free cpu 8)
2023-01-06T21:20:19.126+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0
2023-01-06T21:20:19.126+05:30 [Info] Planner::planSingleRun Skip running planner as current solution resource variation: 0 is less than threshold: 0. No nodes have been added or deleted and there are no violations observed
2023-01-06T21:20:19.126+05:30 [Info] Planner::initial temperature: initial resource variation 0 temp 1e-05
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 5: replica repair with min memory < quota
2023-01-06T21:20:19.126+05:30 [Info] Rebuilding lost replica for (default,,,country,0)
2023-01-06T21:20:19.126+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.3535533905932738
2023-01-06T21:20:19.127+05:30 [Info] Planner::initial temperature: initial resource variation 0.3535533905932738 temp 0.03535533905932738
2023-01-06T21:20:19.140+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 6: rebalance with min memory > quota
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=7us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=5us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=3us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=3us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=4us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=8us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=7.  Elapsed Time=5us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=8.  Elapsed Time=3us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=9.  Elapsed Time=4us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=10.  Elapsed Time=3us
2023-01-06T21:20:19.140+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=11.  Elapsed Time=3us
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=12.  Elapsed Time=4us
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 7: rebalance-out with min memory > quota
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=5us
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=5us
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=3us
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=3us
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=3us
2023-01-06T21:20:19.141+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=3us
2023-01-06T21:20:19.141+05:30 [Warn] Unable to find a solution with resource constraint.  Relax resource constraint check.
2023-01-06T21:20:19.141+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.25
2023-01-06T21:20:19.141+05:30 [Info] Planner::initial temperature: initial resource variation 0.25 temp 0.025
2023-01-06T21:20:19.142+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 8: plan with min memory > quota
2023-01-06T21:20:19.144+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.041666666666666664
2023-01-06T21:20:19.145+05:30 [Info] Planner::finalizing the solution as final solution is found.
2023-01-06T21:20:19.145+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.145+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=1ms
2023-01-06T21:20:19.145+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=7us
2023-01-06T21:20:19.145+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=5us
2023-01-06T21:20:19.145+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=7us
2023-01-06T21:20:19.145+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=9us
2023-01-06T21:20:19.145+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=6us
2023-01-06T21:20:19.146+05:30 [Warn] Unable to find a solution with resource constraint.  Relax resource constraint check.
2023-01-06T21:20:19.146+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.041666666666666664
2023-01-06T21:20:19.146+05:30 [Info] Planner::finalizing the solution as final solution is found.
2023-01-06T21:20:19.146+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 9: single node rebalance with min memory > quota
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=5us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=9us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=5us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=5us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=8us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=5us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=7.  Elapsed Time=5us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=8.  Elapsed Time=6us
2023-01-06T21:20:19.146+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=9.  Elapsed Time=4us
2023-01-06T21:20:19.147+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=10.  Elapsed Time=5us
2023-01-06T21:20:19.147+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=11.  Elapsed Time=8us
2023-01-06T21:20:19.147+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=12.  Elapsed Time=5us
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Minimum memory test 10: plan with partitioned index on empty cluster
2023-01-06T21:20:19.149+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 1.7320508075688774
2023-01-06T21:20:19.196+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 iterationTest :: Remove one node - failure
2023-01-06T21:20:19.197+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.5697646091336376
2023-01-06T21:20:19.273+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.273+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.273+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.273+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.273+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.273+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.1681845925586878).
2023-01-06T21:20:19.273+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=1.  Elapsed Time=362us
2023-01-06T21:20:19.273+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.273+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.273+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.273+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.273+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.1772076764226848).
2023-01-06T21:20:19.273+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=2.  Elapsed Time=336us
2023-01-06T21:20:19.273+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.274+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.274+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.274+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.274+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.15733742992848196).
2023-01-06T21:20:19.274+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=3.  Elapsed Time=341us
2023-01-06T21:20:19.274+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.274+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.274+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.274+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.274+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.16095192579200832).
2023-01-06T21:20:19.274+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=4.  Elapsed Time=308us
2023-01-06T21:20:19.274+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.274+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.274+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.274+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.274+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.1772076764226848).
2023-01-06T21:20:19.275+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=5.  Elapsed Time=361us
2023-01-06T21:20:19.275+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.275+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.275+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.275+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.275+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.18658561986380226).
2023-01-06T21:20:19.275+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=6.  Elapsed Time=352us
2023-01-06T21:20:19.275+05:30 [Warn] Unable to find a solution with resource constraint.  Relax resource constraint check.
2023-01-06T21:20:19.275+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.275+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.275+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.275+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.275+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.14888922695789564).
2023-01-06T21:20:19.275+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=7.  Elapsed Time=381us
2023-01-06T21:20:19.275+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.276+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.276+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.276+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.276+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.21660777384642865).
2023-01-06T21:20:19.276+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=8.  Elapsed Time=355us
2023-01-06T21:20:19.276+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.276+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.276+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.276+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.276+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.1107109451819674).
2023-01-06T21:20:19.276+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=9.  Elapsed Time=362us
2023-01-06T21:20:19.276+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.276+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.276+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.277+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.277+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.15659690989512468).
2023-01-06T21:20:19.277+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=10.  Elapsed Time=395us
2023-01-06T21:20:19.277+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.277+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.277+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.277+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.277+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.11915291317194429).
2023-01-06T21:20:19.277+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=11.  Elapsed Time=315us
2023-01-06T21:20:19.277+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.012211031557114685
2023-01-06T21:20:19.277+05:30 [Info] Planner::initial temperature: initial resource variation 0.012211031557114685 temp 0.03377889684428853
2023-01-06T21:20:19.277+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.277+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (2, 2) exceeded.
2023-01-06T21:20:19.277+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.183072198169457).
2023-01-06T21:20:19.277+05:30 [Info] Planner::Fail to create plan satisfying constraint. Re-planning. Num of Try=12.  Elapsed Time=324us
2023-01-06T21:20:19.277+05:30 [Error] 
MemoryQuota: 5302940000000
CpuQuota: 6
--- Violations for index <9869264472035245117, bucket1, , > (mem 11.7617G, cpu 1.8469600000000002) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <1946732439605335068, bucket1, , > (mem 17.0646G, cpu 2.15156) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <6970866035609823109, bucket1, , > (mem 15.3745G, cpu 0.83084) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <4417420166937838487, bucket1, , > (mem 11.7892G, cpu 3.1889600000000002) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <14408061584443043197, bucket1, , > (mem 14.4096G, cpu 3.80668) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <14723783278160467203, bucket1, , > (mem 16.831G, cpu 3.16488) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <3969638527622314900, bucket1, , > (mem 18.0528G, cpu 4.89724) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
--- Violations for index <8775855671353469337, bucket1, , > (mem 17.38G, cpu 4.26428) at node 127.0.0.1:9001 
	Can move to 127.0.0.1:9002: NoViolation (free mem 4.65387T, free cpu -37.00036)
	Can move to 127.0.0.1:9003: NoViolation (free mem 4.61965T, free cpu -41.932520000000004)
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 iterationTest :: Remove one node - success
2023-01-06T21:20:19.278+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.1868130748497835
2023-01-06T21:20:19.357+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.358+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.01702550616177791
2023-01-06T21:20:19.358+05:30 [Info] Planner::initial temperature: initial resource variation 0.01702550616177791 temp 0.023297449383822208
2023-01-06T21:20:19.358+05:30 [Info] Planner::Running more iterations than 1 because of deleted nodes.
2023-01-06T21:20:19.358+05:30 [Info] Planner::Finished planner run after 10 iterations.
2023-01-06T21:20:19.358+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.029122181158095384).
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 iterationTest :: Index rebuild - success
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i15,1)
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i18,1)
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i22,1)
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i0,1)
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i5,1)
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i10,1)
2023-01-06T21:20:19.360+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i13,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i4,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i8,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i11,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i24,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i26,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i28,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i2,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i27,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i1,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i3,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i6,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i12,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i14,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i20,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i23,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i19,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i21,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i25,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i29,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i7,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i9,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i16,1)
2023-01-06T21:20:19.361+05:30 [Info] Rebuilding lost replica for (default,_default,_default,i17,1)
2023-01-06T21:20:19.362+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.5792715732327589
2023-01-06T21:20:19.362+05:30 [Info] Planner::planSingleRun Skip running planner as current solution resource variation: 0.5792715732327589 is less than threshold: 0.8. No nodes have been added or deleted and there are no violations observed
2023-01-06T21:20:19.362+05:30 [Info] Planner::initial temperature: initial resource variation 0.5792715732327589 temp 0.022072842676724116
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 iterationTest :: Index rebuild - initial placement - success
2023-01-06T21:20:19.363+05:30 [Info] Rebuilding lost replica for (default,,,idx1,1)
2023-01-06T21:20:19.363+05:30 [Info] Rebuilding lost replica for (default,,,idx2,1)
2023-01-06T21:20:19.363+05:30 [Info] Rebuilding lost replica for (default,,,idx3,1)
2023-01-06T21:20:19.363+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.6236095644623236
2023-01-06T21:20:19.363+05:30 [Info] Planner::initial temperature: initial resource variation 0.6236095644623236 temp 0.03663904355376764
2023-01-06T21:20:19.363+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (0, 0) exceeded.
2023-01-06T21:20:19.363+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.363+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.6236095644623236).
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 iterationTest :: Index rebuild - initial placement - numReplica > numSG - success
2023-01-06T21:20:19.364+05:30 [Info] Rebuilding lost replica for (default,,,idx1,1)
2023-01-06T21:20:19.364+05:30 [Info] Rebuilding lost replica for (default,,,idx1,2)
2023-01-06T21:20:19.364+05:30 [Info] Rebuilding lost replica for (default,,,idx2,1)
2023-01-06T21:20:19.364+05:30 [Info] Rebuilding lost replica for (default,,,idx2,2)
2023-01-06T21:20:19.364+05:30 [Info] Rebuilding lost replica for (default,,,idx3,1)
2023-01-06T21:20:19.364+05:30 [Info] Rebuilding lost replica for (default,,,idx3,2)
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx2 0 (replica 1) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx3 0 (replica 1) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx2 0 Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx1 0 (replica 1) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx1 0 (replica 2) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx2 0 (replica 2) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx3 0 (replica 2) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx1 0 Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx3 0 Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx3 0 (replica 1) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx2 0 Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx1 0 (replica 1) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx2 0 (replica 1) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx2 0 (replica 2) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx3 0 (replica 2) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx1 0 Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx3 0 Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Warn] Index has more replica than server group. Index=idx1 0 (replica 2) Bucket=default Scope= Collection=
2023-01-06T21:20:19.364+05:30 [Info] Planner::planSingleRun Initial variance of the solution: 0.4581228472908511
2023-01-06T21:20:19.364+05:30 [Info] Planner::initial temperature: initial resource variation 0.4581228472908511 temp 0.053187715270914884
2023-01-06T21:20:19.364+05:30 [Info] Planner::stop planner iter per temp as maxIterPerTemp limit (0, 0) exceeded.
2023-01-06T21:20:19.364+05:30 [Info] Planner::finalizing the solution as there are no more valid index movements.
2023-01-06T21:20:19.364+05:30 [Info] Planner::finalizing the solution as the current resource variation is under control (0.4581228472908511).
--- PASS: TestPlanner (7.09s)
=== RUN   TestGreedyPlanner
2023/01/06 21:20:19 In TestGreedyPlanner()
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 3 empty nodes - 1 SG
2023-01-06T21:20:19.367+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 2 empty nodes, 1 non-empty node - 1 SG
2023-01-06T21:20:19.369+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 1 empty node, 2 non-empty nodes - 1 SG
2023-01-06T21:20:19.371+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 3 non-empty nodes - 1 SG
2023-01-06T21:20:19.373+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 3 empty nodes - 1 SG
2023-01-06T21:20:19.375+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 2 empty nodes, 1 non-empty node - 1 SG
2023-01-06T21:20:19.377+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 1 empty node, 2 non-empty nodes - 1 SG
2023-01-06T21:20:19.379+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 3 non-empty nodes - 1 SG
2023-01-06T21:20:19.380+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 2 Replica - 3 empty nodes - 1 SG
2023-01-06T21:20:19.383+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 2 Replica - 3 non-empty nodes - 1 SG
2023-01-06T21:20:19.384+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 2 empty nodes, 1 non-empty node - 2 SG
2023-01-06T21:20:19.386+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 1 empty node, 2 non-empty nodes - 2 SG
2023-01-06T21:20:19.388+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Index With 1 Replica - 3 non-empty nodes - 2 SG
2023-01-06T21:20:19.391+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Equivalent Index Without any replica - 3 non-empty nodes - 1 SG
2023-01-06T21:20:19.393+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Equivalent Index With 1 Replica - 3 non-empty nodes - 1 SG - Skip least loaded node
2023-01-06T21:20:19.395+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Equivalent Index With 1 Replica - 3 non-empty nodes - 1 SG - Use least loaded node
2023-01-06T21:20:19.397+05:30 [Info] Using greedy index placement for index 987654
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place 60 index instances on 3 empty nodes - 1 SG
2023-01-06T21:20:19.399+05:30 [Info] Using greedy index placement for index 1000987654
2023-01-06T21:20:19.401+05:30 [Info] Using greedy index placement for index 1001987654
2023-01-06T21:20:19.403+05:30 [Info] Using greedy index placement for index 1002987654
2023-01-06T21:20:19.405+05:30 [Info] Using greedy index placement for index 1003987654
2023-01-06T21:20:19.407+05:30 [Info] Using greedy index placement for index 1004987654
2023-01-06T21:20:19.408+05:30 [Info] Using greedy index placement for index 1005987654
2023-01-06T21:20:19.410+05:30 [Info] Using greedy index placement for index 1006987654
2023-01-06T21:20:19.412+05:30 [Info] Using greedy index placement for index 1007987654
2023-01-06T21:20:19.414+05:30 [Info] Using greedy index placement for index 1008987654
2023-01-06T21:20:19.416+05:30 [Info] Using greedy index placement for index 1009987654
2023-01-06T21:20:19.417+05:30 [Info] Using greedy index placement for index 1010987654
2023-01-06T21:20:19.419+05:30 [Info] Using greedy index placement for index 1011987654
2023-01-06T21:20:19.421+05:30 [Info] Using greedy index placement for index 1012987654
2023-01-06T21:20:19.423+05:30 [Info] Using greedy index placement for index 1013987654
2023-01-06T21:20:19.425+05:30 [Info] Using greedy index placement for index 1014987654
2023-01-06T21:20:19.427+05:30 [Info] Using greedy index placement for index 1015987654
2023-01-06T21:20:19.428+05:30 [Info] Using greedy index placement for index 1016987654
2023-01-06T21:20:19.430+05:30 [Info] Using greedy index placement for index 1017987654
2023-01-06T21:20:19.432+05:30 [Info] Using greedy index placement for index 1018987654
2023-01-06T21:20:19.434+05:30 [Info] Using greedy index placement for index 1019987654
2023-01-06T21:20:19.436+05:30 [Info] Using greedy index placement for index 1020987654
2023-01-06T21:20:19.437+05:30 [Info] Using greedy index placement for index 1021987654
2023-01-06T21:20:19.439+05:30 [Info] Using greedy index placement for index 1022987654
2023-01-06T21:20:19.441+05:30 [Info] Using greedy index placement for index 1023987654
2023-01-06T21:20:19.443+05:30 [Info] Using greedy index placement for index 1024987654
2023-01-06T21:20:19.444+05:30 [Info] Using greedy index placement for index 1025987654
2023-01-06T21:20:19.446+05:30 [Info] Using greedy index placement for index 1026987654
2023-01-06T21:20:19.448+05:30 [Info] Using greedy index placement for index 1027987654
2023-01-06T21:20:19.450+05:30 [Info] Using greedy index placement for index 1028987654
2023-01-06T21:20:19.451+05:30 [Info] Using greedy index placement for index 1029987654
2023-01-06T21:20:19.453+05:30 [Info] Using greedy index placement for index 1030987654
2023-01-06T21:20:19.455+05:30 [Info] Using greedy index placement for index 1031987654
2023-01-06T21:20:19.457+05:30 [Info] Using greedy index placement for index 1032987654
2023-01-06T21:20:19.458+05:30 [Info] Using greedy index placement for index 1033987654
2023-01-06T21:20:19.460+05:30 [Info] Using greedy index placement for index 1034987654
2023-01-06T21:20:19.462+05:30 [Info] Using greedy index placement for index 1035987654
2023-01-06T21:20:19.464+05:30 [Info] Using greedy index placement for index 1036987654
2023-01-06T21:20:19.466+05:30 [Info] Using greedy index placement for index 1037987654
2023-01-06T21:20:19.468+05:30 [Info] Using greedy index placement for index 1038987654
2023-01-06T21:20:19.470+05:30 [Info] Using greedy index placement for index 1039987654
2023-01-06T21:20:19.471+05:30 [Info] Using greedy index placement for index 1040987654
2023-01-06T21:20:19.473+05:30 [Info] Using greedy index placement for index 1041987654
2023-01-06T21:20:19.475+05:30 [Info] Using greedy index placement for index 1042987654
2023-01-06T21:20:19.477+05:30 [Info] Using greedy index placement for index 1043987654
2023-01-06T21:20:19.478+05:30 [Info] Using greedy index placement for index 1044987654
2023-01-06T21:20:19.480+05:30 [Info] Using greedy index placement for index 1045987654
2023-01-06T21:20:19.482+05:30 [Info] Using greedy index placement for index 1046987654
2023-01-06T21:20:19.484+05:30 [Info] Using greedy index placement for index 1047987654
2023-01-06T21:20:19.485+05:30 [Info] Using greedy index placement for index 1048987654
2023-01-06T21:20:19.487+05:30 [Info] Using greedy index placement for index 1049987654
2023-01-06T21:20:19.489+05:30 [Info] Using greedy index placement for index 1050987654
2023-01-06T21:20:19.491+05:30 [Info] Using greedy index placement for index 1051987654
2023-01-06T21:20:19.493+05:30 [Info] Using greedy index placement for index 1052987654
2023-01-06T21:20:19.495+05:30 [Info] Using greedy index placement for index 1053987654
2023-01-06T21:20:19.497+05:30 [Info] Using greedy index placement for index 1054987654
2023-01-06T21:20:19.498+05:30 [Info] Using greedy index placement for index 1055987654
2023-01-06T21:20:19.500+05:30 [Info] Using greedy index placement for index 1056987654
2023-01-06T21:20:19.502+05:30 [Info] Using greedy index placement for index 1057987654
2023-01-06T21:20:19.504+05:30 [Info] Using greedy index placement for index 1058987654
2023-01-06T21:20:19.506+05:30 [Info] Using greedy index placement for index 1059987654
2023-01-06T21:20:19.506+05:30 [Info] Actual variance of deferred index count across nodes is 0
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place 60 index instances on 1 empty and 1 10 percent filled node - 1 SG
2023-01-06T21:20:19.508+05:30 [Info] Using greedy index placement for index 1000987654
2023-01-06T21:20:19.509+05:30 [Info] Using greedy index placement for index 1001987654
2023-01-06T21:20:19.511+05:30 [Info] Using greedy index placement for index 1002987654
2023-01-06T21:20:19.513+05:30 [Info] Using greedy index placement for index 1003987654
2023-01-06T21:20:19.514+05:30 [Info] Using greedy index placement for index 1004987654
2023-01-06T21:20:19.516+05:30 [Info] Using greedy index placement for index 1005987654
2023-01-06T21:20:19.518+05:30 [Info] Using greedy index placement for index 1006987654
2023-01-06T21:20:19.520+05:30 [Info] Using greedy index placement for index 1007987654
2023-01-06T21:20:19.521+05:30 [Info] Using greedy index placement for index 1008987654
2023-01-06T21:20:19.523+05:30 [Info] Using greedy index placement for index 1009987654
2023-01-06T21:20:19.525+05:30 [Info] Using greedy index placement for index 1010987654
2023-01-06T21:20:19.526+05:30 [Info] Using greedy index placement for index 1011987654
2023-01-06T21:20:19.528+05:30 [Info] Using greedy index placement for index 1012987654
2023-01-06T21:20:19.530+05:30 [Info] Using greedy index placement for index 1013987654
2023-01-06T21:20:19.531+05:30 [Info] Using greedy index placement for index 1014987654
2023-01-06T21:20:19.533+05:30 [Info] Using greedy index placement for index 1015987654
2023-01-06T21:20:19.535+05:30 [Info] Using greedy index placement for index 1016987654
2023-01-06T21:20:19.536+05:30 [Info] Using greedy index placement for index 1017987654
2023-01-06T21:20:19.538+05:30 [Info] Using greedy index placement for index 1018987654
2023-01-06T21:20:19.540+05:30 [Info] Using greedy index placement for index 1019987654
2023-01-06T21:20:19.542+05:30 [Info] Using greedy index placement for index 1020987654
2023-01-06T21:20:19.543+05:30 [Info] Using greedy index placement for index 1021987654
2023-01-06T21:20:19.545+05:30 [Info] Using greedy index placement for index 1022987654
2023-01-06T21:20:19.547+05:30 [Info] Using greedy index placement for index 1023987654
2023-01-06T21:20:19.549+05:30 [Info] Using greedy index placement for index 1024987654
2023-01-06T21:20:19.551+05:30 [Info] Using greedy index placement for index 1025987654
2023-01-06T21:20:19.552+05:30 [Info] Using greedy index placement for index 1026987654
2023-01-06T21:20:19.554+05:30 [Info] Using greedy index placement for index 1027987654
2023-01-06T21:20:19.556+05:30 [Info] Using greedy index placement for index 1028987654
2023-01-06T21:20:19.558+05:30 [Info] Using greedy index placement for index 1029987654
2023-01-06T21:20:19.560+05:30 [Info] Using greedy index placement for index 1030987654
2023-01-06T21:20:19.562+05:30 [Info] Using greedy index placement for index 1031987654
2023-01-06T21:20:19.564+05:30 [Info] Using greedy index placement for index 1032987654
2023-01-06T21:20:19.566+05:30 [Info] Using greedy index placement for index 1033987654
2023-01-06T21:20:19.567+05:30 [Info] Using greedy index placement for index 1034987654
2023-01-06T21:20:19.569+05:30 [Info] Using greedy index placement for index 1035987654
2023-01-06T21:20:19.572+05:30 [Info] Using greedy index placement for index 1036987654
2023-01-06T21:20:19.573+05:30 [Info] Using greedy index placement for index 1037987654
2023-01-06T21:20:19.575+05:30 [Info] Using greedy index placement for index 1038987654
2023-01-06T21:20:19.577+05:30 [Info] Using greedy index placement for index 1039987654
2023-01-06T21:20:19.578+05:30 [Info] Using greedy index placement for index 1040987654
2023-01-06T21:20:19.580+05:30 [Info] Using greedy index placement for index 1041987654
2023-01-06T21:20:19.582+05:30 [Info] Using greedy index placement for index 1042987654
2023-01-06T21:20:19.584+05:30 [Info] Using greedy index placement for index 1043987654
2023-01-06T21:20:19.586+05:30 [Info] Using greedy index placement for index 1044987654
2023-01-06T21:20:19.588+05:30 [Info] Using greedy index placement for index 1045987654
2023-01-06T21:20:19.590+05:30 [Info] Using greedy index placement for index 1046987654
2023-01-06T21:20:19.592+05:30 [Info] Using greedy index placement for index 1047987654
2023-01-06T21:20:19.594+05:30 [Info] Using greedy index placement for index 1048987654
2023-01-06T21:20:19.595+05:30 [Info] Using greedy index placement for index 1049987654
2023-01-06T21:20:19.597+05:30 [Info] Using greedy index placement for index 1050987654
2023-01-06T21:20:19.599+05:30 [Info] Using greedy index placement for index 1051987654
2023-01-06T21:20:19.601+05:30 [Info] Using greedy index placement for index 1052987654
2023-01-06T21:20:19.602+05:30 [Info] Using greedy index placement for index 1053987654
2023-01-06T21:20:19.604+05:30 [Info] Using greedy index placement for index 1054987654
2023-01-06T21:20:19.606+05:30 [Info] Using greedy index placement for index 1055987654
2023-01-06T21:20:19.608+05:30 [Info] Using greedy index placement for index 1056987654
2023-01-06T21:20:19.610+05:30 [Info] Using greedy index placement for index 1057987654
2023-01-06T21:20:19.611+05:30 [Info] Using greedy index placement for index 1058987654
2023-01-06T21:20:19.613+05:30 [Info] Using greedy index placement for index 1059987654
2023-01-06T21:20:19.613+05:30 [Info] Actual variance of deferred index count across nodes is 8
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place 60 index instances on 1 empty and 1 30 percent filled node - 1 SG
2023-01-06T21:20:19.616+05:30 [Info] Using greedy index placement for index 1000987654
2023-01-06T21:20:19.618+05:30 [Info] Using greedy index placement for index 1001987654
2023-01-06T21:20:19.620+05:30 [Info] Using greedy index placement for index 1002987654
2023-01-06T21:20:19.621+05:30 [Info] Using greedy index placement for index 1003987654
2023-01-06T21:20:19.624+05:30 [Info] Using greedy index placement for index 1004987654
2023-01-06T21:20:19.626+05:30 [Info] Using greedy index placement for index 1005987654
2023-01-06T21:20:19.628+05:30 [Info] Using greedy index placement for index 1006987654
2023-01-06T21:20:19.629+05:30 [Info] Using greedy index placement for index 1007987654
2023-01-06T21:20:19.632+05:30 [Info] Using greedy index placement for index 1008987654
2023-01-06T21:20:19.634+05:30 [Info] Using greedy index placement for index 1009987654
2023-01-06T21:20:19.635+05:30 [Info] Using greedy index placement for index 1010987654
2023-01-06T21:20:19.638+05:30 [Info] Using greedy index placement for index 1011987654
2023-01-06T21:20:19.640+05:30 [Info] Using greedy index placement for index 1012987654
2023-01-06T21:20:19.642+05:30 [Info] Using greedy index placement for index 1013987654
2023-01-06T21:20:19.644+05:30 [Info] Using greedy index placement for index 1014987654
2023-01-06T21:20:19.646+05:30 [Info] Using greedy index placement for index 1015987654
2023-01-06T21:20:19.648+05:30 [Info] Using greedy index placement for index 1016987654
2023-01-06T21:20:19.650+05:30 [Info] Using greedy index placement for index 1017987654
2023-01-06T21:20:19.653+05:30 [Info] Using greedy index placement for index 1018987654
2023-01-06T21:20:19.655+05:30 [Info] Using greedy index placement for index 1019987654
2023-01-06T21:20:19.657+05:30 [Info] Using greedy index placement for index 1020987654
2023-01-06T21:20:19.659+05:30 [Info] Using greedy index placement for index 1021987654
2023-01-06T21:20:19.661+05:30 [Info] Using greedy index placement for index 1022987654
2023-01-06T21:20:19.663+05:30 [Info] Using greedy index placement for index 1023987654
2023-01-06T21:20:19.665+05:30 [Info] Using greedy index placement for index 1024987654
2023-01-06T21:20:19.668+05:30 [Info] Using greedy index placement for index 1025987654
2023-01-06T21:20:19.670+05:30 [Info] Using greedy index placement for index 1026987654
2023-01-06T21:20:19.672+05:30 [Info] Using greedy index placement for index 1027987654
2023-01-06T21:20:19.674+05:30 [Info] Using greedy index placement for index 1028987654
2023-01-06T21:20:19.676+05:30 [Info] Using greedy index placement for index 1029987654
2023-01-06T21:20:19.678+05:30 [Info] Using greedy index placement for index 1030987654
2023-01-06T21:20:19.681+05:30 [Info] Using greedy index placement for index 1031987654
2023-01-06T21:20:19.683+05:30 [Info] Using greedy index placement for index 1032987654
2023-01-06T21:20:19.685+05:30 [Info] Using greedy index placement for index 1033987654
2023-01-06T21:20:19.687+05:30 [Info] Using greedy index placement for index 1034987654
2023-01-06T21:20:19.690+05:30 [Info] Using greedy index placement for index 1035987654
2023-01-06T21:20:19.692+05:30 [Info] Using greedy index placement for index 1036987654
2023-01-06T21:20:19.694+05:30 [Info] Using greedy index placement for index 1037987654
2023-01-06T21:20:19.696+05:30 [Info] Using greedy index placement for index 1038987654
2023-01-06T21:20:19.698+05:30 [Info] Using greedy index placement for index 1039987654
2023-01-06T21:20:19.700+05:30 [Info] Using greedy index placement for index 1040987654
2023-01-06T21:20:19.702+05:30 [Info] Using greedy index placement for index 1041987654
2023-01-06T21:20:19.704+05:30 [Info] Using greedy index placement for index 1042987654
2023-01-06T21:20:19.707+05:30 [Info] Using greedy index placement for index 1043987654
2023-01-06T21:20:19.709+05:30 [Info] Using greedy index placement for index 1044987654
2023-01-06T21:20:19.711+05:30 [Info] Using greedy index placement for index 1045987654
2023-01-06T21:20:19.713+05:30 [Info] Using greedy index placement for index 1046987654
2023-01-06T21:20:19.715+05:30 [Info] Using greedy index placement for index 1047987654
2023-01-06T21:20:19.717+05:30 [Info] Using greedy index placement for index 1048987654
2023-01-06T21:20:19.719+05:30 [Info] Using greedy index placement for index 1049987654
2023-01-06T21:20:19.721+05:30 [Info] Using greedy index placement for index 1050987654
2023-01-06T21:20:19.723+05:30 [Info] Using greedy index placement for index 1051987654
2023-01-06T21:20:19.725+05:30 [Info] Using greedy index placement for index 1052987654
2023-01-06T21:20:19.727+05:30 [Info] Using greedy index placement for index 1053987654
2023-01-06T21:20:19.729+05:30 [Info] Using greedy index placement for index 1054987654
2023-01-06T21:20:19.731+05:30 [Info] Using greedy index placement for index 1055987654
2023-01-06T21:20:19.733+05:30 [Info] Using greedy index placement for index 1056987654
2023-01-06T21:20:19.736+05:30 [Info] Using greedy index placement for index 1057987654
2023-01-06T21:20:19.738+05:30 [Info] Using greedy index placement for index 1058987654
2023-01-06T21:20:19.740+05:30 [Info] Using greedy index placement for index 1059987654
2023-01-06T21:20:19.740+05:30 [Info] Actual variance of deferred index count across nodes is 98
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place 5 index instances on 1 empty and 1 60 percent filled node - 1 SG
2023-01-06T21:20:19.742+05:30 [Info] Using greedy index placement for index 1000987654
2023-01-06T21:20:19.744+05:30 [Info] Using greedy index placement for index 1001987654
2023-01-06T21:20:19.746+05:30 [Info] Using greedy index placement for index 1002987654
2023-01-06T21:20:19.748+05:30 [Info] Using greedy index placement for index 1003987654
2023-01-06T21:20:19.750+05:30 [Info] Using greedy index placement for index 1004987654
2023-01-06T21:20:19.751+05:30 [Info] Actual variance of deferred index count across nodes is 4.5
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place 60 index instances on 1 empty and 1 60 percent filled node - 1 SG
2023-01-06T21:20:19.753+05:30 [Info] Using greedy index placement for index 1000987654
2023-01-06T21:20:19.755+05:30 [Info] Using greedy index placement for index 1001987654
2023-01-06T21:20:19.757+05:30 [Info] Using greedy index placement for index 1002987654
2023-01-06T21:20:19.759+05:30 [Info] Using greedy index placement for index 1003987654
2023-01-06T21:20:19.761+05:30 [Info] Using greedy index placement for index 1004987654
2023-01-06T21:20:19.763+05:30 [Info] Using greedy index placement for index 1005987654
2023-01-06T21:20:19.765+05:30 [Info] Using greedy index placement for index 1006987654
2023-01-06T21:20:19.767+05:30 [Info] Using greedy index placement for index 1007987654
2023-01-06T21:20:19.769+05:30 [Info] Using greedy index placement for index 1008987654
2023-01-06T21:20:19.771+05:30 [Info] Using greedy index placement for index 1009987654
2023-01-06T21:20:19.773+05:30 [Info] Using greedy index placement for index 1010987654
2023-01-06T21:20:19.775+05:30 [Info] Using greedy index placement for index 1011987654
2023-01-06T21:20:19.778+05:30 [Info] Using greedy index placement for index 1012987654
2023-01-06T21:20:19.780+05:30 [Info] Using greedy index placement for index 1013987654
2023-01-06T21:20:19.782+05:30 [Info] Using greedy index placement for index 1014987654
2023-01-06T21:20:19.784+05:30 [Info] Using greedy index placement for index 1015987654
2023-01-06T21:20:19.786+05:30 [Info] Using greedy index placement for index 1016987654
2023-01-06T21:20:19.788+05:30 [Info] Using greedy index placement for index 1017987654
2023-01-06T21:20:19.790+05:30 [Info] Using greedy index placement for index 1018987654
2023-01-06T21:20:19.792+05:30 [Info] Using greedy index placement for index 1019987654
2023-01-06T21:20:19.793+05:30 [Info] Using greedy index placement for index 1020987654
2023-01-06T21:20:19.795+05:30 [Info] Using greedy index placement for index 1021987654
2023-01-06T21:20:19.797+05:30 [Info] Using greedy index placement for index 1022987654
2023-01-06T21:20:19.799+05:30 [Info] Using greedy index placement for index 1023987654
2023-01-06T21:20:19.801+05:30 [Info] Using greedy index placement for index 1024987654
2023-01-06T21:20:19.803+05:30 [Info] Using greedy index placement for index 1025987654
2023-01-06T21:20:19.805+05:30 [Info] Using greedy index placement for index 1026987654
2023-01-06T21:20:19.807+05:30 [Info] Using greedy index placement for index 1027987654
2023-01-06T21:20:19.809+05:30 [Info] Using greedy index placement for index 1028987654
2023-01-06T21:20:19.811+05:30 [Info] Using greedy index placement for index 1029987654
2023-01-06T21:20:19.813+05:30 [Info] Using greedy index placement for index 1030987654
2023-01-06T21:20:19.816+05:30 [Info] Using greedy index placement for index 1031987654
2023-01-06T21:20:19.818+05:30 [Info] Using greedy index placement for index 1032987654
2023-01-06T21:20:19.820+05:30 [Info] Using greedy index placement for index 1033987654
2023-01-06T21:20:19.822+05:30 [Info] Using greedy index placement for index 1034987654
2023-01-06T21:20:19.824+05:30 [Info] Using greedy index placement for index 1035987654
2023-01-06T21:20:19.826+05:30 [Info] Using greedy index placement for index 1036987654
2023-01-06T21:20:19.828+05:30 [Info] Using greedy index placement for index 1037987654
2023-01-06T21:20:19.831+05:30 [Info] Using greedy index placement for index 1038987654
2023-01-06T21:20:19.833+05:30 [Info] Using greedy index placement for index 1039987654
2023-01-06T21:20:19.835+05:30 [Info] Using greedy index placement for index 1040987654
2023-01-06T21:20:19.837+05:30 [Info] Using greedy index placement for index 1041987654
2023-01-06T21:20:19.839+05:30 [Info] Using greedy index placement for index 1042987654
2023-01-06T21:20:19.841+05:30 [Info] Using greedy index placement for index 1043987654
2023-01-06T21:20:19.843+05:30 [Info] Using greedy index placement for index 1044987654
2023-01-06T21:20:19.845+05:30 [Info] Using greedy index placement for index 1045987654
2023-01-06T21:20:19.848+05:30 [Info] Using greedy index placement for index 1046987654
2023-01-06T21:20:19.850+05:30 [Info] Using greedy index placement for index 1047987654
2023-01-06T21:20:19.852+05:30 [Info] Using greedy index placement for index 1048987654
2023-01-06T21:20:19.854+05:30 [Info] Using greedy index placement for index 1049987654
2023-01-06T21:20:19.856+05:30 [Info] Using greedy index placement for index 1050987654
2023-01-06T21:20:19.858+05:30 [Info] Using greedy index placement for index 1051987654
2023-01-06T21:20:19.860+05:30 [Info] Using greedy index placement for index 1052987654
2023-01-06T21:20:19.862+05:30 [Info] Using greedy index placement for index 1053987654
2023-01-06T21:20:19.864+05:30 [Info] Using greedy index placement for index 1054987654
2023-01-06T21:20:19.866+05:30 [Info] Using greedy index placement for index 1055987654
2023-01-06T21:20:19.868+05:30 [Info] Using greedy index placement for index 1056987654
2023-01-06T21:20:19.870+05:30 [Info] Using greedy index placement for index 1057987654
2023-01-06T21:20:19.873+05:30 [Info] Using greedy index placement for index 1058987654
2023-01-06T21:20:19.875+05:30 [Info] Using greedy index placement for index 1059987654
2023-01-06T21:20:19.876+05:30 [Info] Actual variance of deferred index count across nodes is 648
--- PASS: TestGreedyPlanner (0.51s)
=== RUN   TestTenantAwarePlanner
2023/01/06 21:20:19 In TestTenantAwarePlanner()
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 1 empty node - 1 SG
2023-01-06T21:20:19.877+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001]]
2023-01-06T21:20:19.877+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.877+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9001]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 4 empty nodes - 2 SG
2023-01-06T21:20:19.880+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9004 127.0.0.1:9001] [127.0.0.1:9003 127.0.0.1:9002]]
2023-01-06T21:20:19.880+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.880+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9003 127.0.0.1:9002]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 1 node - 1 SG
2023-01-06T21:20:19.883+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.883+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001]]
2023-01-06T21:20:19.883+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001]
2023-01-06T21:20:19.883+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9001]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 2 nodes - 1 SG
2023-01-06T21:20:19.886+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.886+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.886+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001] [127.0.0.1:9002]]
2023-01-06T21:20:19.886+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001]
2023-01-06T21:20:19.886+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9001]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 4 nodes - 2 SG - Tenant Affinity(a)
2023-01-06T21:20:19.888+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.888+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.888+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9002] [127.0.0.1:9004 127.0.0.1:9003]]
2023-01-06T21:20:19.888+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001 127.0.0.1:9002]
2023-01-06T21:20:19.888+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9001 127.0.0.1:9002]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 4 nodes - 2 SG - Tenant Affinity(b)
2023-01-06T21:20:19.891+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.891+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg3 Memory 300000000 Units 1000
2023-01-06T21:20:19.891+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9002]]
2023-01-06T21:20:19.891+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.891+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9001 127.0.0.1:9004]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 4 nodes - 2 SG - Tenant Affinity(c)
2023-01-06T21:20:19.893+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.893+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.893+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.893+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg3 Memory 300000000 Units 1000
2023-01-06T21:20:19.893+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9003]]
2023-01-06T21:20:19.893+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.893+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9001 127.0.0.1:9004]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 6 nodes - 3 SG - Tenant Affinity Memory Above LWM
2023-01-06T21:20:19.896+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.896+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 700000000 Units 1000
2023-01-06T21:20:19.896+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.896+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 700000000 Units 1000
2023-01-06T21:20:19.896+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.896+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.896+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9002 127.0.0.1:9005]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 6 nodes - 3 SG - Tenant Affinity Units Above LWM
2023-01-06T21:20:19.899+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.899+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 5000
2023-01-06T21:20:19.899+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.899+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 300000000 Units 5000
2023-01-06T21:20:19.899+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.899+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.899+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9002 127.0.0.1:9005]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 6 nodes - 3 SG - Tenant Affinity New Tenant(a)
2023-01-06T21:20:19.901+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.901+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 700000000 Units 1000
2023-01-06T21:20:19.901+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.901+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 700000000 Units 1000
2023-01-06T21:20:19.901+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.901+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.901+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9006 127.0.0.1:9003]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 6 nodes - 3 SG - Tenant Affinity New Tenant(b)
2023-01-06T21:20:19.904+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.904+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 700000000 Units 1000
2023-01-06T21:20:19.904+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.904+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 700000000 Units 1000
2023-01-06T21:20:19.904+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.904+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.904+05:30 [Info] Planner::executeTenantAwarePlan Found Result [127.0.0.1:9006 127.0.0.1:9003]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 2 empty nodes - 1 SG
2023-01-06T21:20:19.907+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001] [127.0.0.1:9002]]
2023-01-06T21:20:19.907+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.907+05:30 [Info] Planner::filterPartialSubClusters Filter partial subcluster [127.0.0.1:9002]
2023-01-06T21:20:19.907+05:30 [Info] Planner::filterPartialSubClusters Filter partial subcluster [127.0.0.1:9001]
2023-01-06T21:20:19.907+05:30 [Info] Planner::executeTenantAwarePlan Found no matching candidate for tenant &{idx1 bucket1   987654 false [name1]  false false false false 0  0 [] 2 []   false 0 0 0 0 0 0 0 0}. Error - No SubCluster Below Low Usage Threshold
2023/01/06 21:20:19 Expected error Planner not able to find any node for placement - No SubCluster Below Low Usage Threshold
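The "Filter partial subcluster" lines above show single-node subclusters being excluded before placement. A minimal sketch of that grouping-and-filtering step is below; this is illustrative only, not the actual indexer code — the function names and the pairing rule (a subcluster is a pair of nodes from different server groups) are assumptions inferred from the log output.

```go
package main

import "fmt"

// Node is a simplified index node: its address and server group.
type Node struct {
	Addr string
	SG   string
}

// groupIntoSubClusters pairs each node with a later unpaired node from a
// different server group (assumed pairing rule). Nodes left without a
// partner become partial, single-node subclusters.
func groupIntoSubClusters(nodes []Node) [][]Node {
	var subs [][]Node
	used := make([]bool, len(nodes))
	for i := range nodes {
		if used[i] {
			continue
		}
		sub := []Node{nodes[i]}
		used[i] = true
		for j := i + 1; j < len(nodes); j++ {
			if !used[j] && nodes[j].SG != nodes[i].SG {
				sub = append(sub, nodes[j])
				used[j] = true
				break
			}
		}
		subs = append(subs, sub)
	}
	return subs
}

// filterPartialSubClusters drops subclusters without a full replica pair,
// mirroring the "Filter partial subcluster" log lines above.
func filterPartialSubClusters(subs [][]Node) [][]Node {
	var full [][]Node
	for _, s := range subs {
		if len(s) == 2 {
			full = append(full, s)
		}
	}
	return full
}

func main() {
	nodes := []Node{
		{"127.0.0.1:9001", "sg1"}, {"127.0.0.1:9002", "sg2"},
		{"127.0.0.1:9004", "sg2"}, {"127.0.0.1:9005", "sg3"},
		{"127.0.0.1:9003", "sg1"},
	}
	subs := groupIntoSubClusters(nodes)
	fmt.Println("subclusters:", subs)
	fmt.Println("full:", filterPartialSubClusters(subs))
}
```

With only single-node subclusters surviving the filter, nothing remains to host a new tenant, which is why the test expects the "No SubCluster Below Low Usage Threshold" error.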
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 4 nodes - 2 SG - Tenant Affinity Above Memory HWM
2023-01-06T21:20:19.908+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 1000
2023-01-06T21:20:19.908+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.908+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.908+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg3 Memory 900000000 Units 1000
2023-01-06T21:20:19.908+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9003]]
2023-01-06T21:20:19.908+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.908+05:30 [Info] Planner::executeTenantAwarePlan Found no matching candidate for tenant &{idx1 bucket1   987654 false [name1]  false false false false 0  0 [] 2 []   false 0 0 0 0 0 0 0 0}. Error - Tenant SubCluster Above High Usage Threshold
2023/01/06 21:20:19 Expected error Planner not able to find any node for placement - Tenant SubCluster Above High Usage Threshold
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 4 nodes - 2 SG - Tenant Affinity Above Units HWM
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 8000
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg3 Memory 300000000 Units 8000
2023-01-06T21:20:19.909+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9003]]
2023-01-06T21:20:19.909+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.909+05:30 [Info] Planner::executeTenantAwarePlan Found no matching candidate for tenant &{idx1 bucket1   987654 false [name1]  false false false false 0  0 [] 2 []   false 0 0 0 0 0 0 0 0}. Error - Tenant SubCluster Above High Usage Threshold
2023/01/06 21:20:19 Expected error Planner not able to find any node for placement - Tenant SubCluster Above High Usage Threshold
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 6 nodes - 3 SG - New Tenant Memory Above LWM
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 680000000 Units 1000
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 700000000 Units 1000
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 680000000 Units 1000
2023-01-06T21:20:19.909+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 700000000 Units 1000
2023-01-06T21:20:19.909+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.909+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.909+05:30 [Info] Planner::executeTenantAwarePlan Found no matching candidate for tenant &{idx1 bucket7   987654 false [name1]  false false false false 0  0 [] 2 []   false 0 0 0 0 0 0 0 0}. Error - No SubCluster Below Low Usage Threshold
2023/01/06 21:20:19 Expected error Planner not able to find any node for placement - No SubCluster Below Low Usage Threshold
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Place Single Index Instance - 6 nodes - 3 SG - New Tenant Units Above LWM
2023-01-06T21:20:19.910+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 5000
2023-01-06T21:20:19.910+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 5500
2023-01-06T21:20:19.910+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 5000
2023-01-06T21:20:19.910+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 300000000 Units 5500
2023-01-06T21:20:19.910+05:30 [Info] Planner::executeTenantAwarePlan Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.910+05:30 [Info] Planner::executeTenantAwarePlan Found Candidate Based on Tenant Affinity []
2023-01-06T21:20:19.910+05:30 [Info] Planner::executeTenantAwarePlan Found no matching candidate for tenant &{idx1 bucket7   987654 false [name1]  false false false false 0  0 [] 2 []   false 0 0 0 0 0 0 0 0}. Error - No SubCluster Below Low Usage Threshold
2023/01/06 21:20:19 Expected error Planner not able to find any node for placement - No SubCluster Below Low Usage Threshold
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, 1 empty, 1 Memory Above HWM
2023-01-06T21:20:19.911+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 1000
2023-01-06T21:20:19.911+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 900000000 Units 1000
2023-01-06T21:20:19.911+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.911+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 900000000 Units 1000
2023-01-06T21:20:19.911+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.911+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.911+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.911+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 400000000 UnitsUsage 500 
2023-01-06T21:20:19.912+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.912+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 400000000 UnitsUsage 500  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
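The rebalance cases above classify subclusters against high and low usage watermarks: subclusters above the HWM shed tenants, and only subclusters below the LWM may receive them. A minimal sketch of that classification follows; the quota values and watermark fractions here are illustrative guesses, not the planner's configured values.

```go
package main

import "fmt"

// Illustrative quotas and watermark fractions; the actual planner reads
// these from indexer settings, and the real values may differ.
const (
	memQuota   = 1000000000.0 // bytes per subcluster (assumed)
	unitsQuota = 10000.0      // units per subcluster (assumed)
	hwm        = 0.8          // high usage watermark
	lwm        = 0.5          // low usage watermark
)

// SubCluster is a replica pair with its aggregate resource usage.
type SubCluster struct {
	Nodes []string
	Mem   float64
	Units float64
}

// aboveHWM reports whether either resource exceeds the high watermark,
// making the subcluster a source of tenants to move.
func aboveHWM(s SubCluster) bool {
	return s.Mem > hwm*memQuota || s.Units > hwm*unitsQuota
}

// belowLWM reports whether both resources are under the low watermark,
// making the subcluster a candidate destination.
func belowLWM(s SubCluster) bool {
	return s.Mem < lwm*memQuota && s.Units < lwm*unitsQuota
}

func main() {
	subs := []SubCluster{
		{[]string{"127.0.0.1:9001", "127.0.0.1:9004"}, 300000000, 1000},
		{[]string{"127.0.0.1:9002", "127.0.0.1:9005"}, 900000000, 1000},
	}
	for _, s := range subs {
		fmt.Printf("%v aboveHWM=%v belowLWM=%v\n",
			s.Nodes, aboveHWM(s), belowLWM(s))
	}
}
```

Note the asymmetry: exceeding either memory or units marks a subcluster as over-HWM, while a destination must be below the LWM on both resources.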
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, 1 empty, 1 Units Above HWM
2023-01-06T21:20:19.913+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 8000
2023-01-06T21:20:19.913+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 400000000 Units 1000
2023-01-06T21:20:19.913+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 8000
2023-01-06T21:20:19.913+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 400000000 Units 1000
2023-01-06T21:20:19.913+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.913+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.913+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.913+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 3000 
2023-01-06T21:20:19.913+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.913+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, 1 empty, Both Memory/Units Above HWM
2023-01-06T21:20:19.914+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 9000
2023-01-06T21:20:19.914+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.914+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 9000
2023-01-06T21:20:19.914+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 300000000 Units 1000
2023-01-06T21:20:19.914+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.914+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.914+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.914+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.914+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000 
2023-01-06T21:20:19.914+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.914+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.914+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, Multiple tenants to move, single source, multiple destination
2023-01-06T21:20:19.915+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 9000
2023-01-06T21:20:19.915+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 200000000 Units 1000
2023-01-06T21:20:19.915+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 9000
2023-01-06T21:20:19.915+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 200000000 Units 1000
2023-01-06T21:20:19.915+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.915+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.915+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.915+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 4000 
2023-01-06T21:20:19.915+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 400000000 UnitsUsage 2000 
2023-01-06T21:20:19.915+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.915+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 4000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.915+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 400000000 UnitsUsage 2000  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, Multiple tenants to move, no nodes below LWM
2023-01-06T21:20:19.916+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 300000000 Units 5000
2023-01-06T21:20:19.916+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 900000000 Units 3000
2023-01-06T21:20:19.916+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 700000000 Units 3000
2023-01-06T21:20:19.916+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 300000000 Units 5000
2023-01-06T21:20:19.916+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 900000000 Units 3000
2023-01-06T21:20:19.916+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 700000000 Units 3000
2023-01-06T21:20:19.916+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.916+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.916+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.916+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 300000000 UnitsUsage 500 
2023-01-06T21:20:19.916+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket1 MemoryUsage 200000000 UnitsUsage 500 
2023-01-06T21:20:19.916+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Multiple tenants to move, multiple source, multiple destination(non-uniform memory/units usage)
2023-01-06T21:20:19.917+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.918+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.918+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.918+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.918+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.918+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 1200 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 800 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 1500 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 500 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 500 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 700 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 1000 
2023-01-06T21:20:19.918+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 1200  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 500  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 800  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 1500  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 700  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 500  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.918+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 1000  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
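In the multi-source, multi-destination case above, moved tenants alternate between the two below-LWM subclusters. One plausible reading is a greedy heuristic: take tenants largest-first and send each to the destination with the most headroom. The sketch below is that heuristic under stated assumptions — `placeTenants`, the memory-only ordering, and the `destMemCap` ceiling are all invented for illustration and are not the planner's actual algorithm.

```go
package main

import (
	"fmt"
	"sort"
)

// Tenant is a bucket's resource footprint on its source subcluster.
type Tenant struct {
	ID    string
	Mem   uint64
	Units uint64
}

// Dest is a candidate destination subcluster below the low watermark.
type Dest struct {
	Name string
	Mem  uint64 // memory already placed on this destination
}

// destMemCap is an illustrative per-destination memory ceiling; the real
// planner derives limits from configured quotas and watermarks.
const destMemCap = 500000000

// placeTenants assigns each tenant (largest memory first) to the
// destination with the most memory headroom that can still absorb it.
func placeTenants(tenants []Tenant, dests []Dest) map[string]string {
	sort.Slice(tenants, func(i, j int) bool { return tenants[i].Mem > tenants[j].Mem })
	placement := make(map[string]string)
	for _, t := range tenants {
		best := -1
		for i := range dests {
			if dests[i].Mem+t.Mem > destMemCap {
				continue // would push this destination over its ceiling
			}
			if best == -1 || dests[i].Mem < dests[best].Mem {
				best = i
			}
		}
		if best == -1 {
			continue // no destination can absorb this tenant
		}
		dests[best].Mem += t.Mem
		placement[t.ID] = dests[best].Name
	}
	return placement
}

func main() {
	tenants := []Tenant{
		{"bucket2", 70000000, 1200}, {"bucket3", 80000000, 800},
		{"bucket1", 30000000, 1500}, {"bucket4", 120000000, 500},
	}
	dests := []Dest{{Name: "sub1"}, {Name: "sub2"}}
	fmt.Println(placeTenants(tenants, dests))
}
```

Spreading the largest tenants across destinations first keeps both receivers under their ceilings, which matches the alternating "can be placed on" pattern in the log.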
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Multiple tenants to move, multiple source, multiple destination(non-uniform memory/units usage)
2023-01-06T21:20:19.919+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.919+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.919+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.919+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.919+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.919+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.919+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.919+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.919+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.919+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500 
2023-01-06T21:20:19.920+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.920+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, Single Large Tenant, Nothing to move
2023-01-06T21:20:19.921+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 1000
2023-01-06T21:20:19.921+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 900000000 Units 2000
2023-01-06T21:20:19.921+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 1000
2023-01-06T21:20:19.921+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 900000000 Units 2000
2023-01-06T21:20:19.921+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.921+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.921+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.921+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 200000000 UnitsUsage 1000 
2023-01-06T21:20:19.921+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.921+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 200000000 UnitsUsage 1000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Multiple tenants to move, multiple source, multiple destination(zero usage tenants)
2023-01-06T21:20:19.923+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.923+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.923+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.923+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.923+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.923+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket15 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket16 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.923+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket15 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.923+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket16 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 SG, 1 Partial Subcluster
2023-01-06T21:20:19.924+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 100000000 Units 1000
2023-01-06T21:20:19.924+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 900000000 Units 3000
2023-01-06T21:20:19.924+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.924+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 900000000 Units 3000
2023-01-06T21:20:19.924+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003]]
2023-01-06T21:20:19.924+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9003] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.924+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.924+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.924+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 300000000 UnitsUsage 500 
2023-01-06T21:20:19.924+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket1 MemoryUsage 200000000 UnitsUsage 500 
2023-01-06T21:20:19.924+05:30 [Info] Planner::filterPartialSubClusters Filter partial subcluster [127.0.0.1:9003]
2023-01-06T21:20:19.924+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.924+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 300000000 UnitsUsage 500  can be placed on [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.924+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket1 MemoryUsage 200000000 UnitsUsage 500  can be placed on [127.0.0.1:9001 127.0.0.1:9004]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Replica Repair - 4 SG, Missing Replicas for multiple tenants in SG
2023-01-06T21:20:19.926+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 100000000 Units 1000
2023-01-06T21:20:19.926+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.926+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.926+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.926+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.926+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.926+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.926+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket2,,,idx2,1) on 127.0.0.1:9004
2023-01-06T21:20:19.926+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx2,0) on 127.0.0.1:9001
2023-01-06T21:20:19.926+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket8,,,idx1,1) on 127.0.0.1:9005
2023-01-06T21:20:19.926+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket8,,,idx1,1) on 127.0.0.1:9005
2023-01-06T21:20:19.926+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Replica Repair - 4 SG, Missing Replicas, Buddy Node Failed over
2023-01-06T21:20:19.927+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 100000000 Units 1000
2023-01-06T21:20:19.927+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.927+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.927+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.927+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.927+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9003]]
2023-01-06T21:20:19.927+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket1,,,idx1,1) on 127.0.0.1:9004
2023-01-06T21:20:19.927+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket2,,,idx2,1) on 127.0.0.1:9004
2023-01-06T21:20:19.927+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket3,,,idx1,1) on 127.0.0.1:9004
2023-01-06T21:20:19.927+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket8,,,idx1,1) on 127.0.0.1:9005
2023-01-06T21:20:19.927+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket8,,,idx1,1) on 127.0.0.1:9005
2023-01-06T21:20:19.927+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9003] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.927+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Replica Repair - 4 SG, Missing Replicas, Buddy Node Failed over, No replacement
2023-01-06T21:20:19.928+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 100000000 Units 1000
2023-01-06T21:20:19.928+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.928+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.928+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.928+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.928+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.928+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9001] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.928+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Replica Repair - 4 SG, Missing Replicas, one replica missing with pendingDelete true 
2023-01-06T21:20:19.929+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 100000000 Units 1000
2023-01-06T21:20:19.929+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.929+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.929+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.929+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.929+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.929+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.929+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket2,,,idx2,1) on 127.0.0.1:9004
2023-01-06T21:20:19.929+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx2,0) on 127.0.0.1:9001
2023-01-06T21:20:19.929+05:30 [Info] Planner::findMissingReplicaForIndexerNode Skipping Replica Repair for 88883:81813:0. PendingDelete true
2023-01-06T21:20:19.929+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket8,,,idx2,1) on 127.0.0.1:9005
2023-01-06T21:20:19.929+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Replica Repair - 2 SG, Missing Replicas with Nodes over HWM
2023-01-06T21:20:19.930+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 7000
2023-01-06T21:20:19.931+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 500
2023-01-06T21:20:19.931+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 7000
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.931+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket9,,,idx2,1) on 127.0.0.1:9005
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 800 
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 700 
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 500 
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 1000 
2023-01-06T21:20:19.931+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.931+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 800  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.931+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 700  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.931+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 500  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.931+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 1000  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 4 SG, Swap 1 node each from 2 SG with 2 new nodes
2023-01-06T21:20:19.932+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9002 127.0.0.1:9004]
2023-01-06T21:20:19.932+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9002 127.0.0.1:9004]
2023-01-06T21:20:19.932+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9005 127.0.0.1:9001]
2023-01-06T21:20:19.932+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9003 127.0.0.1:9006 127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.932+05:30 [Info] Planner::moveTenantsFromDeletedNodes selected newNodes for swap [127.0.0.1:9003 127.0.0.1:9008 127.0.0.1:9006 127.0.0.1:9007]
2023-01-06T21:20:19.932+05:30 [Info] Moving index 7777:7171:0 from source 127.0.0.1:9002 to dest 127.0.0.1:9003
2023-01-06T21:20:19.932+05:30 [Info] Moving index 8888:8181:0 from source 127.0.0.1:9002 to dest 127.0.0.1:9003
2023-01-06T21:20:19.932+05:30 [Info] Moving index 9999:9191:0 from source 127.0.0.1:9002 to dest 127.0.0.1:9003
2023-01-06T21:20:19.932+05:30 [Info] Moving index 101010:101101:0 from source 127.0.0.1:9002 to dest 127.0.0.1:9003
2023-01-06T21:20:19.932+05:30 [Info] Moving index 111111:112112:0 from source 127.0.0.1:9002 to dest 127.0.0.1:9003
2023-01-06T21:20:19.932+05:30 [Info] Moving index 121212:121121:0 from source 127.0.0.1:9002 to dest 127.0.0.1:9003
2023-01-06T21:20:19.932+05:30 [Info] Moving index 1111:11212:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.932+05:30 [Info] Moving index 2222:22121:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.932+05:30 [Info] Moving index 3333:33131:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.932+05:30 [Info] Moving index 4444:44141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.932+05:30 [Info] Moving index 5555:55151:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.932+05:30 [Info] Moving index 6666:66161:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.932+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9002 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.932+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.932+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.932+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 1000000000 Units 7000
2023-01-06T21:20:19.932+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.932+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg3 Memory 1100000000 Units 8100
2023-01-06T21:20:19.932+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9008] [127.0.0.1:9003 127.0.0.1:9005] [127.0.0.1:9006] [127.0.0.1:9007]]
2023-01-06T21:20:19.932+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9006] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.932+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9007] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.932+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 4 SG, Swap 1 node each from 2 SG with 2 new nodes(different SG)
2023-01-06T21:20:19.934+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9004 127.0.0.1:9005]
2023-01-06T21:20:19.934+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9004 127.0.0.1:9005]
2023-01-06T21:20:19.934+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9001 127.0.0.1:9002]
2023-01-06T21:20:19.934+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.934+05:30 [Info] Planner::moveTenantsFromDeletedNodes selected newNodes for swap [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.934+05:30 [Info] Moving index 1111:11212:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.934+05:30 [Info] Moving index 2222:22121:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.934+05:30 [Info] Moving index 3333:33131:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.934+05:30 [Info] Moving index 4444:44141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.934+05:30 [Info] Moving index 5555:55151:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.934+05:30 [Info] Moving index 6666:66161:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.934+05:30 [Info] Moving index 7777:17171:0 from source 127.0.0.1:9005 to dest 127.0.0.1:9003
2023-01-06T21:20:19.934+05:30 [Info] Moving index 8888:18181:0 from source 127.0.0.1:9005 to dest 127.0.0.1:9003
2023-01-06T21:20:19.934+05:30 [Info] Moving index 9999:19191:0 from source 127.0.0.1:9005 to dest 127.0.0.1:9003
2023-01-06T21:20:19.934+05:30 [Info] Moving index 101010:1101101:0 from source 127.0.0.1:9005 to dest 127.0.0.1:9003
2023-01-06T21:20:19.934+05:30 [Info] Moving index 111111:1112112:0 from source 127.0.0.1:9005 to dest 127.0.0.1:9003
2023-01-06T21:20:19.934+05:30 [Info] Moving index 121212:1121121:0 from source 127.0.0.1:9005 to dest 127.0.0.1:9003
2023-01-06T21:20:19.934+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.934+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9005 SG sg3 Memory 0 Units 0
2023-01-06T21:20:19.934+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.934+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.934+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 1000000000 Units 7000
2023-01-06T21:20:19.934+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 900000000 Units 8000
2023-01-06T21:20:19.934+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.934+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.934+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9006] [127.0.0.1:9002 127.0.0.1:9003] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.934+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 4 SG, Swap 1 SG with 2 new nodes
2023-01-06T21:20:19.935+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.935+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.935+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.935+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.935+05:30 [Info] Planner::moveTenantsFromDeletedNodes selected newNodes for swap [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.935+05:30 [Info] Moving index 1111:1212:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9003
2023-01-06T21:20:19.935+05:30 [Info] Moving index 2222:2121:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9003
2023-01-06T21:20:19.935+05:30 [Info] Moving index 3333:3131:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9003
2023-01-06T21:20:19.935+05:30 [Info] Moving index 4444:4141:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9003
2023-01-06T21:20:19.935+05:30 [Info] Moving index 5555:5151:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9003
2023-01-06T21:20:19.935+05:30 [Info] Moving index 6666:6161:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9003
2023-01-06T21:20:19.935+05:30 [Info] Moving index 1111:11212:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.935+05:30 [Info] Moving index 2222:22121:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.935+05:30 [Info] Moving index 3333:33131:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.935+05:30 [Info] Moving index 4444:44141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.935+05:30 [Info] Moving index 5555:55151:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.935+05:30 [Info] Moving index 6666:66161:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.935+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9001 SG sg1 Memory 0 Units 0
2023-01-06T21:20:19.935+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.935+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.935+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.935+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.935+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 900000000 Units 8000
2023-01-06T21:20:19.935+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.935+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.935+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.935+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 4 SG, Swap 1 node with 2 new nodes
2023-01-06T21:20:19.936+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9004]
2023-01-06T21:20:19.936+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9004]
2023-01-06T21:20:19.936+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9001]
2023-01-06T21:20:19.936+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9003 127.0.0.1:9007 127.0.0.1:9008 127.0.0.1:9006]
2023-01-06T21:20:19.936+05:30 [Info] Planner::moveTenantsFromDeletedNodes selected newNodes for swap [127.0.0.1:9006 127.0.0.1:9003 127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.936+05:30 [Info] Moving index 1111:11212:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.936+05:30 [Info] Moving index 2222:22121:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.936+05:30 [Info] Moving index 3333:33131:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.936+05:30 [Info] Moving index 4444:44141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.937+05:30 [Info] Moving index 5555:55151:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.937+05:30 [Info] Moving index 6666:66161:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9006
2023-01-06T21:20:19.937+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.937+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.937+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.937+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.937+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 900000000 Units 8000
2023-01-06T21:20:19.937+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9006] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003] [127.0.0.1:9008] [127.0.0.1:9007]]
2023-01-06T21:20:19.937+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9003] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.937+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9008] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.937+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9007] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.937+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 4 SG, Swap 1 empty node with 2 new nodes
2023-01-06T21:20:19.938+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9004]
2023-01-06T21:20:19.938+05:30 [Info] Planner::moveTenantsFromDeletedNodes No non-empty deleted nodes found.
2023-01-06T21:20:19.938+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.938+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.938+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.938+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.938+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.938+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.938+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9006] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9007 127.0.0.1:9008] [127.0.0.1:9003]]
2023-01-06T21:20:19.938+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket1,,,idx1,1) on 127.0.0.1:9006
2023-01-06T21:20:19.938+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket2,,,idx2,1) on 127.0.0.1:9006
2023-01-06T21:20:19.938+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket3,,,idx1,1) on 127.0.0.1:9006
2023-01-06T21:20:19.938+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx2,1) on 127.0.0.1:9006
2023-01-06T21:20:19.938+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket5,,,idx1,1) on 127.0.0.1:9006
2023-01-06T21:20:19.938+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket6,,,idx2,1) on 127.0.0.1:9006
2023-01-06T21:20:19.938+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9003] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.938+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 1 SG, Swap 1 node - Failed swap rebalance
2023-01-06T21:20:19.939+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9004]
2023-01-06T21:20:19.939+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9004]
2023-01-06T21:20:19.939+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9001]
2023-01-06T21:20:19.939+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.939+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 1 is more than num new/empty nodes 0
2023-01-06T21:20:19.939+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.939+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.939+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg3 Memory 200000000 Units 100
2023-01-06T21:20:19.939+05:30 [Info] Planner::moveTenantsFromDeletedNodes Considering 127.0.0.1:9008 as replacement node for deleted node 127.0.0.1:9004.
2023-01-06T21:20:19.939+05:30 [Info] Moving index 1111:11212:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.939+05:30 [Info] Moving index 2222:22121:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.939+05:30 [Info] Moving index 3333:33131:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.939+05:30 [Info] Moving index 4444:44141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.939+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 600000000 Units 4000
2023-01-06T21:20:19.939+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.939+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg3 Memory 500000000 Units 4100
2023-01-06T21:20:19.939+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9008]]
2023-01-06T21:20:19.939+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Swap Rebalance - 1 SG, Swap 2 node - server group mismatch
2023-01-06T21:20:19.940+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9002]
2023-01-06T21:20:19.940+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9002]
2023-01-06T21:20:19.940+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9002 127.0.0.1:9001]
2023-01-06T21:20:19.940+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9003 127.0.0.1:9004]
2023/01/06 21:20:19 Expected error Planner - Unable to satisfy server group constraint while replacing removed nodes with new nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 subcluster, Enough Capacity
2023-01-06T21:20:19.941+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.941+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 350000000 UnitsUsage 2000 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket6 MemoryUsage 250000000 UnitsUsage 2000 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 500
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.942+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM [[127.0.0.1:9003 127.0.0.1:9006] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 350000000 UnitsUsage 2000  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket6 MemoryUsage 250000000 UnitsUsage 2000  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.942+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9001 SG sg1 Memory 0 Units 0
2023-01-06T21:20:19.942+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 600000000 Units 4500
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 600000000 Units 4500
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 600000000 Units 4100
2023-01-06T21:20:19.942+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 600000000 Units 4100
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800 
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500 
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700 
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket15 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000 
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket16 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.942+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM []
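The placement decisions in the test case above follow a greedy pattern: tenants on the outgoing nodes are ordered by usage (largest first) and each is assigned to the first target subcluster below the low watermark that can absorb it without exceeding quota. A minimal sketch of that idea — the type names, watermark values, and quota here are illustrative assumptions, not the actual planner code:

```go
package main

import (
	"fmt"
	"sort"
)

// Tenant and SubCluster are hypothetical, simplified stand-ins for the
// planner's internal structures seen in the log lines above.
type Tenant struct {
	ID     string
	Memory uint64
	Units  uint64
}

type SubCluster struct {
	Nodes  []string
	Memory uint64 // current aggregate usage
	Units  uint64
}

// memQuota is an assumed per-subcluster memory capacity.
const memQuota = uint64(1000000000)

// placeTenants greedily assigns tenants (largest memory first) to the
// first target subcluster that stays within quota after the move,
// mirroring the "can be placed on" / "Unable to place" log lines.
func placeTenants(tenants []Tenant, targets []*SubCluster) (placed, failed []string) {
	sort.Slice(tenants, func(i, j int) bool {
		return tenants[i].Memory > tenants[j].Memory
	})
	for _, t := range tenants {
		assigned := false
		for _, sc := range targets {
			if sc.Memory+t.Memory <= memQuota {
				sc.Memory += t.Memory
				sc.Units += t.Units
				placed = append(placed, t.ID)
				assigned = true
				break
			}
		}
		if !assigned {
			failed = append(failed, t.ID)
		}
	}
	return placed, failed
}

func main() {
	tenants := []Tenant{
		{"bucket5", 350000000, 2000},
		{"bucket6", 250000000, 2000},
		{"bucket1", 30000000, 500},
	}
	targets := []*SubCluster{
		{Nodes: []string{"127.0.0.1:9003", "127.0.0.1:9006"}, Memory: 100000000},
		{Nodes: []string{"127.0.0.1:9007", "127.0.0.1:9008"}, Memory: 200000000},
	}
	placed, failed := placeTenants(tenants, targets)
	fmt.Println(placed, failed)
}
```

When every tenant fits, the run reports "Found SubClusters Below LWM" with the updated usages; when one does not, the "Unable to place ... on any target" path in the next test case is taken.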
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 subcluster, Not Enough Capacity
2023-01-06T21:20:19.943+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 300000000 UnitsUsage 2000 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket6 MemoryUsage 300000000 UnitsUsage 2000 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 500
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.943+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.943+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM [[127.0.0.1:9003 127.0.0.1:9006] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.944+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 300000000 UnitsUsage 2000  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.944+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket6 MemoryUsage 300000000 UnitsUsage 2000  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.944+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500  on any target
2023-01-06T21:20:19.944+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.944+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800  on any target
2023-01-06T21:20:19.944+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500  on any target
2023-01-06T21:20:19.944+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.944+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.944+05:30 [Error] Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 Expected error Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 node, Pair node not deleted(Failed swap rebalance of 1 node)
2023-01-06T21:20:19.945+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001]
2023-01-06T21:20:19.945+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001]
2023-01-06T21:20:19.945+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004]
2023-01-06T21:20:19.945+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.945+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 1 is more than num new/empty nodes 0
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 500
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.945+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9009 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.946+05:30 [Info] Planner::moveTenantsFromDeletedNodes Considering 127.0.0.1:9009 as replacement node for deleted node 127.0.0.1:9001.
2023-01-06T21:20:19.946+05:30 [Info] Moving index 1111:1212:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9009
2023-01-06T21:20:19.946+05:30 [Info] Moving index 2222:2121:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9009
2023-01-06T21:20:19.946+05:30 [Info] Moving index 3333:3131:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9009
2023-01-06T21:20:19.946+05:30 [Info] Moving index 4444:4141:0 from source 127.0.0.1:9001 to dest 127.0.0.1:9009
2023-01-06T21:20:19.946+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9001 SG sg1 Memory 600000000 Units 4000
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 500
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.946+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9009 SG sg1 Memory 500000000 Units 4100
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006] [127.0.0.1:9004 127.0.0.1:9009] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9004 127.0.0.1:9009]]
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket15 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket16 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9004 127.0.0.1:9009]
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket17 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket14 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.946+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9003 127.0.0.1:9006] [127.0.0.1:9007 127.0.0.1:9008]]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket10 MemoryUsage 200000000 UnitsUsage 800  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket7 MemoryUsage 25000000 UnitsUsage 500  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket9 MemoryUsage 100000000 UnitsUsage 700  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket15 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket8 MemoryUsage 75000000 UnitsUsage 1000  can be placed on [127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket17 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket16 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.946+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket14 MemoryUsage 0 UnitsUsage 0  can be placed on [127.0.0.1:9003 127.0.0.1:9006]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 node, Pair node already deleted
2023-01-06T21:20:19.947+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9002]
2023-01-06T21:20:19.947+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9002]
2023-01-06T21:20:19.947+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes []
2023-01-06T21:20:19.947+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.947+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 1 is more than num new/empty nodes 0
2023-01-06T21:20:19.947+05:30 [Info] Planner::moveTenantsFromDeletedNodes Pair node not found for deleted node 127.0.0.1:9002.
2023-01-06T21:20:19.947+05:30 [Error] Planner - Pair node for 127.0.0.1:9002 not found. Provide additional node as replacement.
2023/01/06 21:20:19 Expected error Planner - Pair node for 127.0.0.1:9002 not found. Provide additional node as replacement.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 subcluster, empty nodes
2023-01-06T21:20:19.949+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9005 127.0.0.1:9007 127.0.0.1:9008]
2023-01-06T21:20:19.949+05:30 [Info] Planner::moveTenantsFromDeletedNodes No non-empty deleted nodes found.
2023-01-06T21:20:19.949+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9005 SG sg3 Memory 0 Units 0
2023-01-06T21:20:19.949+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.949+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.949+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 800000000 Units 8000
2023-01-06T21:20:19.949+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 800000000 Units 7000
2023-01-06T21:20:19.949+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 500
2023-01-06T21:20:19.949+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 800000000 Units 8000
2023-01-06T21:20:19.949+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.949+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.949+05:30 [Info] Planner::repairMissingReplica Found SubCluster [127.0.0.1:9002] with len 1. Skipping replica repair attempt.
2023-01-06T21:20:19.949+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 subcluster, Deleted Nodes more than added nodes
2023-01-06T21:20:19.950+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.950+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.950+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.950+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9008]
2023-01-06T21:20:19.950+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 1
2023-01-06T21:20:19.950+05:30 [Error] Planner - Number of non-empty deleted nodes cannot be greater than number of added nodes.
2023/01/06 21:20:19 Expected error Planner - Number of non-empty deleted nodes cannot be greater than number of added nodes.
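The failure above exercises a simple precondition: when some new nodes are supplied but fewer than the number of non-empty nodes being removed, the swap cannot be completed and the planner aborts rather than attempting a partial replacement. A minimal sketch of that guard, with hypothetical names (the real planner, as the earlier cases show, can instead drain tenants onto subclusters below the LWM when no new nodes are given at all):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotEnoughNodes mirrors the error text emitted in the log above.
var errNotEnoughNodes = errors.New(
	"Planner - Number of non-empty deleted nodes cannot be greater than number of added nodes.")

// checkSwapCapacity is a hypothetical sketch of the guard: a swap
// rebalance with replacement nodes needs at least one new/empty node
// per non-empty deleted node.
func checkSwapCapacity(nonEmptyDeleted, newNodes []string) error {
	if len(newNodes) > 0 && len(nonEmptyDeleted) > len(newNodes) {
		return errNotEnoughNodes
	}
	return nil
}

func main() {
	err := checkSwapCapacity(
		[]string{"127.0.0.1:9001", "127.0.0.1:9004"}, // 2 deleted
		[]string{"127.0.0.1:9008"},                   // only 1 added
	)
	fmt.Println(err)
}
```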
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 node, Add 1 node in, server group mismatch
2023-01-06T21:20:19.951+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9006]
2023-01-06T21:20:19.951+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9006]
2023-01-06T21:20:19.951+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9003]
2023-01-06T21:20:19.951+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9007]
2023/01/06 21:20:19 Expected error Planner - Unable to satisfy server group constraint while replacing removed nodes with new nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 node, Pair node exists
2023-01-06T21:20:19.952+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9004]
2023-01-06T21:20:19.952+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9004]
2023-01-06T21:20:19.952+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9001]
2023-01-06T21:20:19.952+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.952+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 1 is more than num new/empty nodes 0
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 500
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9007 SG sg1 Memory 200000000 Units 100
2023-01-06T21:20:19.952+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 200000000 Units 100
2023-01-06T21:20:19.952+05:30 [Info] Planner::moveTenantsFromDeletedNodes No replacement node found for deleted node 127.0.0.1:9004.
2023-01-06T21:20:19.952+05:30 [Error] Planner - Removing node 127.0.0.1:9004 will result in losing indexes. Provide additional node as replacement.
2023/01/06 21:20:19 Expected error Planner - Removing node 127.0.0.1:9004 will result in losing indexes. Provide additional node as replacement.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 subcluster, No nodes under LWM
2023-01-06T21:20:19.953+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket6 MemoryUsage 300000000 UnitsUsage 2000 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket5 MemoryUsage 300000000 UnitsUsage 2000 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 120000000 UnitsUsage 1500 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket3 MemoryUsage 80000000 UnitsUsage 1200 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 70000000 UnitsUsage 800 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 30000000 UnitsUsage 500 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket17 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket14 MemoryUsage 0 UnitsUsage 0 
2023-01-06T21:20:19.953+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 8000
2023-01-06T21:20:19.953+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.953+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 8000
2023-01-06T21:20:19.953+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM []
2023-01-06T21:20:19.953+05:30 [Error] Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 Expected error Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 4 SG, Move out 1 subcluster, Not Enough Capacity, Partial Subcluster
2023-01-06T21:20:19.953+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.953+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket1 MemoryUsage 100000000 UnitsUsage 500 
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 100000000 UnitsUsage 500 
2023-01-06T21:20:19.954+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9001 SG sg1 Memory 200000000 Units 1000
2023-01-06T21:20:19.954+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 1000000000 Units 7000
2023-01-06T21:20:19.954+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9004 SG sg2 Memory 200000000 Units 1000
2023-01-06T21:20:19.954+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 1000000000 Units 7000
2023-01-06T21:20:19.954+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 500
2023-01-06T21:20:19.954+05:30 [Info] Planner::filterPartialSubClusters Filter partial subcluster [127.0.0.1:9006]
2023-01-06T21:20:19.954+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM []
2023-01-06T21:20:19.954+05:30 [Error] Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 Expected error Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 SG, Move out 1 non-empty and 1 empty  node
2023-01-06T21:20:19.955+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9007 127.0.0.1:9004]
2023-01-06T21:20:19.955+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9004]
2023-01-06T21:20:19.955+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9001]
2023-01-06T21:20:19.955+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes [127.0.0.1:9008]
2023-01-06T21:20:19.955+05:30 [Info] Planner::moveTenantsFromDeletedNodes selected newNodes for swap [127.0.0.1:9008]
2023-01-06T21:20:19.955+05:30 [Info] Moving index 1111:11212:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 2222:22121:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 3333:33131:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 4444:44141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 5555:55151:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 6666:66161:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 171717:1171171:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Moving index 141414:1141141:0 from source 127.0.0.1:9004 to dest 127.0.0.1:9008
2023-01-06T21:20:19.955+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9007 SG sg1 Memory 0 Units 0
2023-01-06T21:20:19.955+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9004 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.955+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9008 SG sg2 Memory 800000000 Units 8000
2023-01-06T21:20:19.955+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 800000000 Units 8000
2023-01-06T21:20:19.955+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9008 127.0.0.1:9001]]
2023-01-06T21:20:19.955+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 Subclusters, 1 empty, 1 Above HWM
2023-01-06T21:20:19.956+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 9000
2023-01-06T21:20:19.956+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 9000
2023-01-06T21:20:19.956+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.956+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.956+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.956+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.956+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000 
2023-01-06T21:20:19.956+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.956+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.956+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000  on any target
2023-01-06T21:20:19.956+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 600000000 Units 6000
2023-01-06T21:20:19.956+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 3000
2023-01-06T21:20:19.956+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 600000000 Units 6000
2023-01-06T21:20:19.956+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 300000000 Units 3000
2023-01-06T21:20:19.956+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.957+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9003 127.0.0.1:9006]]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 Subclusters, 1 below LWM, 1 above HWM
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 9000
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 9000
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 300000000 Units 1000
2023-01-06T21:20:19.958+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.958+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.958+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.958+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.958+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000 
2023-01-06T21:20:19.958+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.958+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.958+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000  on any target
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 600000000 Units 6000
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 600000000 Units 4000
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 600000000 Units 6000
2023-01-06T21:20:19.958+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 600000000 Units 4000
2023-01-06T21:20:19.958+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.958+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 3 Subclusters, 1 empty, 1 Above HWM, 1 below LWM
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 9000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 300000000 Units 1000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 9000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 300000000 Units 1000
2023-01-06T21:20:19.959+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.959+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.959+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.959+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.959+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000 
2023-01-06T21:20:19.959+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.959+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.959+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 500000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 4000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 500000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 300000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.959+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.959+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9003 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 500000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 4000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 500000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9006 SG sg3 Memory 300000000 Units 3000
2023-01-06T21:20:19.959+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM []
2023-01-06T21:20:19.959+05:30 [Error] Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 Subclusters, 1 above LWM/below HWM, 1 empty
2023-01-06T21:20:19.960+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 700000000 Units 5000
2023-01-06T21:20:19.960+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 700000000 Units 5000
2023-01-06T21:20:19.960+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.960+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 Subclusters, Both above LWM/below HWM
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 700000000 Units 5000
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 650000000 Units 4500
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 700000000 Units 5000
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 650000000 Units 4500
2023-01-06T21:20:19.961+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.961+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 700000000 Units 5000
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 650000000 Units 4500
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 700000000 Units 5000
2023-01-06T21:20:19.961+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 650000000 Units 4500
2023-01-06T21:20:19.961+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.961+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 Subclusters, 1 empty, 1 Above HWM (partial replica repair)
2023-01-06T21:20:19.962+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 900000000 Units 9000
2023-01-06T21:20:19.962+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 550000000 Units 5500
2023-01-06T21:20:19.962+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.962+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx2,0) on 127.0.0.1:9004
2023-01-06T21:20:19.962+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket5,,,idx2,0) on 127.0.0.1:9004
2023-01-06T21:20:19.962+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.962+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.962+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.962+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000 
2023-01-06T21:20:19.962+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.962+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.962+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000  on any target
2023-01-06T21:20:19.962+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 600000000 Units 6000
2023-01-06T21:20:19.962+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 3000
2023-01-06T21:20:19.962+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 600000000 Units 6000
2023-01-06T21:20:19.962+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 300000000 Units 3000
2023-01-06T21:20:19.962+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.962+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9003 127.0.0.1:9006]]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 2 Subclusters, 1 empty, 1 Above HWM(full replica repair)
2023-01-06T21:20:19.963+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 900000000 Units 9000
2023-01-06T21:20:19.963+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9004 127.0.0.1:9001] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.963+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket2,,,idx1,1) on 127.0.0.1:9001
2023-01-06T21:20:19.963+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket2,,,idx2,1) on 127.0.0.1:9001
2023-01-06T21:20:19.963+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx1,1) on 127.0.0.1:9001
2023-01-06T21:20:19.963+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx2,1) on 127.0.0.1:9001
2023-01-06T21:20:19.963+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket5,,,idx1,1) on 127.0.0.1:9001
2023-01-06T21:20:19.963+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket5,,,idx2,1) on 127.0.0.1:9001
2023-01-06T21:20:19.963+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9004 127.0.0.1:9001]]
2023-01-06T21:20:19.963+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9004 127.0.0.1:9001]
2023-01-06T21:20:19.963+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.963+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000 
2023-01-06T21:20:19.963+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.963+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.963+05:30 [Info] Planner::moveTenantsToLowUsageSubCluster Unable to place TenantUsage - SourceId 127.0.0.1:9004 TenantId bucket2 MemoryUsage 200000000 UnitsUsage 2000  on any target
2023-01-06T21:20:19.963+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 600000000 Units 6000
2023-01-06T21:20:19.963+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 300000000 Units 3000
2023-01-06T21:20:19.963+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 600000000 Units 6000
2023-01-06T21:20:19.963+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 300000000 Units 3000
2023-01-06T21:20:19.963+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.963+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9003 127.0.0.1:9006]]
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 ScaleIn- 2 Subclusters, Both below LWM, Positive Case
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 200000000 Units 1000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 2000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 200000000 Units 1000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 2000
2023-01-06T21:20:19.964+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.964+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 200000000 Units 1000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 2000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 200000000 Units 1000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 2000
2023-01-06T21:20:19.964+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.964+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.964+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9003 127.0.0.1:9006]
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9003 TenantId bucket3 MemoryUsage 100000000 UnitsUsage 2000 
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 200000000 Units 1000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9003 SG sg1 Memory 100000000 Units 2000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 200000000 Units 1000
2023-01-06T21:20:19.964+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9006 SG sg3 Memory 100000000 Units 2000
2023-01-06T21:20:19.964+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005]]
2023-01-06T21:20:19.964+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9003 TenantId bucket3 MemoryUsage 100000000 UnitsUsage 2000  can be placed on [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.964+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9003 SG sg1 Memory 0 Units 0
2023-01-06T21:20:19.964+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9006 SG sg3 Memory 0 Units 0
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 ScaleIn- 2 Subclusters, One below LWM/ 1 Empty
2023-01-06T21:20:19.965+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 200000000 Units 1000
2023-01-06T21:20:19.965+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 200000000 Units 1000
2023-01-06T21:20:19.965+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.965+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 ScaleIn- 3 Subclusters, One above HWM, one below LWM and 1 Empty. No ScaleIn.
2023-01-06T21:20:19.965+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 800000000 Units 8000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 800000000 Units 8000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.966+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.966+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.966+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.966+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.966+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 100000000 UnitsUsage 1000 
2023-01-06T21:20:19.966+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.966+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.966+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 100000000 UnitsUsage 1000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.966+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.966+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9005 127.0.0.1:9002]
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 100000000 UnitsUsage 1000 
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.966+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 400000000 Units 4000
2023-01-06T21:20:19.966+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM []
2023-01-06T21:20:19.966+05:30 [Error] Planner - Not enough capacity to place indexes of deleted nodes.
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 ScaleIn- 3 Subclusters, One above HWM, one below LWM and 1 Empty. ScaleIn. 
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 800000000 Units 8000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 800000000 Units 8000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.967+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.967+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.967+05:30 [Info] Planner::executeTenantAwareRebal TenantsToBeMoved from source [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.967+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000 
2023-01-06T21:20:19.967+05:30 [Info] Planner::executeTenantAwareRebal TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 100000000 UnitsUsage 1000 
2023-01-06T21:20:19.967+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters Below LWM [[127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9006 127.0.0.1:9003]]
2023-01-06T21:20:19.967+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket4 MemoryUsage 300000000 UnitsUsage 3000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.967+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9001 TenantId bucket2 MemoryUsage 100000000 UnitsUsage 1000  can be placed on [127.0.0.1:9006 127.0.0.1:9003]
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.967+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9002 127.0.0.1:9005] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.967+05:30 [Info] Planner::findPlacementForDeletedNodes Deleted Nodes [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes nonEmptyDeletedNodes [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes pairForDeletedNodes [127.0.0.1:9005 127.0.0.1:9002]
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes newNodes []
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes Num deleted nodes 2 is more than num new/empty nodes 0
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantsToBeMoved from source [127.0.0.1:9002 127.0.0.1:9005]
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 100000000 UnitsUsage 1000 
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9002 SG sg2 Memory 100000000 Units 1000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9003 SG sg1 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Skip Deleted Index Node 127.0.0.1:9005 SG sg3 Memory 100000000 Units 1000
2023-01-06T21:20:19.967+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9006 SG sg3 Memory 400000000 Units 4000
2023-01-06T21:20:19.967+05:30 [Info] Planner::moveTenantsFromDeletedNodes Found SubClusters Below LWM [[127.0.0.1:9001 127.0.0.1:9004] [127.0.0.1:9003 127.0.0.1:9006]]
2023-01-06T21:20:19.967+05:30 [Info] Planner::findTenantPlacement TenantUsage - SourceId 127.0.0.1:9002 TenantId bucket3 MemoryUsage 100000000 UnitsUsage 1000  can be placed on [127.0.0.1:9001 127.0.0.1:9004]
2023-01-06T21:20:19.967+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9002 SG sg2 Memory 0 Units 0
2023-01-06T21:20:19.967+05:30 [Info] Planner::findPlacementForDeletedNodes Remove Deleted Node from solution 127.0.0.1:9005 SG sg3 Memory 0 Units 0
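The ScaleIn case above shows the two outcomes of `moveTenantsFromDeletedNodes`: a tenant evacuated from a deleted node is placed on a surviving sub-cluster below the low watermark (as with bucket3 here), or the planner fails with "Not enough capacity" when no such sub-cluster exists (as in the previous case). A minimal sketch of that placement loop follows; the struct names and the 500000000/5000 low-watermark values are illustrative assumptions, not the planner's actual types or configuration:

```go
package main

import (
	"errors"
	"fmt"
)

// tenantUsage mirrors the "TenantUsage" fields printed by the planner
// (a hypothetical, simplified form of the real planner types).
type tenantUsage struct {
	tenantID string
	memory   int64
	units    int64
}

// subCluster tracks the aggregate load of a replica node pair.
type subCluster struct {
	nodes  []string
	memory int64
	units  int64
}

// Assumed low-watermark thresholds, chosen to be consistent with the log
// (400000000/4000 sub-clusters count as below LWM); the real values come
// from planner configuration.
const (
	lwmMemory = 500000000
	lwmUnits  = 5000
)

// placeTenantsFromDeletedNodes tries to place each tenant evacuated from a
// deleted node onto a surviving sub-cluster that stays within the low
// watermark, and returns an error when no sub-cluster has room, matching
// the "Not enough capacity to place indexes of deleted nodes" failure.
func placeTenantsFromDeletedNodes(tenants []tenantUsage, subs []*subCluster) error {
	for _, t := range tenants {
		placed := false
		for _, s := range subs {
			if s.memory+t.memory <= lwmMemory && s.units+t.units <= lwmUnits {
				s.memory += t.memory
				s.units += t.units
				fmt.Printf("tenant %s placed on %v\n", t.tenantID, s.nodes)
				placed = true
				break
			}
		}
		if !placed {
			return errors.New("not enough capacity to place indexes of deleted nodes")
		}
	}
	return nil
}
```

With one sub-cluster at 400000000/4000, a 100000000/1000 tenant (like bucket3 above) fits; a tenant larger than the remaining headroom reproduces the capacity error.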
2023/01/06 21:20:19 -------------------------------------------
2023/01/06 21:20:19 Rebalance - 1 Subcluster, Below HWM (partial replica repair)
2023-01-06T21:20:19.968+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 500000000 Units 800
2023-01-06T21:20:19.968+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 350000000 Units 500
2023-01-06T21:20:19.968+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.968+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket4,,,idx2,0) on 127.0.0.1:9004
2023-01-06T21:20:19.968+05:30 [Info] Planner::placeMissingReplicaOnTarget Rebuilding lost replica for (bucket5,,,idx2,0) on 127.0.0.1:9004
2023-01-06T21:20:19.968+05:30 [Info] Planner::executeTenantAwareRebal Found SubClusters above HWM []
2023-01-06T21:20:19.968+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9001 SG sg1 Memory 500000000 Units 800
2023-01-06T21:20:19.968+05:30 [Info] Planner::groupIndexNodesIntoSubClusters Index Node 127.0.0.1:9004 SG sg2 Memory 500000000 Units 800
2023-01-06T21:20:19.968+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters  [[127.0.0.1:9001 127.0.0.1:9004]]
2023-01-06T21:20:19.968+05:30 [Info] Planner::evaluateSolutionForScaleIn Found SubClusters below LWM [[127.0.0.1:9001 127.0.0.1:9004]]
--- PASS: TestTenantAwarePlanner (0.09s)
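Throughout the planner cases above, sub-clusters are sorted into "above HWM" (rebalance sources) and "below LWM" (placement targets) by comparing their memory and units usage against high/low watermarks. A sketch of that classification, assuming illustrative watermark values consistent with the log (800000000/8000 is above HWM, 400000000/4000 and 100000000/1000 are below LWM) rather than the planner's real configuration:

```go
package main

import "fmt"

// Assumed watermark thresholds for illustration; the real planner derives
// these from per-node memory quota and units capacity.
const (
	hwmMem, hwmUnits int64 = 700000000, 7000
	lwmMem, lwmUnits int64 = 500000000, 5000
)

// classify buckets a sub-cluster by its usage, mirroring the
// "Found SubClusters above HWM"/"Found SubClusters below LWM" decisions:
// exceeding either high watermark makes it a source of tenants to move,
// while sitting under both low watermarks makes it a placement target.
func classify(mem, units int64) string {
	switch {
	case mem > hwmMem || units > hwmUnits:
		return "above HWM"
	case mem < lwmMem && units < lwmUnits:
		return "below LWM"
	default:
		return "between watermarks"
	}
}

func main() {
	fmt.Println(classify(800000000, 8000)) // above HWM
	fmt.Println(classify(100000000, 1000)) // below LWM
}
```

Note that a sub-cluster above HWM on units alone (or memory alone) is still a rebalance source, which is why both dimensions appear in every log line.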
=== RUN   TestRestfulAPI
2023/01/06 21:20:19 In TestRestfulAPI()
2023/01/06 21:20:19 In DropAllSecondaryIndexes()
2023/01/06 21:20:19 Index found:  indexmut_1
2023/01/06 21:20:20 Dropped index indexmut_1
2023/01/06 21:20:20 Setting JSON docs in KV
2023/01/06 21:20:20 GET all indexes
2023/01/06 21:20:21 200 OK
2023/01/06 21:20:21 FOUND indexes: []
2023/01/06 21:20:21 DROP index: badindexid
2023/01/06 21:20:21 status: 400 Bad Request
2023/01/06 21:20:21 DROP index: 23544142
2023/01/06 21:20:21 status: 500 Internal Server Error
2023/01/06 21:20:21 TEST: malformed body
2023/01/06 21:20:21 400 Bad Request "invalid request body ({name:), unmarshal failed invalid character 'n' looking for beginning of object key string"

2023/01/06 21:20:21 TEST: missing field ``name``
2023/01/06 21:20:21 400 Bad Request "missing field name"
2023/01/06 21:20:21 TEST: empty field ``name``
2023/01/06 21:20:21 400 Bad Request "empty field name"
2023/01/06 21:20:21 TEST: missing field ``bucket``
2023/01/06 21:20:21 400 Bad Request "missing field bucket"
2023/01/06 21:20:21 TEST: empty field ``bucket``
2023/01/06 21:20:21 400 Bad Request "empty field bucket"
2023/01/06 21:20:21 TEST: missing field ``secExprs``
2023/01/06 21:20:21 400 Bad Request "missing field secExprs"
2023/01/06 21:20:21 TEST: empty field ``secExprs``
2023/01/06 21:20:21 400 Bad Request "empty field secExprs"
2023/01/06 21:20:21 TEST: incomplete field ``desc``
2023/01/06 21:20:21 400 Bad Request "incomplete desc information [true]"
2023/01/06 21:20:21 TEST: invalid field ``desc``
2023/01/06 21:20:21 400 Bad Request "incomplete desc information [1]"
2023/01/06 21:20:21 
2023/01/06 21:20:21 CREATE INDEX: idx1
2023/01/06 21:20:31 status : 201 Created
2023/01/06 21:20:31 {"id": "10054306832653691695"} 
2023/01/06 21:20:31 CREATE INDEX: idx2 (defer)
2023/01/06 21:20:32 status : 201 Created
2023/01/06 21:20:32 {"id": "10179418996770853296"} 
2023/01/06 21:20:32 CREATE INDEX: idx3 (defer)
2023/01/06 21:20:32 status : 201 Created
2023/01/06 21:20:32 {"id": "6494300763007877159"} 
2023/01/06 21:20:32 CREATE INDEX: idx4 (defer)
2023/01/06 21:20:32 status : 201 Created
2023/01/06 21:20:32 {"id": "8911423749395657095"} 
2023/01/06 21:20:32 CREATE INDEX: idx5
2023/01/06 21:20:45 status : 201 Created
2023/01/06 21:20:45 {"id": "18132890284472592463"} 
2023/01/06 21:20:45 BUILD single deferred index
2023/01/06 21:20:45 202 Accepted
2023/01/06 21:20:45 GET all indexes
2023/01/06 21:20:45 200 OK
2023/01/06 21:20:45 index idx1 in INDEX_STATE_ACTIVE
2023/01/06 21:20:45 GET all indexes
2023/01/06 21:20:45 200 OK
2023/01/06 21:20:45 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:46 GET all indexes
2023/01/06 21:20:46 200 OK
2023/01/06 21:20:46 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:47 GET all indexes
2023/01/06 21:20:47 200 OK
2023/01/06 21:20:47 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:48 GET all indexes
2023/01/06 21:20:48 200 OK
2023/01/06 21:20:48 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:49 GET all indexes
2023/01/06 21:20:49 200 OK
2023/01/06 21:20:49 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:50 GET all indexes
2023/01/06 21:20:50 200 OK
2023/01/06 21:20:50 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:51 GET all indexes
2023/01/06 21:20:51 200 OK
2023/01/06 21:20:51 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:52 GET all indexes
2023/01/06 21:20:52 200 OK
2023/01/06 21:20:52 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:53 GET all indexes
2023/01/06 21:20:53 200 OK
2023/01/06 21:20:53 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:54 GET all indexes
2023/01/06 21:20:54 200 OK
2023/01/06 21:20:54 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:55 GET all indexes
2023/01/06 21:20:55 200 OK
2023/01/06 21:20:55 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:56 GET all indexes
2023/01/06 21:20:56 200 OK
2023/01/06 21:20:56 index idx2 in INDEX_STATE_INITIAL
2023/01/06 21:20:57 GET all indexes
2023/01/06 21:20:57 200 OK
2023/01/06 21:20:57 index idx2 in INDEX_STATE_CATCHUP
2023/01/06 21:20:58 GET all indexes
2023/01/06 21:20:58 200 OK
2023/01/06 21:20:58 index idx2 in INDEX_STATE_ACTIVE
2023/01/06 21:20:58 BUILD many deferred index
2023/01/06 21:20:58 202 Accepted 
2023/01/06 21:20:58 GET all indexes
2023/01/06 21:20:58 200 OK
2023/01/06 21:20:58 index idx1 in INDEX_STATE_ACTIVE
2023/01/06 21:20:59 GET all indexes
2023/01/06 21:20:59 200 OK
2023/01/06 21:20:59 index idx2 in INDEX_STATE_ACTIVE
2023/01/06 21:20:59 GET all indexes
2023/01/06 21:20:59 200 OK
2023/01/06 21:20:59 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:00 GET all indexes
2023/01/06 21:21:00 200 OK
2023/01/06 21:21:00 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:01 GET all indexes
2023/01/06 21:21:01 200 OK
2023/01/06 21:21:01 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:02 GET all indexes
2023/01/06 21:21:02 200 OK
2023/01/06 21:21:02 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:03 GET all indexes
2023/01/06 21:21:03 200 OK
2023/01/06 21:21:03 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:04 GET all indexes
2023/01/06 21:21:04 200 OK
2023/01/06 21:21:04 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:05 GET all indexes
2023/01/06 21:21:05 200 OK
2023/01/06 21:21:05 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:06 GET all indexes
2023/01/06 21:21:06 200 OK
2023/01/06 21:21:06 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:07 GET all indexes
2023/01/06 21:21:07 200 OK
2023/01/06 21:21:07 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:08 GET all indexes
2023/01/06 21:21:08 200 OK
2023/01/06 21:21:08 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:09 GET all indexes
2023/01/06 21:21:09 200 OK
2023/01/06 21:21:09 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:10 GET all indexes
2023/01/06 21:21:10 200 OK
2023/01/06 21:21:10 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:11 GET all indexes
2023/01/06 21:21:11 200 OK
2023/01/06 21:21:11 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:12 GET all indexes
2023/01/06 21:21:12 200 OK
2023/01/06 21:21:12 index idx3 in INDEX_STATE_INITIAL
2023/01/06 21:21:13 GET all indexes
2023/01/06 21:21:13 200 OK
2023/01/06 21:21:13 index idx3 in INDEX_STATE_CATCHUP
2023/01/06 21:21:14 GET all indexes
2023/01/06 21:21:14 200 OK
2023/01/06 21:21:14 index idx3 in INDEX_STATE_ACTIVE
2023/01/06 21:21:14 GET all indexes
2023/01/06 21:21:14 200 OK
2023/01/06 21:21:14 index idx4 in INDEX_STATE_ACTIVE
2023/01/06 21:21:14 GET all indexes
2023/01/06 21:21:14 200 OK
2023/01/06 21:21:14 index idx5 in INDEX_STATE_ACTIVE
2023/01/06 21:21:14 GET all indexes
2023/01/06 21:21:14 200 OK
2023/01/06 21:21:14 CREATED indexes: [10054306832653691695 10179418996770853296 6494300763007877159 8911423749395657095 18132890284472592463]
2023/01/06 21:21:14 
2023/01/06 21:21:14 LOOKUP missing index
2023/01/06 21:21:14 status : 404 Not Found
2023/01/06 21:21:14 LOOKUP Pyongyang
2023/01/06 21:21:14 status : 200 OK
2023/01/06 21:21:14 number of entries 554
2023/01/06 21:21:15 Expected and Actual scan responses are the same
2023/01/06 21:21:15 LOOKUP with stale as false
2023/01/06 21:21:15 status : 200 OK
2023/01/06 21:21:15 number of entries 554
2023/01/06 21:21:15 Expected and Actual scan responses are the same
2023/01/06 21:21:15 LOOKUP with Rome
2023/01/06 21:21:15 status : 200 OK
2023/01/06 21:21:15 number of entries 540
2023/01/06 21:21:15 Expected and Actual scan responses are the same
2023/01/06 21:21:15 RANGE missing index
2023/01/06 21:21:15 Status : 404 Not Found
2023/01/06 21:21:15 RANGE cities - none
2023/01/06 21:21:15 Status : 200 OK
2023/01/06 21:21:20 number of entries 140902
2023/01/06 21:21:21 Expected and Actual scan responses are the same
2023/01/06 21:21:21 RANGE cities -low
2023/01/06 21:21:21 Status : 200 OK
2023/01/06 21:21:26 number of entries 140902
2023/01/06 21:21:27 Expected and Actual scan responses are the same
2023/01/06 21:21:27 RANGE cities -high
2023/01/06 21:21:27 Status : 200 OK
2023/01/06 21:21:31 number of entries 140902
2023/01/06 21:21:32 Expected and Actual scan responses are the same
2023/01/06 21:21:32 RANGE cities - both
2023/01/06 21:21:32 Status : 200 OK
2023/01/06 21:21:37 number of entries 140902
2023/01/06 21:21:38 Expected and Actual scan responses are the same
2023/01/06 21:21:38 RANGE missing cities
2023/01/06 21:21:38 Status : 200 OK
2023/01/06 21:21:38 number of entries 0
2023/01/06 21:21:38 Expected and Actual scan responses are the same
2023/01/06 21:21:38 
2023/01/06 21:21:38 SCANALL missing index
2023/01/06 21:21:38 {"limit":1000000,"stale":"ok"}
2023/01/06 21:21:38 Status : 404 Not Found
2023/01/06 21:21:38 SCANALL stale ok
2023/01/06 21:21:38 {"limit":1000000,"stale":"ok"}
2023/01/06 21:21:38 Status : 200 OK
2023/01/06 21:21:43 number of entries 140902
2023/01/06 21:21:44 Expected and Actual scan responses are the same
2023/01/06 21:21:44 SCANALL stale false
2023/01/06 21:21:44 {"limit":1000000,"stale":"false"}
2023/01/06 21:21:44 Status : 200 OK
2023/01/06 21:21:49 number of entries 140902
2023/01/06 21:21:50 Expected and Actual scan responses are the same
2023/01/06 21:21:50 
2023/01/06 21:21:50 COUNT missing index
2023/01/06 21:21:50 Status : 404 Not Found
2023/01/06 21:21:50 COUNT cities - none
2023/01/06 21:21:50 Status : 200 OK
2023/01/06 21:21:50 number of entries 140902
2023/01/06 21:21:51 COUNT cities -low
2023/01/06 21:21:51 Status : 200 OK
2023/01/06 21:21:51 number of entries 140902
2023/01/06 21:21:51 COUNT cities -high
2023/01/06 21:21:51 Status : 200 OK
2023/01/06 21:21:51 number of entries 140902
2023/01/06 21:21:51 COUNT cities - both
2023/01/06 21:21:52 Status : 200 OK
2023/01/06 21:21:52 number of entries 140902
2023/01/06 21:21:52 COUNT missing cities
2023/01/06 21:21:52 Status : 200 OK
2023/01/06 21:21:52 number of entries 0
2023/01/06 21:21:52 
2023/01/06 21:21:53 STATS: Testing URLs with valid authentication
2023/01/06 21:21:53 STATS: Testing URLs with invalid authentication
2023/01/06 21:21:53 STATS: Testing invalid URLs
2023/01/06 21:21:53 STATS: Testing unsupported methods
2023/01/06 21:21:53 
--- PASS: TestRestfulAPI (93.95s)
=== RUN   TestStatIndexInstFilter
2023/01/06 21:21:53 CREATE INDEX: statIdx1
2023/01/06 21:22:05 status : 201 Created
2023/01/06 21:22:05 {"id": "1318921295283145353"} 
2023/01/06 21:22:05 CREATE INDEX: statIdx2
2023/01/06 21:22:19 status : 201 Created
2023/01/06 21:22:19 {"id": "7293438131611810601"} 
2023/01/06 21:22:19 Instance Id for statIdx2 is 10285604704520340598, common.IndexInstId
--- PASS: TestStatIndexInstFilter (25.34s)
=== RUN   TestBucketDefaultDelete
2023-01-06T21:22:19.387+05:30 [Warn] Client:runObserveStreamingEndpoint streaming endpoint for /pools/default/bs/default returned err EOF
2023-01-06T21:22:19.387+05:30 [Warn] serviceChangeNotifier: Connection terminated for collection manifest notifier instance of http://%40query@127.0.0.1:9000, default, bucket: default, (EOF)
2023/01/06 21:22:21 Deleted bucket default, responseBody: 
2023/01/06 21:22:36 Created bucket default, responseBody: 
2023/01/06 21:22:52 Populating the default bucket
2023/01/06 21:23:01 Using n1ql client
2023-01-06T21:23:01.036+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:23:01.036+05:30 [Info] GSIC[default/default-_default-_default-1673020381034320560] started ...
2023/01/06 21:23:01 Scan failed as expected with error: Index Not Found - cause: GSI index index_isActive not found.
2023/01/06 21:23:01 Populating the default bucket after it was deleted
2023/01/06 21:23:12 Created the secondary index index_isActive. Waiting for it become active
2023/01/06 21:23:12 Index is 16602401946805619911 now active
2023/01/06 21:23:12 Using n1ql client
2023/01/06 21:23:12 Expected and Actual scan responses are the same
--- PASS: TestBucketDefaultDelete (53.23s)
=== RUN   TestMixedDatatypesScanAll
2023/01/06 21:23:12 In TestMixedDatatypesScanAll()
2023/01/06 21:23:12 Before test begin: Length of kv docs is 10002
2023/01/06 21:23:12 In DropAllSecondaryIndexes()
2023/01/06 21:23:12 Index found:  index_isActive
2023/01/06 21:23:12 Dropped index index_isActive
2023/01/06 21:23:12 Number of number fields is: 246
2023/01/06 21:23:12 Number of string fields is: 259
2023/01/06 21:23:12 Number of json fields is: 254
2023/01/06 21:23:12 Number of true bool fields is: 121
2023/01/06 21:23:12 Number of false bool fields is: 120
2023/01/06 21:23:12 After generate docs: Length of kv docs is 11002
2023/01/06 21:23:12 Setting mixed datatypes JSON docs in KV
2023/01/06 21:23:18 Created the secondary index index_mixeddt. Waiting for it become active
2023/01/06 21:23:18 Index is 16040839439147443974 now active
2023/01/06 21:23:18 Using n1ql client
2023/01/06 21:23:18 Expected and Actual scan responses are the same
2023/01/06 21:23:18 Lengths of expected and actual scan results are:  1000 and 1000
2023/01/06 21:23:18 End: Length of kv docs is 11002
--- PASS: TestMixedDatatypesScanAll (6.19s)
=== RUN   TestMixedDatatypesRange_Float
2023/01/06 21:23:18 In TestMixedDatatypesRange_Float()
2023/01/06 21:23:18 In DropAllSecondaryIndexes()
2023/01/06 21:23:18 Index found:  index_mixeddt
2023/01/06 21:23:18 Dropped index index_mixeddt
2023/01/06 21:23:18 Number of number fields is: 233
2023/01/06 21:23:18 Number of string fields is: 267
2023/01/06 21:23:18 Number of json fields is: 236
2023/01/06 21:23:18 Number of true bool fields is: 132
2023/01/06 21:23:18 Number of false bool fields is: 132
2023/01/06 21:23:18 Setting mixed datatypes JSON docs in KV
2023/01/06 21:23:24 Created the secondary index index_mixeddt. Waiting for it become active
2023/01/06 21:23:24 Index is 34242933025370671 now active
2023/01/06 21:23:24 Using n1ql client
2023/01/06 21:23:24 Expected and Actual scan responses are the same
2023/01/06 21:23:24 Lengths of expected and actual scan results are:  26 and 26
2023/01/06 21:23:24 Using n1ql client
2023/01/06 21:23:24 Expected and Actual scan responses are the same
2023/01/06 21:23:24 Lengths of expected and actual scan results are:  2 and 2
2023/01/06 21:23:24 Length of kv docs is 12002
--- PASS: TestMixedDatatypesRange_Float (6.14s)
=== RUN   TestMixedDatatypesRange_String
2023/01/06 21:23:24 In TestMixedDatatypesRange_String()
2023/01/06 21:23:24 In DropAllSecondaryIndexes()
2023/01/06 21:23:24 Index found:  index_mixeddt
2023/01/06 21:23:25 Dropped index index_mixeddt
2023/01/06 21:23:25 Number of number fields is: 269
2023/01/06 21:23:25 Number of string fields is: 241
2023/01/06 21:23:25 Number of json fields is: 244
2023/01/06 21:23:25 Number of true bool fields is: 134
2023/01/06 21:23:25 Number of false bool fields is: 112
2023/01/06 21:23:25 Setting mixed datatypes JSON docs in KV
2023/01/06 21:23:30 Created the secondary index index_mixeddt. Waiting for it become active
2023/01/06 21:23:30 Index is 14213486690477983893 now active
2023/01/06 21:23:30 Using n1ql client
2023/01/06 21:23:30 Expected and Actual scan responses are the same
2023/01/06 21:23:30 Lengths of expected and actual scan results are:  189 and 189
2023/01/06 21:23:30 Length of kv docs is 13002
--- PASS: TestMixedDatatypesRange_String (6.13s)
=== RUN   TestMixedDatatypesRange_Json
2023/01/06 21:23:30 In TestMixedDatatypesRange_Json()
2023/01/06 21:23:30 In DropAllSecondaryIndexes()
2023/01/06 21:23:30 Index found:  index_mixeddt
2023/01/06 21:23:31 Dropped index index_mixeddt
2023/01/06 21:23:31 Number of number fields is: 248
2023/01/06 21:23:31 Number of string fields is: 251
2023/01/06 21:23:31 Number of json fields is: 250
2023/01/06 21:23:31 Number of true bool fields is: 130
2023/01/06 21:23:31 Number of false bool fields is: 121
2023/01/06 21:23:31 Setting mixed datatypes JSON docs in KV
2023/01/06 21:23:37 Created the secondary index index_mixeddt. Waiting for it become active
2023/01/06 21:23:37 Index is 14707761190999542609 now active
2023/01/06 21:23:37 Using n1ql client
2023/01/06 21:23:37 Expected and Actual scan responses are the same
2023/01/06 21:23:37 Lengths of expected and actual scan results are:  730 and 730
2023/01/06 21:23:37 Length of kv docs is 14002
--- PASS: TestMixedDatatypesRange_Json (6.72s)
=== RUN   TestMixedDatatypesScan_Bool
2023/01/06 21:23:37 In TestMixedDatatypesScan_Bool()
2023/01/06 21:23:37 In DropAllSecondaryIndexes()
2023/01/06 21:23:37 Index found:  index_mixeddt
2023/01/06 21:23:37 Dropped index index_mixeddt
2023/01/06 21:23:37 Number of number fields is: 245
2023/01/06 21:23:37 Number of string fields is: 250
2023/01/06 21:23:37 Number of json fields is: 277
2023/01/06 21:23:37 Number of true bool fields is: 108
2023/01/06 21:23:37 Number of false bool fields is: 120
2023/01/06 21:23:37 Setting mixed datatypes JSON docs in KV
2023/01/06 21:23:43 Created the secondary index index_mixeddt. Waiting for it become active
2023/01/06 21:23:43 Index is 13858584833691741324 now active
2023/01/06 21:23:43 Using n1ql client
2023/01/06 21:23:43 Expected and Actual scan responses are the same
2023/01/06 21:23:43 Lengths of expected and actual scan results are:  504 and 504
2023/01/06 21:23:43 Using n1ql client
2023/01/06 21:23:43 Expected and Actual scan responses are the same
2023/01/06 21:23:43 Lengths of expected and actual scan results are:  485 and 485
2023/01/06 21:23:43 Length of kv docs is 15002
--- PASS: TestMixedDatatypesScan_Bool (5.84s)
=== RUN   TestLargeSecondaryKeyLength
2023/01/06 21:23:43 In TestLargeSecondaryKeyLength()
2023/01/06 21:23:43 In DropAllSecondaryIndexes()
2023/01/06 21:23:43 Index found:  index_mixeddt
2023/01/06 21:23:43 Dropped index index_mixeddt
2023/01/06 21:23:43 Setting JSON docs in KV
2023/01/06 21:23:49 Created the secondary index index_LongSecField. Waiting for it become active
2023/01/06 21:23:49 Index is 12844679115011696371 now active
2023/01/06 21:23:49 Using n1ql client
2023/01/06 21:23:49 ScanAll: Lengths of expected and actual scan results are:  1000 and 1000
2023/01/06 21:23:49 Expected and Actual scan responses are the same
2023/01/06 21:23:49 Using n1ql client
2023/01/06 21:23:49 Range: Lengths of expected and actual scan results are:  817 and 817
2023/01/06 21:23:49 Expected and Actual scan responses are the same
2023/01/06 21:23:49 End: Length of kv docs is 16002
--- PASS: TestLargeSecondaryKeyLength (6.23s)
=== RUN   TestLargePrimaryKeyLength
2023/01/06 21:23:49 In TestLargePrimaryKeyLength()
2023/01/06 21:23:49 In DropAllSecondaryIndexes()
2023/01/06 21:23:49 Index found:  index_LongSecField
2023/01/06 21:23:49 Dropped index index_LongSecField
2023/01/06 21:23:50 Setting JSON docs in KV
2023/01/06 21:23:56 Created the secondary index index_LongPrimaryField. Waiting for it become active
2023/01/06 21:23:56 Index is 15229226999365372245 now active
2023/01/06 21:23:56 Using n1ql client
2023/01/06 21:23:57 Lengths of num of docs and scanResults are:  17002 and 17002
2023/01/06 21:23:57 End: Length of kv docs is 17002
--- PASS: TestLargePrimaryKeyLength (7.83s)
=== RUN   TestUpdateMutations_DeleteField
2023/01/06 21:23:57 In TestUpdateMutations_DeleteField()
2023/01/06 21:23:58 Setting JSON docs in KV
2023/01/06 21:24:04 Created the secondary index index_bal. Waiting for it become active
2023/01/06 21:24:04 Index is 4578445815788854618 now active
2023/01/06 21:24:05 Using n1ql client
2023/01/06 21:24:05 Expected and Actual scan responses are the same
2023/01/06 21:24:05 Using n1ql client
2023/01/06 21:24:05 Expected and Actual scan responses are the same
--- PASS: TestUpdateMutations_DeleteField (7.91s)
=== RUN   TestUpdateMutations_AddField
2023/01/06 21:24:05 In TestUpdateMutations_AddField()
2023/01/06 21:24:05 Setting JSON docs in KV
2023/01/06 21:24:12 Created the secondary index index_newField. Waiting for it become active
2023/01/06 21:24:12 Index is 13684257865885784589 now active
2023/01/06 21:24:12 Using n1ql client
2023/01/06 21:24:12 Count of scan results before add field mutations:  0
2023/01/06 21:24:12 Expected and Actual scan responses are the same
2023/01/06 21:24:13 Using n1ql client
2023/01/06 21:24:13 Count of scan results after add field mutations:  300
2023/01/06 21:24:13 Expected and Actual scan responses are the same
--- PASS: TestUpdateMutations_AddField (7.81s)
=== RUN   TestUpdateMutations_DataTypeChange
2023/01/06 21:24:13 In TestUpdateMutations_DataTypeChange()
2023/01/06 21:24:13 Setting JSON docs in KV
2023/01/06 21:24:21 Created the secondary index index_isUserActive. Waiting for it become active
2023/01/06 21:24:21 Index is 5534195851831148324 now active
2023/01/06 21:24:21 Using n1ql client
2023/01/06 21:24:21 Expected and Actual scan responses are the same
2023/01/06 21:24:22 Using n1ql client
2023/01/06 21:24:22 Expected and Actual scan responses are the same
2023/01/06 21:24:22 Using n1ql client
2023/01/06 21:24:22 Expected and Actual scan responses are the same
2023/01/06 21:24:22 Using n1ql client
2023/01/06 21:24:22 Expected and Actual scan responses are the same
--- PASS: TestUpdateMutations_DataTypeChange (9.05s)
=== RUN   TestMultipleBuckets
2023/01/06 21:24:22 In TestMultipleBuckets()
2023/01/06 21:24:22 In DropAllSecondaryIndexes()
2023/01/06 21:24:22 Index found:  index_isUserActive
2023/01/06 21:24:22 Dropped index index_isUserActive
2023/01/06 21:24:22 Index found:  index_newField
2023/01/06 21:24:22 Dropped index index_newField
2023/01/06 21:24:22 Index found:  index_LongPrimaryField
2023/01/06 21:24:22 Dropped index index_LongPrimaryField
2023/01/06 21:24:22 Index found:  index_bal
2023/01/06 21:24:22 Dropped index index_bal
2023/01/06 21:25:00 Flushed the bucket default, Response body: 
2023/01/06 21:25:03 Modified parameters of bucket default, responseBody: 
2023/01/06 21:25:03 Created bucket testbucket2, responseBody: 
2023/01/06 21:25:03 Created bucket testbucket3, responseBody: 
2023/01/06 21:25:03 Created bucket testbucket4, responseBody: 
2023/01/06 21:25:18 Generating docs and Populating all the buckets
2023/01/06 21:25:22 Created the secondary index bucket1_age. Waiting for it become active
2023/01/06 21:25:22 Index is 4567251622027161292 now active
2023/01/06 21:25:28 Created the secondary index bucket2_city. Waiting for it become active
2023/01/06 21:25:28 Index is 10065797233611028694 now active
2023/01/06 21:25:35 Created the secondary index bucket3_gender. Waiting for it become active
2023/01/06 21:25:35 Index is 13301069686788929548 now active
2023/01/06 21:25:41 Created the secondary index bucket4_balance. Waiting for it become active
2023/01/06 21:25:41 Index is 16070806377752659682 now active
2023/01/06 21:25:44 Using n1ql client
2023/01/06 21:25:44 Expected and Actual scan responses are the same
2023/01/06 21:25:44 Using n1ql client
2023-01-06T21:25:44.833+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:25:44.833+05:30 [Info] GSIC[default/testbucket2-_default-_default-1673020544830282655] started ...
2023/01/06 21:25:44 Expected and Actual scan responses are the same
2023/01/06 21:25:44 Using n1ql client
2023-01-06T21:25:44.852+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:25:44.852+05:30 [Info] GSIC[default/testbucket3-_default-_default-1673020544849810649] started ...
2023/01/06 21:25:44 Expected and Actual scan responses are the same
2023/01/06 21:25:44 Using n1ql client
2023-01-06T21:25:44.864+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:25:44.864+05:30 [Info] GSIC[default/testbucket4-_default-_default-1673020544861670461] started ...
2023/01/06 21:25:44 Expected and Actual scan responses are the same
2023/01/06 21:25:47 Deleted bucket testbucket2, responseBody: 
2023/01/06 21:25:49 Deleted bucket testbucket3, responseBody: 
2023/01/06 21:25:52 Deleted bucket testbucket4, responseBody: 
2023/01/06 21:25:55 Modified parameters of bucket default, responseBody: 
--- PASS: TestMultipleBuckets (107.78s)
=== RUN   TestBucketFlush
2023/01/06 21:26:10 In TestBucketFlush()
2023/01/06 21:26:10 In DropAllSecondaryIndexes()
2023/01/06 21:26:10 Index found:  bucket1_age
2023/01/06 21:26:10 Dropped index bucket1_age
2023/01/06 21:26:47 Flushed the bucket default, Response body: 
2023/01/06 21:26:51 Created the secondary index index_age. Waiting for it become active
2023/01/06 21:26:51 Index is 15096635509715885669 now active
2023/01/06 21:26:51 Using n1ql client
2023/01/06 21:26:52 Expected and Actual scan responses are the same
2023/01/06 21:26:58 Created the secondary index index_gender. Waiting for it become active
2023/01/06 21:26:58 Index is 15527005598579698576 now active
2023/01/06 21:26:58 Using n1ql client
2023/01/06 21:26:58 Expected and Actual scan responses are the same
2023/01/06 21:27:05 Created the secondary index index_city. Waiting for it become active
2023/01/06 21:27:05 Index is 14406887279433432885 now active
2023/01/06 21:27:05 Using n1ql client
2023/01/06 21:27:05 Expected and Actual scan responses are the same
2023/01/06 21:27:43 Flushed the bucket default, Response body: 
2023/01/06 21:27:43 TestBucketFlush:: Flushed the bucket
2023/01/06 21:27:43 Using n1ql client
2023/01/06 21:27:43 Using n1ql client
2023/01/06 21:27:43 Using n1ql client
--- PASS: TestBucketFlush (93.01s)
=== RUN   TestLargeDocumentSize
2023/01/06 21:27:43 In TestLargeDocumentSize()
2023/01/06 21:27:43 Data file exists. Skipping download
2023/01/06 21:27:43 Length of docs and largeDocs = 200 and 200
2023/01/06 21:27:47 Created the secondary index index_userscreenname. Waiting for it become active
2023/01/06 21:27:47 Index is 2109650484021458956 now active
2023/01/06 21:27:47 Using n1ql client
2023/01/06 21:27:48 Expected and Actual scan responses are the same
--- PASS: TestLargeDocumentSize (5.68s)
=== RUN   TestFieldsWithSpecialCharacters
2023/01/06 21:27:48 In TestFieldsWithSpecialCharacters()
2023/01/06 21:27:54 Created the secondary index index_specialchar. Waiting for it become active
2023/01/06 21:27:54 Index is 5464481946870056320 now active
2023/01/06 21:27:54 Looking up for value ß™£,#w#
2023/01/06 21:27:54 Using n1ql client
2023/01/06 21:27:55 Expected and Actual scan responses are the same
--- PASS: TestFieldsWithSpecialCharacters (6.35s)
=== RUN   TestLargeKeyLookup
2023/01/06 21:27:55 In TestLargeKeyLookup()
2023/01/06 21:28:01 Created the secondary index index_largeKeyLookup. Waiting for it become active
2023/01/06 21:28:01 Index is 8099907860503713351 now active
2023/01/06 21:28:01 Looking up for a large key
2023/01/06 21:28:01 Using n1ql client
2023/01/06 21:28:01 Expected and Actual scan responses are the same
--- PASS: TestLargeKeyLookup (6.43s)
=== RUN   TestIndexNameValidation
2023/01/06 21:28:01 In TestIndexNameValidation()
2023/01/06 21:28:02 Setting JSON docs in KV
2023/01/06 21:28:03 Creation of index with invalid name ÌñÐÉx&(abc_% failed as expected
2023/01/06 21:28:08 Created the secondary index #primary-Index_test. Waiting for it become active
2023/01/06 21:28:08 Index is 15758503958669656550 now active
2023/01/06 21:28:08 Using n1ql client
2023/01/06 21:28:08 Expected and Actual scan responses are the same
--- PASS: TestIndexNameValidation (7.12s)
=== RUN   TestSameFieldNameAtDifferentLevels
2023/01/06 21:28:08 In TestSameFieldNameAtDifferentLevels()
2023/01/06 21:28:08 Setting JSON docs in KV
2023/01/06 21:28:15 Created the secondary index cityindex. Waiting for it become active
2023/01/06 21:28:15 Index is 11276448041268238771 now active
2023/01/06 21:28:15 Using n1ql client
2023/01/06 21:28:15 Expected and Actual scan responses are the same
--- PASS: TestSameFieldNameAtDifferentLevels (7.03s)
=== RUN   TestSameIndexNameInTwoBuckets
2023/01/06 21:28:15 In TestSameIndexNameInTwoBuckets()
2023/01/06 21:28:15 In DropAllSecondaryIndexes()
2023/01/06 21:28:15 Index found:  cityindex
2023/01/06 21:28:15 Dropped index cityindex
2023/01/06 21:28:15 Index found:  index_largeKeyLookup
2023/01/06 21:28:15 Dropped index index_largeKeyLookup
2023/01/06 21:28:15 Index found:  index_gender
2023/01/06 21:28:15 Dropped index index_gender
2023/01/06 21:28:15 Index found:  index_age
2023/01/06 21:28:16 Dropped index index_age
2023/01/06 21:28:16 Index found:  index_city
2023/01/06 21:28:16 Dropped index index_city
2023/01/06 21:28:16 Index found:  #primary-Index_test
2023/01/06 21:28:16 Dropped index #primary-Index_test
2023/01/06 21:28:16 Index found:  index_userscreenname
2023/01/06 21:28:16 Dropped index index_userscreenname
2023/01/06 21:28:16 Index found:  index_specialchar
2023/01/06 21:28:16 Dropped index index_specialchar
2023/01/06 21:28:54 Flushed the bucket default, Response body: 
2023/01/06 21:28:57 Modified parameters of bucket default, responseBody: 
2023/01/06 21:28:57 Created bucket buck2, responseBody: 
2023/01/06 21:29:12 Generating docs and Populating all the buckets
2023/01/06 21:29:16 Created the secondary index b_idx. Waiting for it become active
2023/01/06 21:29:16 Index is 7595633455235977560 now active
2023/01/06 21:29:22 Created the secondary index b_idx. Waiting for it become active
2023/01/06 21:29:22 Index is 9961343396262803037 now active
2023/01/06 21:29:25 Using n1ql client
2023/01/06 21:29:25 Expected and Actual scan responses are the same
2023/01/06 21:29:25 Using n1ql client
2023-01-06T21:29:25.619+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:29:25.620+05:30 [Info] GSIC[default/buck2-_default-_default-1673020765617155664] started ...
2023/01/06 21:29:25 Expected and Actual scan responses are the same
2023/01/06 21:29:28 Modified parameters of bucket default, responseBody: 
2023/01/06 21:29:30 Deleted bucket buck2, responseBody: 
--- PASS: TestSameIndexNameInTwoBuckets (90.03s)
=== RUN   TestLargeKeysSplChars
2023/01/06 21:29:45 In TestLargeKeysSplChars()
2023/01/06 21:29:55 Created the secondary index idspl1. Waiting for it become active
2023/01/06 21:29:55 Index is 1603214338487059394 now active
2023/01/06 21:30:04 Created the secondary index idspl2. Waiting for it become active
2023/01/06 21:30:04 Index is 2360073781178805419 now active
2023/01/06 21:30:11 Created the secondary index idspl3. Waiting for it become active
2023/01/06 21:30:11 Index is 11453795161799713712 now active
2023/01/06 21:30:11 Using n1ql client
2023/01/06 21:30:11 Expected and Actual scan responses are the same
2023-01-06T21:30:11.475+05:30 [Error] transport error between 127.0.0.1:57332->127.0.0.1:9107: write tcp 127.0.0.1:57332->127.0.0.1:9107: write: broken pipe
2023-01-06T21:30:11.476+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:57332->127.0.0.1:9107: write: broken pipe`
2023-01-06T21:30:11.476+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T21:30:11.476+05:30 [Error] metadataClient:PickRandom: Replicas - [5029337224121651345], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 21:30:11 Expected and Actual scan responses are the same
2023/01/06 21:30:11 Using n1ql client
2023/01/06 21:30:11 Expected and Actual scan responses are the same
--- PASS: TestLargeKeysSplChars (26.16s)
=== RUN   TestVeryLargeIndexKey
2023/01/06 21:30:11 In DropAllSecondaryIndexes()
2023/01/06 21:30:11 Index found:  idspl2
2023/01/06 21:30:12 Dropped index idspl2
2023/01/06 21:30:12 Index found:  idspl3
2023/01/06 21:30:12 Dropped index idspl3
2023/01/06 21:30:12 Index found:  idspl1
2023/01/06 21:30:12 Dropped index idspl1
2023/01/06 21:30:12 Index found:  b_idx
2023/01/06 21:30:12 Dropped index b_idx
2023/01/06 21:30:50 Flushed the bucket default, Response body: 
2023/01/06 21:30:50 TestVeryLargeIndexKey:: Flushed the bucket
2023/01/06 21:30:50 clusterconfig.KVAddress = 127.0.0.1:9000
2023/01/06 21:30:55 Created the secondary index i1. Waiting for it become active
2023/01/06 21:30:55 Index is 2756199751881262900 now active
2023/01/06 21:30:55 Using n1ql client
2023/01/06 21:30:55 Expected and Actual scan responses are the same
2023/01/06 21:31:02 Created the secondary index i2. Waiting for it become active
2023/01/06 21:31:02 Index is 6466772759481346745 now active
2023/01/06 21:31:02 Using n1ql client
2023/01/06 21:31:03 Expected and Actual scan responses are the same
2023/01/06 21:31:03 In DropAllSecondaryIndexes()
2023/01/06 21:31:03 Index found:  i2
2023/01/06 21:31:03 Dropped index i2
2023/01/06 21:31:03 Index found:  i1
2023/01/06 21:31:03 Dropped index i1
2023/01/06 21:31:41 Flushed the bucket default, Response body: 
--- PASS: TestVeryLargeIndexKey (89.59s)
=== RUN   TestTempBufScanResult
2023/01/06 21:31:41 In DropAllSecondaryIndexes()
2023/01/06 21:32:19 Flushed the bucket default, Response body: 
2023/01/06 21:32:19 TestTempBufScanResult:: Flushed the bucket
2023/01/06 21:32:22 Created the secondary index index_idxKey. Waiting for it become active
2023/01/06 21:32:22 Index is 11165036968119958780 now active
2023/01/06 21:32:22 Using n1ql client
2023/01/06 21:32:23 Expected and Actual scan responses are the same
2023/01/06 21:32:23 In DropAllSecondaryIndexes()
2023/01/06 21:32:23 Index found:  index_idxKey
2023/01/06 21:32:23 Dropped index index_idxKey
2023/01/06 21:33:01 Flushed the bucket default, Response body: 
--- PASS: TestTempBufScanResult (80.09s)
=== RUN   TestBuildDeferredAnotherBuilding
2023/01/06 21:33:01 In TestBuildDeferredAnotherBuilding()
2023/01/06 21:33:01 In DropAllSecondaryIndexes()
2023/01/06 21:33:46 Setting JSON docs in KV
2023/01/06 21:35:50 Build the deferred index id_age1. Waiting for the index to become active
2023/01/06 21:35:50 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:51 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:52 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:53 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:54 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:55 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:56 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:57 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:58 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:35:59 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:36:00 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:36:01 Waiting for index 697619596568266809 to go active ...
2023/01/06 21:36:02 Index is 697619596568266809 now active
2023/01/06 21:36:03 Build command issued for the deferred indexes [11779283801271756921]
2023/01/06 21:36:05 Build index failed as expected: Build index fails. Index id_age will retry building in the background for reason: Build Already In Progress. Keyspace default.
2023/01/06 21:36:05 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:06 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:07 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:08 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:09 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:10 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:11 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:12 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:13 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:14 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:15 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:16 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:17 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:18 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:19 Waiting for index 11779283801271756921 to go active ...
2023/01/06 21:36:20 Index is 11779283801271756921 now active
2023/01/06 21:36:20 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:21 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:22 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:23 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:24 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:25 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:26 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:27 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:28 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:29 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:30 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:31 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:32 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:33 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:34 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:35 Waiting for index 5759591472857311401 to go active ...
2023/01/06 21:36:36 Index is 5759591472857311401 now active
2023/01/06 21:36:36 Using n1ql client
2023/01/06 21:36:36 Expected and Actual scan responses are the same
2023/01/06 21:36:37 Using n1ql client
2023/01/06 21:36:37 Expected and Actual scan responses are the same
--- PASS: TestBuildDeferredAnotherBuilding (215.79s)
=== RUN   TestMultipleBucketsDeferredBuild
2023/01/06 21:36:37 In TestMultipleBucketsDeferredBuild()
2023/01/06 21:36:37 In DropAllSecondaryIndexes()
2023/01/06 21:36:37 Index found:  id_age
2023/01/06 21:36:37 Dropped index id_age
2023/01/06 21:36:37 Index found:  id_company
2023/01/06 21:36:37 Dropped index id_company
2023/01/06 21:36:37 Index found:  id_age1
2023/01/06 21:36:37 Dropped index id_age1
2023/01/06 21:37:14 Flushed the bucket default, Response body: 
2023/01/06 21:37:17 Modified parameters of bucket default, responseBody: 
2023/01/06 21:37:17 http://127.0.0.1:9000/pools/default/buckets/defertest_buck2
2023/01/06 21:37:17 &{DELETE http://127.0.0.1:9000/pools/default/buckets/defertest_buck2 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 21:37:17 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:07:16 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc00bb04380 31 [] false false map[] 0xc00b107000 }
2023/01/06 21:37:17 DeleteBucket failed for bucket defertest_buck2 
2023/01/06 21:37:17 Deleted bucket defertest_buck2, responseBody: Requested resource not found.
2023/01/06 21:37:17 Created bucket defertest_buck2, responseBody: 
2023/01/06 21:37:33 Setting JSON docs in KV
2023/01/06 21:38:52 Build command issued for the deferred indexes [16347144043082126092]
2023/01/06 21:38:53 Build command issued for the deferred indexes [16703451233578561308 2778929225157566995]
2023/01/06 21:38:53 Index state of 2778929225157566995 is INDEX_STATE_READY
2023/01/06 21:38:53 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:38:54 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:38:55 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:38:56 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:38:57 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:38:58 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:38:59 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:39:00 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:39:01 Waiting for index 16347144043082126092 to go active ...
2023/01/06 21:39:02 Index is 16347144043082126092 now active
2023/01/06 21:39:02 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:03 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:04 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:05 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:06 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:07 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:08 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:09 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:10 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:11 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:12 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:13 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:14 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:15 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:16 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:17 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:18 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:19 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:20 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:21 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:22 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:23 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:24 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:25 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:26 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:27 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:28 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:29 Waiting for index 16703451233578561308 to go active ...
2023/01/06 21:39:30 Index is 16703451233578561308 now active
2023/01/06 21:39:30 Using n1ql client
2023/01/06 21:39:30 Expected and Actual scan responses are the same
2023/01/06 21:39:30 Using n1ql client
2023/01/06 21:39:30 Expected and Actual scan responses are the same
2023/01/06 21:39:30 Using n1ql client
2023-01-06T21:39:30.321+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T21:39:30.322+05:30 [Info] GSIC[default/defertest_buck2-_default-_default-1673021370319474967] started ...
2023/01/06 21:39:30 Expected and Actual scan responses are the same
2023/01/06 21:39:33 Modified parameters of bucket default, responseBody: 
2023/01/06 21:39:35 Deleted bucket defertest_buck2, responseBody: 
--- PASS: TestMultipleBucketsDeferredBuild (183.37s)
=== RUN   TestCreateDropCreateDeferredIndex
2023/01/06 21:39:40 In TestCreateDropCreateDeferredIndex()
2023/01/06 21:39:40 In DropAllSecondaryIndexes()
2023/01/06 21:39:40 Index found:  buck1_id1
2023/01/06 21:39:40 Dropped index buck1_id1
2023/01/06 21:39:40 Index found:  buck1_id2
2023/01/06 21:39:41 Dropped index buck1_id2
2023/01/06 21:39:43 Setting JSON docs in KV
2023/01/06 21:39:56 Created the secondary index id_company. Waiting for it become active
2023/01/06 21:39:56 Index is 17652475038150034388 now active
2023/01/06 21:39:57 Dropping the secondary index id_age
2023/01/06 21:39:57 Index dropped
2023/01/06 21:40:01 Setting JSON docs in KV
2023/01/06 21:40:10 Using n1ql client
2023/01/06 21:40:10 Expected and Actual scan responses are the same
--- PASS: TestCreateDropCreateDeferredIndex (29.86s)
=== RUN   TestMultipleDeferredIndexes_BuildTogether
2023/01/06 21:40:10 In TestMultipleDeferredIndexes_BuildTogether()
2023/01/06 21:40:10 In DropAllSecondaryIndexes()
2023/01/06 21:40:10 Index found:  id_company
2023/01/06 21:40:10 Dropped index id_company
2023/01/06 21:40:13 Setting JSON docs in KV
2023/01/06 21:40:26 Created the secondary index id_company. Waiting for it become active
2023/01/06 21:40:26 Index is 3276226088548291278 now active
2023/01/06 21:40:28 Build command issued for the deferred indexes [id_age id_gender id_isActive], bucket: default, scope: _default, coll: _default
2023/01/06 21:40:28 Waiting for the index id_age to become active
2023/01/06 21:40:28 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:29 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:30 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:31 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:32 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:33 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:34 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:35 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:36 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:37 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:38 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:39 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:40 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:41 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:42 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:43 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:44 Waiting for index 2772195228519934429 to go active ...
2023/01/06 21:40:45 Index is 2772195228519934429 now active
2023/01/06 21:40:45 Waiting for the index id_gender to become active
2023/01/06 21:40:45 Index is 10486600000964723873 now active
2023/01/06 21:40:45 Waiting for the index id_isActive to become active
2023/01/06 21:40:45 Index is 6468033248589345625 now active
2023/01/06 21:40:45 Using n1ql client
2023/01/06 21:40:45 Expected and Actual scan responses are the same
2023/01/06 21:40:48 Setting JSON docs in KV
2023/01/06 21:40:57 Using n1ql client
2023/01/06 21:40:57 Expected and Actual scan responses are the same
2023/01/06 21:40:57 Using n1ql client
2023/01/06 21:40:57 Expected and Actual scan responses are the same
--- PASS: TestMultipleDeferredIndexes_BuildTogether (47.12s)
=== RUN   TestMultipleDeferredIndexes_BuildOneByOne
2023/01/06 21:40:57 In TestMultipleDeferredIndexes_BuildOneByOne()
2023/01/06 21:40:57 In DropAllSecondaryIndexes()
2023/01/06 21:40:57 Index found:  id_company
2023/01/06 21:40:57 Dropped index id_company
2023/01/06 21:40:57 Index found:  id_age
2023/01/06 21:40:57 Dropped index id_age
2023/01/06 21:40:57 Index found:  id_isActive
2023/01/06 21:40:58 Dropped index id_isActive
2023/01/06 21:40:58 Index found:  id_gender
2023/01/06 21:40:58 Dropped index id_gender
2023/01/06 21:41:00 Setting JSON docs in KV
2023/01/06 21:41:16 Created the secondary index id_company. Waiting for it become active
2023/01/06 21:41:16 Index is 6805313022805213648 now active
2023/01/06 21:41:17 Build command issued for the deferred indexes [id_age], bucket: default, scope: _default, coll: _default
2023/01/06 21:41:17 Waiting for the index id_age to become active
2023/01/06 21:41:17 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:18 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:19 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:20 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:21 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:22 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:23 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:24 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:25 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:26 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:27 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:28 Waiting for index 18286908573405685039 to go active ...
2023/01/06 21:41:29 Index is 18286908573405685039 now active
2023/01/06 21:41:29 Build command issued for the deferred indexes [id_gender], bucket: default, scope: _default, coll: _default
2023/01/06 21:41:29 Waiting for the index id_gender to become active
2023/01/06 21:41:29 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:30 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:31 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:32 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:33 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:34 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:35 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:36 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:37 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:38 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:39 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:40 Waiting for index 3852308610770215198 to go active ...
2023/01/06 21:41:41 Index is 3852308610770215198 now active
2023/01/06 21:41:41 Build command issued for the deferred indexes [id_isActive], bucket: default, scope: _default, coll: _default
2023/01/06 21:41:41 Waiting for the index id_isActive to become active
2023/01/06 21:41:41 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:42 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:43 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:44 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:45 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:46 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:47 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:48 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:49 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:50 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:51 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:52 Waiting for index 1725302854741851002 to go active ...
2023/01/06 21:41:53 Index is 1725302854741851002 now active
2023/01/06 21:41:53 Using n1ql client
2023/01/06 21:41:53 Expected and Actual scan responses are the same
2023/01/06 21:41:56 Setting JSON docs in KV
2023/01/06 21:42:05 Using n1ql client
2023/01/06 21:42:06 Expected and Actual scan responses are the same
2023/01/06 21:42:06 Using n1ql client
2023/01/06 21:42:07 Expected and Actual scan responses are the same
--- PASS: TestMultipleDeferredIndexes_BuildOneByOne (69.28s)
=== RUN   TestDropDeferredIndexWhileOthersBuilding
2023/01/06 21:42:07 In TestDropDeferredIndexWhileOthersBuilding()
2023/01/06 21:42:07 In DropAllSecondaryIndexes()
2023/01/06 21:42:07 Index found:  id_company
2023/01/06 21:42:07 Dropped index id_company
2023/01/06 21:42:07 Index found:  id_isActive
2023/01/06 21:42:07 Dropped index id_isActive
2023/01/06 21:42:07 Index found:  id_age
2023/01/06 21:42:07 Dropped index id_age
2023/01/06 21:42:07 Index found:  id_gender
2023/01/06 21:42:07 Dropped index id_gender
2023/01/06 21:42:09 Setting JSON docs in KV
2023/01/06 21:42:25 Created the secondary index id_company. Waiting for it become active
2023/01/06 21:42:25 Index is 17319223777115578853 now active
2023/01/06 21:42:26 Build command issued for the deferred indexes [11429446416382968736 10508410198259736743]
2023/01/06 21:42:28 Dropping the secondary index id_isActive
2023/01/06 21:42:28 Index dropped
2023/01/06 21:42:28 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:29 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:30 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:31 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:32 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:33 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:34 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:35 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:36 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:37 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:38 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:39 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:40 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:41 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:42 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:43 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:44 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:45 Waiting for index 11429446416382968736 to go active ...
2023/01/06 21:42:46 Index is 11429446416382968736 now active
2023/01/06 21:42:46 Index is 10508410198259736743 now active
2023/01/06 21:42:47 Using n1ql client
2023/01/06 21:42:48 Expected and Actual scan responses are the same
2023/01/06 21:42:48 Using n1ql client
2023/01/06 21:42:48 Expected and Actual scan responses are the same
2023/01/06 21:42:50 Setting JSON docs in KV
2023/01/06 21:42:59 Using n1ql client
2023/01/06 21:43:00 Expected and Actual scan responses are the same
--- PASS: TestDropDeferredIndexWhileOthersBuilding (53.03s)
=== RUN   TestDropBuildingDeferredIndex
2023/01/06 21:43:00 In TestDropBuildingDeferredIndex()
2023/01/06 21:43:00 In DropAllSecondaryIndexes()
2023/01/06 21:43:00 Index found:  id_gender
2023/01/06 21:43:00 Dropped index id_gender
2023/01/06 21:43:00 Index found:  id_company
2023/01/06 21:43:00 Dropped index id_company
2023/01/06 21:43:00 Index found:  id_age
2023/01/06 21:43:00 Dropped index id_age
2023/01/06 21:43:03 Setting JSON docs in KV
2023/01/06 21:43:09 Build command issued for the deferred indexes [9008350497984290021 7979692115931480019]
2023/01/06 21:43:10 Dropping the secondary index id_age
2023/01/06 21:43:10 Index dropped
2023/01/06 21:43:10 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:11 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:12 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:13 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:14 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:15 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:16 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:17 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:18 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:19 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:20 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:21 Waiting for index 9008350497984290021 to go active ...
2023/01/06 21:43:22 Index is 9008350497984290021 now active
2023/01/06 21:43:22 Build command issued for the deferred indexes [id_gender], bucket: default, scope: _default, coll: _default
2023/01/06 21:43:22 Waiting for the index id_gender to become active
2023/01/06 21:43:22 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:23 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:24 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:25 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:26 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:27 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:28 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:29 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:30 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:31 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:32 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:33 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:34 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:35 Waiting for index 11270360983595853506 to go active ...
2023/01/06 21:43:36 Index is 11270360983595853506 now active
2023/01/06 21:43:37 Using n1ql client
2023/01/06 21:43:37 Expected and Actual scan responses are the same
2023/01/06 21:43:38 Using n1ql client
2023/01/06 21:43:38 Expected and Actual scan responses are the same
2023/01/06 21:43:40 Setting JSON docs in KV
2023/01/06 21:43:49 Using n1ql client
2023/01/06 21:43:50 Expected and Actual scan responses are the same
--- PASS: TestDropBuildingDeferredIndex (50.27s)
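TestDropBuildingDeferredIndex above issues one build command for two deferred indexes, then drops one of them while the build is still in flight and waits for the survivor. The control flow is roughly the following sketch; `buildFn` and `dropFn` are hypothetical stand-ins, not the harness API:

```go
package main

import (
	"fmt"
	"time"
)

// dropWhileBuilding starts a deferred build asynchronously, then drops one
// of the building indexes while the build proceeds in the background.
func dropWhileBuilding(ids []uint64, dropID uint64, buildFn func([]uint64), dropFn func(uint64) error) error {
	fmt.Println("Build command issued for the deferred indexes", ids)
	go buildFn(ids) // build continues concurrently
	time.Sleep(10 * time.Millisecond)
	fmt.Printf("Dropping the secondary index %d\n", dropID)
	return dropFn(dropID)
}

func main() {
	built := make(chan uint64, 2)
	buildFn := func(ids []uint64) {
		for _, id := range ids {
			built <- id // signal each index as it finishes building
		}
	}
	dropFn := func(id uint64) error {
		fmt.Println("Index dropped")
		return nil
	}
	if err := dropWhileBuilding([]uint64{9008350497984290021, 7979692115931480019}, 7979692115931480019, buildFn, dropFn); err != nil {
		panic(err)
	}
	// The index that was not dropped still completes its build.
	fmt.Printf("Index is %d now active\n", <-built)
}
```

The point of the test is exactly this interleaving: a drop issued mid-build must not wedge the remaining index's build.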
=== RUN   TestDropMultipleBuildingDeferredIndexes
2023/01/06 21:43:50 In TestDropMultipleBuildingDeferredIndexes()
2023/01/06 21:43:50 In DropAllSecondaryIndexes()
2023/01/06 21:43:50 Index found:  id_gender
2023/01/06 21:43:50 Dropped index id_gender
2023/01/06 21:43:50 Index found:  id_company
2023/01/06 21:43:50 Dropped index id_company
2023/01/06 21:43:57 Setting JSON docs in KV
2023/01/06 21:44:32 Created the secondary index id_company. Waiting for it to become active

2023/01/06 21:44:32 Index is 8543189503533952927 now active
2023/01/06 21:44:33 Build command issued for the deferred indexes [727967906892696406 8381154144756329988]
2023/01/06 21:44:34 Dropping the secondary index id_age
2023/01/06 21:44:34 Index dropped
2023/01/06 21:44:34 Dropping the secondary index id_gender
2023/01/06 21:44:49 Index dropped
2023/01/06 21:44:49 Build command issued for the deferred indexes [id_isActive], bucket: default, scope: _default, coll: _default
2023/01/06 21:44:49 Waiting for the index id_isActive to become active
2023/01/06 21:44:49 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:50 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:51 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:52 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:53 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:54 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:55 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:56 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:57 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:58 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:44:59 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:00 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:01 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:02 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:03 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:04 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:05 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:06 Waiting for index 14487011408208072799 to go active ...
2023/01/06 21:45:07 Index is 14487011408208072799 now active
2023/01/06 21:45:17 Using n1ql client
2023/01/06 21:45:18 Expected and Actual scan responses are the same
2023/01/06 21:45:18 Number of docScanResults and scanResults = 180000 and 180000
2023/01/06 21:45:19 Using n1ql client
2023/01/06 21:45:20 Expected and Actual scan responses are the same
2023/01/06 21:45:20 Number of docScanResults and scanResults = 180000 and 180000
--- PASS: TestDropMultipleBuildingDeferredIndexes (90.54s)
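Each "Expected and Actual scan responses are the same" line is an equality check between a reference result computed from the raw documents (docScanResults) and what the index scan returned (scanResults). A minimal sketch of such a comparison, using a hypothetical map-of-docID-to-value representation rather than the harness's actual types:

```go
package main

import (
	"fmt"
	"reflect"
)

// compareScans verifies the index scan returned exactly the entries
// computed document-side: same cardinality, same keys, same values.
func compareScans(docScanResults, scanResults map[string]interface{}) error {
	if len(docScanResults) != len(scanResults) {
		return fmt.Errorf("length mismatch: %d vs %d", len(docScanResults), len(scanResults))
	}
	if !reflect.DeepEqual(docScanResults, scanResults) {
		return fmt.Errorf("scan responses differ")
	}
	return nil
}

func main() {
	expected := map[string]interface{}{"doc1": "Male", "doc2": "Female"}
	actual := map[string]interface{}{"doc1": "Male", "doc2": "Female"}
	if err := compareScans(expected, actual); err != nil {
		panic(err)
	}
	fmt.Printf("Number of docScanResults and scanResults = %d and %d\n", len(expected), len(actual))
	fmt.Println("Expected and Actual scan responses are the same")
}
```

The separate "Number of docScanResults and scanResults" line in the log reports the two cardinalities before the deep comparison, which makes size mismatches easy to spot at a glance.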
=== RUN   TestDropOneIndexSecondDeferBuilding
2023/01/06 21:45:20 In TestDropOneIndexSecondDeferBuilding()
2023/01/06 21:45:20 In DropAllSecondaryIndexes()
2023/01/06 21:45:20 Index found:  id_isActive
2023/01/06 21:45:20 Dropped index id_isActive
2023/01/06 21:45:20 Index found:  id_company
2023/01/06 21:45:21 Dropped index id_company
2023/01/06 21:45:23 Setting JSON docs in KV
2023/01/06 21:45:29 Build command issued for the deferred indexes [id_company], bucket: default, scope: _default, coll: _default
2023/01/06 21:45:29 Waiting for the index id_company to become active
2023/01/06 21:45:29 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:30 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:31 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:32 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:33 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:34 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:35 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:36 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:37 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:38 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:39 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:40 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:41 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:42 Waiting for index 12816682047145650300 to go active ...
2023/01/06 21:45:43 Index is 12816682047145650300 now active
2023/01/06 21:45:43 Build command issued for the deferred indexes [13496109731765831401]
2023/01/06 21:45:44 Dropping the secondary index id_company
2023/01/06 21:45:44 Index dropped
2023/01/06 21:45:49 Setting JSON docs in KV
2023/01/06 21:46:07 Index is 13496109731765831401 now active
2023/01/06 21:46:08 Build command issued for the deferred indexes [id_gender], bucket: default, scope: _default, coll: _default
2023/01/06 21:46:08 Waiting for the index id_gender to become active
2023/01/06 21:46:08 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:09 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:10 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:11 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:12 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:13 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:14 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:15 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:16 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:17 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:18 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:19 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:20 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:21 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:22 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:23 Waiting for index 583780031780217596 to go active ...
2023/01/06 21:46:24 Index is 583780031780217596 now active
2023/01/06 21:46:24 Using n1ql client
2023/01/06 21:46:24 Expected and Actual scan responses are the same
2023/01/06 21:46:24 Using n1ql client
2023/01/06 21:46:25 Expected and Actual scan responses are the same
--- PASS: TestDropOneIndexSecondDeferBuilding (64.91s)
=== RUN   TestDropSecondIndexSecondDeferBuilding
2023/01/06 21:46:25 In TestDropSecondIndexSecondDeferBuilding()
2023/01/06 21:46:25 In DropAllSecondaryIndexes()
2023/01/06 21:46:25 Index found:  id_age
2023/01/06 21:46:26 Dropped index id_age
2023/01/06 21:46:26 Index found:  id_gender
2023/01/06 21:46:26 Dropped index id_gender
2023/01/06 21:46:29 Setting JSON docs in KV
2023/01/06 21:46:36 Build command issued for the deferred indexes [id_company], bucket: default, scope: _default, coll: _default
2023/01/06 21:46:36 Waiting for the index id_company to become active
2023/01/06 21:46:36 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:37 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:38 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:39 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:40 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:41 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:42 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:43 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:44 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:45 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:46 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:47 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:48 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:49 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:50 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:51 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:52 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:53 Waiting for index 10466099011074279695 to go active ...
2023/01/06 21:46:54 Index is 10466099011074279695 now active
2023/01/06 21:46:55 Build command issued for the deferred indexes [13820034606865232141]
2023/01/06 21:46:56 Dropping the secondary index id_age
2023/01/06 21:46:56 Index dropped
2023/01/06 21:46:59 Setting JSON docs in KV
2023/01/06 21:47:08 Build command issued for the deferred indexes [id_gender], bucket: default, scope: _default, coll: _default
2023/01/06 21:47:08 Waiting for the index id_gender to become active
2023/01/06 21:47:08 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:09 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:10 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:11 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:12 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:13 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:14 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:15 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:16 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:17 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:18 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:19 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:20 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:21 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:22 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:23 Waiting for index 14737514126759883333 to go active ...
2023/01/06 21:47:24 Index is 14737514126759883333 now active
2023/01/06 21:47:25 Using n1ql client
2023/01/06 21:47:25 Expected and Actual scan responses are the same
2023/01/06 21:47:25 Using n1ql client
2023/01/06 21:47:27 Expected and Actual scan responses are the same
--- PASS: TestDropSecondIndexSecondDeferBuilding (61.37s)
=== RUN   TestCreateAfterDropWhileIndexBuilding
2023/01/06 21:47:27 In TestCreateAfterDropWhileIndexBuilding()
2023/01/06 21:47:27 In DropAllSecondaryIndexes()
2023/01/06 21:47:27 Index found:  id_company
2023/01/06 21:47:27 Dropped index id_company
2023/01/06 21:47:27 Index found:  id_gender
2023/01/06 21:47:27 Dropped index id_gender
2023/01/06 21:47:50 Setting JSON docs in KV
2023/01/06 21:48:53 Build command issued for the deferred indexes [13514342910330965112]
2023/01/06 21:48:54 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:48:55 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:48:56 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:48:57 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:48:58 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:48:59 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:00 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:01 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:02 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:03 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:04 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:05 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:06 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:07 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:08 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:09 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:10 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:11 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:12 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:13 Waiting for index 13514342910330965112 to go active ...
2023/01/06 21:49:14 Index is 13514342910330965112 now active
2023/01/06 21:49:14 Build command issued for the deferred indexes [16956724482058445035]
2023/01/06 21:49:15 Dropping the secondary index id_company
2023/01/06 21:49:15 Index dropped
2023/01/06 21:49:15 Dropping the secondary index id_age
2023/01/06 21:49:16 Index dropped
2023/01/06 21:49:23 Build command issued for the deferred indexes [id_gender], bucket: default, scope: _default, coll: _default
2023/01/06 21:49:23 Waiting for the index id_gender to become active
2023/01/06 21:49:23 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:24 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:25 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:26 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:27 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:28 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:29 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:30 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:31 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:32 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:33 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:34 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:35 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:36 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:37 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:38 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:39 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:40 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:41 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:42 Waiting for index 10499249940826887135 to go active ...
2023/01/06 21:49:43 Index is 10499249940826887135 now active
2023/01/06 21:49:44 Index is 10499249940826887135 now active
2023/01/06 21:49:44 Using n1ql client
2023/01/06 21:49:46 Expected and Actual scan responses are the same
--- PASS: TestCreateAfterDropWhileIndexBuilding (139.55s)
=== RUN   TestDropBuildingIndex1
2023/01/06 21:49:46 In TestDropBuildingIndex1()
2023/01/06 21:49:46 In DropAllSecondaryIndexes()
2023/01/06 21:49:46 Index found:  id_gender
2023/01/06 21:49:46 Dropped index id_gender
2023/01/06 21:49:51 Setting JSON docs in KV
2023/01/06 21:50:25 Created the secondary index id_company. Waiting for it to become active
2023/01/06 21:50:25 Index is 7164613955802927130 now active
2023/01/06 21:50:50 Dropping the secondary index id_age
2023/01/06 21:50:50 Index dropped
2023/01/06 21:51:13 Created the secondary index id_age. Waiting for it to become active
2023/01/06 21:51:13 Index is 10905146921741868698 now active
2023/01/06 21:51:15 Setting JSON docs in KV
2023/01/06 21:51:24 Using n1ql client
2023/01/06 21:51:24 Expected and Actual scan responses are the same
2023/01/06 21:51:25 Using n1ql client
2023/01/06 21:51:25 Expected and Actual scan responses are the same
--- PASS: TestDropBuildingIndex1 (98.99s)
=== RUN   TestDropBuildingIndex2
2023/01/06 21:51:25 In TestDropBuildingIndex2()
2023/01/06 21:51:25 In DropAllSecondaryIndexes()
2023/01/06 21:51:25 Index found:  id_company
2023/01/06 21:51:25 Dropped index id_company
2023/01/06 21:51:25 Index found:  id_age
2023/01/06 21:51:26 Dropped index id_age
2023/01/06 21:51:30 Setting JSON docs in KV
2023/01/06 21:52:08 Created the secondary index id_company. Waiting for it to become active
2023/01/06 21:52:08 Index is 5610776829011107713 now active
2023/01/06 21:52:36 Dropping the secondary index id_company
2023/01/06 21:52:36 Index dropped
2023/01/06 21:52:36 Index is 3242152527949839887 now active
2023/01/06 21:52:58 Created the secondary index id_company. Waiting for it to become active
2023/01/06 21:52:58 Index is 9423679674461295504 now active
2023/01/06 21:53:02 Setting JSON docs in KV
2023/01/06 21:53:11 Using n1ql client
2023/01/06 21:53:12 Expected and Actual scan responses are the same
2023/01/06 21:53:12 Using n1ql client
2023/01/06 21:53:12 Expected and Actual scan responses are the same
--- PASS: TestDropBuildingIndex2 (107.17s)
=== RUN   TestDropIndexWithDataLoad
2023/01/06 21:53:12 In TestDropIndexWithDataLoad()
2023/01/06 21:53:12 In DropAllSecondaryIndexes()
2023/01/06 21:53:12 Index found:  id_company
2023/01/06 21:53:12 Dropped index id_company
2023/01/06 21:53:12 Index found:  id_age
2023/01/06 21:53:13 Dropped index id_age
2023/01/06 21:53:15 Setting JSON docs in KV
2023/01/06 21:53:43 Created the secondary index id_company. Waiting for it to become active
2023/01/06 21:53:43 Index is 313532972182690571 now active
2023/01/06 21:54:10 Created the secondary index id_age. Waiting for it to become active
2023/01/06 21:54:10 Index is 1239743853726902984 now active
2023/01/06 21:54:34 Created the secondary index id_gender. Waiting for it to become active
2023/01/06 21:54:34 Index is 5337837578398165013 now active
2023/01/06 21:54:58 Created the secondary index id_isActive. Waiting for it to become active
2023/01/06 21:54:58 Index is 10035186971706279635 now active
2023/01/06 21:55:07 Setting JSON docs in KV
2023/01/06 21:55:07 In LoadKVBucket
2023/01/06 21:55:07 Bucket name = default
2023/01/06 21:55:07 In DropIndexWhileKVLoad
2023/01/06 21:55:08 Dropping the secondary index id_company
2023/01/06 21:55:09 Index dropped
2023/01/06 21:55:35 Using n1ql client
2023/01/06 21:55:36 Expected and Actual scan responses are the same
2023/01/06 21:55:36 Number of docScanResults and scanResults = 96712 and 96712
2023/01/06 21:55:36 Using n1ql client
2023/01/06 21:55:40 Expected and Actual scan responses are the same
2023/01/06 21:55:40 Number of docScanResults and scanResults = 420000 and 420000
--- PASS: TestDropIndexWithDataLoad (147.44s)
=== RUN   TestDropAllIndexesWithDataLoad
2023/01/06 21:55:40 In TestDropAllIndexesWithDataLoad()
2023/01/06 21:55:40 In DropAllSecondaryIndexes()
2023/01/06 21:55:40 Index found:  id_age
2023/01/06 21:55:40 Dropped index id_age
2023/01/06 21:55:40 Index found:  id_isActive
2023/01/06 21:55:40 Dropped index id_isActive
2023/01/06 21:55:40 Index found:  id_gender
2023/01/06 21:55:40 Dropped index id_gender
2023/01/06 21:55:42 Setting JSON docs in KV
2023/01/06 21:56:17 Created the secondary index id_company. Waiting for it to become active
2023/01/06 21:56:17 Index is 10388786764959447313 now active
2023/01/06 21:56:43 Created the secondary index id_age. Waiting for it to become active
2023/01/06 21:56:43 Index is 13007855975171977595 now active
2023/01/06 21:57:10 Created the secondary index id_gender. Waiting for it to become active
2023/01/06 21:57:10 Index is 6047463532739609907 now active
2023/01/06 21:57:36 Created the secondary index id_isActive. Waiting for it to become active
2023/01/06 21:57:36 Index is 15371742707956327950 now active
2023/01/06 21:57:44 Setting JSON docs in KV
2023/01/06 21:57:44 In LoadKVBucket
2023/01/06 21:57:44 Bucket name = default
2023/01/06 21:57:44 In DropIndexWhileKVLoad
2023/01/06 21:57:44 In DropIndexWhileKVLoad
2023/01/06 21:57:44 In DropIndexWhileKVLoad
2023/01/06 21:57:44 In DropIndexWhileKVLoad
2023/01/06 21:57:45 Dropping the secondary index id_company
2023/01/06 21:57:45 Dropping the secondary index id_isActive
2023/01/06 21:57:45 Dropping the secondary index id_gender
2023/01/06 21:57:45 Dropping the secondary index id_age
2023/01/06 21:57:45 Index dropped
2023/01/06 21:57:45 Index dropped
2023/01/06 21:57:46 Index dropped
2023/01/06 21:57:46 Index dropped
2023/01/06 21:58:04 Using n1ql client
2023/01/06 21:58:04 Scan failed as expected with error: Index Not Found - cause: GSI index id_company not found.
--- PASS: TestDropAllIndexesWithDataLoad (144.42s)
=== RUN   TestCreateBucket_AnotherIndexBuilding
2023/01/06 21:58:04 In TestCreateBucket_AnotherIndexBuilding()
2023/01/06 21:58:04 In DropAllSecondaryIndexes()
2023/01/06 21:58:43 Flushed the bucket default, Response body: 
2023/01/06 21:58:47 Modified parameters of bucket default, responseBody: 
2023/01/06 21:58:47 http://127.0.0.1:9000/pools/default/buckets/multi_buck2
2023/01/06 21:58:47 &{DELETE http://127.0.0.1:9000/pools/default/buckets/multi_buck2 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 21:58:47 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:28:46 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc0303b9200 31 [] false false map[] 0xc00f3cf500 }
2023/01/06 21:58:47 DeleteBucket failed for bucket multi_buck2 
2023/01/06 21:58:47 Deleted bucket multi_buck2, responseBody: Requested resource not found.
2023/01/06 21:59:02 Setting JSON docs in KV
2023/01/06 22:01:15 Created bucket multi_buck2, responseBody: 
2023/01/06 22:01:39 Index is 14670210309560116397 now active
2023/01/06 22:01:39 Index is 12502841913584658781 now active
2023/01/06 22:01:39 Using n1ql client
2023-01-06T22:01:39.260+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:01:39.260+05:30 [Info] GSIC[default/multi_buck2-_default-_default-1673022699253827549] started ...
2023/01/06 22:01:39 Expected and Actual scan responses are the same
2023/01/06 22:01:39 Number of docScanResults and scanResults = 10000 and 10000
2023/01/06 22:01:39 Using n1ql client
2023/01/06 22:01:42 Expected and Actual scan responses are the same
2023/01/06 22:01:42 Number of docScanResults and scanResults = 200000 and 200000
2023/01/06 22:01:45 Deleted bucket multi_buck2, responseBody: 
2023/01/06 22:02:22 Flushed the bucket default, Response body: 
--- PASS: TestCreateBucket_AnotherIndexBuilding (257.69s)
=== RUN   TestDropBucket2Index_Bucket1IndexBuilding
2023/01/06 22:02:22 In TestDropBucket2Index_Bucket1IndexBuilding()
2023/01/06 22:02:22 In DropAllSecondaryIndexes()
2023/01/06 22:02:22 Index found:  buck1_idx
2023/01/06 22:02:22 Dropped index buck1_idx
2023/01/06 22:03:00 Flushed the bucket default, Response body: 
2023/01/06 22:03:03 Modified parameters of bucket default, responseBody: 
2023/01/06 22:03:03 http://127.0.0.1:9000/pools/default/buckets/multibucket_test3
2023/01/06 22:03:03 &{DELETE http://127.0.0.1:9000/pools/default/buckets/multibucket_test3 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 22:03:03 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:33:02 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc016cd41c0 31 [] false false map[] 0xc00f178100 }
2023/01/06 22:03:03 DeleteBucket failed for bucket multibucket_test3 
2023/01/06 22:03:03 Deleted bucket multibucket_test3, responseBody: Requested resource not found.
2023/01/06 22:03:03 Created bucket multibucket_test3, responseBody: 
2023/01/06 22:03:18 Setting JSON docs in KV
2023/01/06 22:04:23 Created the secondary index buck2_idx. Waiting for it to become active
2023/01/06 22:04:23 Index is 7292059343348968244 now active
2023/01/06 22:04:37 Dropping the secondary index buck2_idx
2023/01/06 22:04:37 Index dropped
2023/01/06 22:04:37 Index is 9561738248043065376 now active
2023/01/06 22:04:37 Using n1ql client
2023/01/06 22:04:39 Expected and Actual scan responses are the same
2023/01/06 22:04:39 Number of docScanResults and scanResults = 100000 and 100000
2023/01/06 22:04:40 Deleted bucket multibucket_test3, responseBody: 
2023/01/06 22:05:18 Flushed the bucket default, Response body: 
--- PASS: TestDropBucket2Index_Bucket1IndexBuilding (176.03s)
=== RUN   TestDeleteBucketWhileInitialIndexBuild
2023/01/06 22:05:18 In TestDeleteBucketWhileInitialIndexBuild()
2023/01/06 22:05:18 ============== DBG: Drop all indexes in all buckets
2023/01/06 22:05:18 In DropAllSecondaryIndexes()
2023/01/06 22:05:18 Index found:  buck1_idx
2023/01/06 22:05:18 Dropped index buck1_idx
2023/01/06 22:05:18 ============== DBG: Delete bucket default
2023/01/06 22:05:20 Deleted bucket default, responseBody: 
2023/01/06 22:05:20 ============== DBG: Create bucket default
2023/01/06 22:05:20 Created bucket default, responseBody: 
2023/01/06 22:05:23 Flush Enabled on bucket default, responseBody: 
2023/01/06 22:05:56 Flushed the bucket default, Response body: 
2023/01/06 22:05:56 ============== DBG: Delete bucket testbucket2
2023/01/06 22:05:56 http://127.0.0.1:9000/pools/default/buckets/testbucket2
2023/01/06 22:05:56 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket2 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 22:05:56 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:35:55 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc004ced180 31 [] false false map[] 0xc00228a000 }
2023/01/06 22:05:56 DeleteBucket failed for bucket testbucket2 
2023/01/06 22:05:56 Deleted bucket testbucket2, responseBody: Requested resource not found.
2023/01/06 22:05:56 ============== DBG: Create bucket testbucket2
2023/01/06 22:05:57 Created bucket testbucket2, responseBody: 
2023/01/06 22:06:00 Flush Enabled on bucket testbucket2, responseBody: 
2023/01/06 22:06:34 Flushed the bucket testbucket2, Response body: 
2023/01/06 22:06:34 ============== DBG: Delete bucket testbucket3
2023/01/06 22:06:34 http://127.0.0.1:9000/pools/default/buckets/testbucket3
2023/01/06 22:06:34 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket3 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 22:06:34 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:36:33 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc00027ff80 31 [] false false map[] 0xc009858100 }
2023/01/06 22:06:34 DeleteBucket failed for bucket testbucket3 
2023/01/06 22:06:34 Deleted bucket testbucket3, responseBody: Requested resource not found.
2023/01/06 22:06:34 ============== DBG: Create bucket testbucket3
2023/01/06 22:06:34 Created bucket testbucket3, responseBody: 
2023/01/06 22:06:37 Flush Enabled on bucket testbucket3, responseBody: 
2023/01/06 22:07:10 Flushed the bucket testbucket3, Response body: 
2023/01/06 22:07:10 ============== DBG: Delete bucket testbucket4
2023/01/06 22:07:10 http://127.0.0.1:9000/pools/default/buckets/testbucket4
2023/01/06 22:07:10 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket4 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 22:07:10 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:37:09 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc016011c00 31 [] false false map[] 0xc00f178500 }
2023/01/06 22:07:10 DeleteBucket failed for bucket testbucket4 
2023/01/06 22:07:10 Deleted bucket testbucket4, responseBody: Requested resource not found.
2023/01/06 22:07:10 ============== DBG: Create bucket testbucket4
2023/01/06 22:07:10 Created bucket testbucket4, responseBody: 
2023/01/06 22:07:14 Flush Enabled on bucket testbucket4, responseBody: 
2023/01/06 22:07:47 Flushed the bucket testbucket4, Response body: 
2023/01/06 22:08:02 Generating docs and Populating all the buckets
2023/01/06 22:08:02 ============== DBG: Creating docs in bucket default
2023/01/06 22:08:03 ============== DBG: Creating index bucket1_age in bucket default
2023/01/06 22:08:07 Created the secondary index bucket1_age. Waiting for it to become active
2023/01/06 22:08:07 Index is 4469654760312612436 now active
2023/01/06 22:08:07 ============== DBG: Creating index bucket1_gender in bucket default
2023/01/06 22:08:14 Created the secondary index bucket1_gender. Waiting for it to become active
2023/01/06 22:08:14 Index is 7554345238142884608 now active
2023/01/06 22:08:14 ============== DBG: Creating docs in bucket testbucket2
2023/01/06 22:08:15 ============== DBG: Creating index bucket2_city in bucket testbucket2
2023/01/06 22:08:20 Created the secondary index bucket2_city. Waiting for it to become active
2023/01/06 22:08:20 Index is 8229190811607547746 now active
2023/01/06 22:08:20 ============== DBG: Creating index bucket2_company in bucket testbucket2
2023/01/06 22:08:27 Created the secondary index bucket2_company. Waiting for it to become active
2023/01/06 22:08:27 Index is 9199175854010038825 now active
2023/01/06 22:08:27 ============== DBG: Creating docs in bucket testbucket3
2023/01/06 22:08:28 ============== DBG: Creating index bucket3_gender in bucket testbucket3
2023/01/06 22:08:33 Created the secondary index bucket3_gender. Waiting for it to become active
2023/01/06 22:08:33 Index is 9746683644571971999 now active
2023/01/06 22:08:33 ============== DBG: Creating index bucket3_address in bucket testbucket3
2023/01/06 22:08:40 Created the secondary index bucket3_address. Waiting for it to become active
2023/01/06 22:08:40 Index is 8868460811107419557 now active
2023/01/06 22:08:40 ============== DBG: First bucket scan:: Scanning index bucket1_age in bucket default
2023/01/06 22:08:40 Using n1ql client
2023-01-06T22:08:40.034+05:30 [Info] metadata provider version changed 1292 -> 1293
2023-01-06T22:08:40.034+05:30 [Info] switched currmeta from 1292 -> 1293 force false 
2023-01-06T22:08:40.035+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:08:40.036+05:30 [Info] GSIC[default/default-_default-_default-1673023120021528959] started ...
2023/01/06 22:08:40 ============== DBG: First bucket scan:: Expected results = 294 Actual results = 294
2023/01/06 22:08:40 Expected and Actual scan responses are the same
2023/01/06 22:08:51 ============== DBG: Creating 50K docs in bucket testbucket4
2023/01/06 22:09:20 ============== DBG: Creating index bucket4_balance asynchronously in bucket testbucket4
2023/01/06 22:09:31 ============== DBG: Deleting bucket testbucket4
2023/01/06 22:09:33 Deleted bucket testbucket4, responseBody: 
2023/01/06 22:09:33 ============== DBG: First bucket scan:: Scanning index bucket1_age in bucket default
2023/01/06 22:09:33 Using n1ql client
2023/01/06 22:09:33 ============== DBG: First bucket scan:: Expected results = 294 Actual results = 294
2023/01/06 22:09:33 Expected and Actual scan responses are the same
2023/01/06 22:09:33 ============== DBG: Second bucket scan:: Scanning index bucket2_city in bucket testbucket2
2023/01/06 22:09:33 Using n1ql client
2023-01-06T22:09:33.624+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:09:33.625+05:30 [Info] GSIC[default/testbucket2-_default-_default-1673023173622314636] started ...
2023/01/06 22:09:33 ============== DBG: Second bucket scan:: Expected results = 392 Actual results = 392
2023/01/06 22:09:33 Expected and Actual scan responses are the same
2023/01/06 22:09:33 ============== DBG: Third bucket scan:: Scanning index bucket3_gender in bucket testbucket3
2023/01/06 22:09:33 Using n1ql client
2023-01-06T22:09:33.634+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:09:33.635+05:30 [Info] GSIC[default/testbucket3-_default-_default-1673023173632085846] started ...
2023/01/06 22:09:33 ============== DBG: Third bucket scan:: Expected results = 492 Actual results = 492
2023/01/06 22:09:33 Expected and Actual scan responses are the same
2023/01/06 22:09:33 ============== DBG: Deleting buckets testbucket2 testbucket3 testbucket4
2023/01/06 22:09:35 Deleted bucket testbucket2, responseBody: 
2023/01/06 22:09:38 Deleted bucket testbucket3, responseBody: 
2023/01/06 22:09:38 http://127.0.0.1:9000/pools/default/buckets/testbucket4
2023/01/06 22:09:38 &{DELETE http://127.0.0.1:9000/pools/default/buckets/testbucket4 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc00013c000}
2023/01/06 22:09:38 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 16:39:37 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc01187f840 31 [] false false map[] 0xc011ada800 }
2023/01/06 22:09:38 DeleteBucket failed for bucket testbucket4 
2023/01/06 22:09:38 Deleted bucket testbucket4, responseBody: Requested resource not found.
2023/01/06 22:09:41 Modified parameters of bucket default, responseBody: 
--- PASS: TestDeleteBucketWhileInitialIndexBuild (277.87s)
=== RUN   TestWherClause_UpdateDocument
2023/01/06 22:09:56 In TestWherClause_UpdateDocument()
2023/01/06 22:09:56 In DropAllSecondaryIndexes()
2023/01/06 22:09:56 Index found:  bucket1_gender
2023/01/06 22:09:56 Dropped index bucket1_gender
2023/01/06 22:09:56 Index found:  bucket1_age
2023/01/06 22:09:56 Dropped index bucket1_age
2023/01/06 22:10:35 Flushed the bucket default, Response body: 
2023/01/06 22:10:37 Setting JSON docs in KV
2023/01/06 22:10:46 Created the secondary index id_ageGreaterThan40. Waiting for it to become active
2023/01/06 22:10:46 Index is 7675045217500741752 now active
2023/01/06 22:10:46 Using n1ql client
2023/01/06 22:10:46 Expected and Actual scan responses are the same
2023/01/06 22:10:46 Number of docScanResults and scanResults = 5981 and 5981
2023/01/06 22:10:51 Using n1ql client
2023/01/06 22:10:51 Expected and Actual scan responses are the same
2023/01/06 22:10:51 Number of docScanResults and scanResults = 1981 and 1981
--- PASS: TestWherClause_UpdateDocument (55.64s)
=== RUN   TestDeferFalse
2023/01/06 22:10:51 In TestDeferFalse()
2023/01/06 22:10:54 Setting JSON docs in KV
2023/01/06 22:11:09 Created the secondary index index_deferfalse1. Waiting for it to become active
2023/01/06 22:11:09 Index is 7195797318786577453 now active
2023/01/06 22:11:09 Using n1ql client
2023/01/06 22:11:09 Expected and Actual scan responses are the same
--- PASS: TestDeferFalse (17.16s)
=== RUN   TestDeferFalse_CloseClientConnection
2023/01/06 22:11:09 In TestDeferFalse_CloseClientConnection()
2023/01/06 22:11:09 In CloseClientThread
2023/01/06 22:11:09 In CreateIndexThread
2023/01/06 22:11:11 Create Index call failed as expected due to error : Terminate Request due to client termination
2023-01-06T22:11:11.198+05:30 [Error] PeerPipe.doRecieve() : encounter error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:48204->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:11:11 Waiting for index 13878874273911047668 to go active ...
2023/01/06 22:11:12 Waiting for index 13878874273911047668 to go active ...
2023/01/06 22:11:13 Waiting for index 13878874273911047668 to go active ...
2023/01/06 22:11:14 Waiting for index 13878874273911047668 to go active ...
2023/01/06 22:11:15 Waiting for index 13878874273911047668 to go active ...
2023/01/06 22:11:16 Waiting for index 13878874273911047668 to go active ...
2023/01/06 22:11:17 Index is 13878874273911047668 now active
2023/01/06 22:11:17 Using n1ql client
2023/01/06 22:11:17 Expected and Actual scan responses are the same
--- PASS: TestDeferFalse_CloseClientConnection (8.14s)
=== RUN   TestOrphanIndexCleanup
2023-01-06T22:11:17.276+05:30 [Error] PeerPipe.doRecieve() : encounter error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:48264->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:11:17 In DropAllSecondaryIndexes()
2023/01/06 22:11:17 Index found:  id_ageGreaterThan40
2023/01/06 22:11:17 Dropped index id_ageGreaterThan40
2023/01/06 22:11:17 Index found:  index_deferfalse1
2023/01/06 22:11:17 Dropped index index_deferfalse1
2023/01/06 22:11:17 Index found:  index_deferfalse2
2023/01/06 22:11:17 Dropped index index_deferfalse2
2023/01/06 22:11:31 Created the secondary index idx1_age_regular. Waiting for it to become active
2023/01/06 22:11:31 Index is 17080059202470478774 now active
2023/01/06 22:11:39 Created the secondary index idx2_company_regular. Waiting for it to become active
2023/01/06 22:11:39 Index is 9981937952728769294 now active
2023/01/06 22:11:49 Using n1ql client
2023/01/06 22:11:49 Query on idx1_age_regular is successful
2023/01/06 22:11:49 Using n1ql client
2023/01/06 22:11:49 Query on idx2_company_regular is successful
Restarting indexer process ...
2023/01/06 22:11:49 []
2023-01-06T22:11:49.804+05:30 [Error] PeerPipe.doRecieve() : encounter error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T22:11:49.804+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T22:11:49.804+05:30 [Error] PeerPipe.doRecieve() : encounter error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T22:11:49.804+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 22:12:09 Using n1ql client
2023-01-06T22:12:09.761+05:30 [Error] transport error between 127.0.0.1:58582->127.0.0.1:9107: write tcp 127.0.0.1:58582->127.0.0.1:9107: write: broken pipe
2023-01-06T22:12:09.761+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -7054895728343728270 request transport failed `write tcp 127.0.0.1:58582->127.0.0.1:9107: write: broken pipe`
2023-01-06T22:12:09.761+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T22:12:09.761+05:30 [Error] metadataClient:PickRandom: Replicas - [13485523133548297291], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 22:12:09 Query on idx1_age_regular is successful - after indexer restart.
2023/01/06 22:12:09 Using n1ql client
2023/01/06 22:12:09 Query on idx2_company_regular is successful - after indexer restart.
--- PASS: TestOrphanIndexCleanup (52.52s)
=== RUN   TestOrphanPartitionCleanup
2023/01/06 22:12:15 Created the secondary index idx3_age_regular. Waiting for it to become active
2023/01/06 22:12:15 Index is 17031351110676484627 now active
2023/01/06 22:12:25 Using n1ql client
2023/01/06 22:12:25 Query on idx3_age_regular is successful
Restarting indexer process ...
2023/01/06 22:12:25 []
2023-01-06T22:12:25.083+05:30 [Error] PeerPipe.doRecieve() : encounter error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T22:12:25.084+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T22:12:25.084+05:30 [Error] PeerPipe.doRecieve() : encounter error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T22:12:25.084+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T22:12:36.653+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673023346043934846 1673023352649292101 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T22:12:36.654+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673023346043934846 1673023352649292101 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T22:12:41.653+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673023346043934846 1673023358649442135 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T22:12:41.653+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673023346043934846 1673023358649442135 map[]}, clientStatsPtr.Stats[bucket] 
2023/01/06 22:12:45 Using n1ql client
2023-01-06T22:12:45.053+05:30 [Error] transport error between 127.0.0.1:46866->127.0.0.1:9107: write tcp 127.0.0.1:46866->127.0.0.1:9107: write: broken pipe
2023-01-06T22:12:45.053+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -6834392844935814905 request transport failed `write tcp 127.0.0.1:46866->127.0.0.1:9107: write: broken pipe`
2023-01-06T22:12:45.054+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 0 
2023-01-06T22:12:45.054+05:30 [Error] metadataClient:PickRandom: Replicas - [5988975457664007901], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 22:12:45 Query on idx3_age_regular is successful - after indexer restart.
--- PASS: TestOrphanPartitionCleanup (35.28s)
=== RUN   TestIndexerSettings
2023/01/06 22:12:45 In TestIndexerSettings()
2023/01/06 22:12:45 Changing config key indexer.settings.max_cpu_percent to value 300
2023/01/06 22:12:45 Changing config key indexer.settings.inmemory_snapshot.interval to value 300
2023/01/06 22:12:45 Changing config key indexer.settings.persisted_snapshot.interval to value 20000
2023/01/06 22:12:45 Changing config key indexer.settings.recovery.max_rollbacks to value 3
2023/01/06 22:12:45 Changing config key indexer.settings.log_level to value error
--- PASS: TestIndexerSettings (0.61s)
=== RUN   TestRestoreDefaultSettings
2023/01/06 22:12:45 In TestIndexerSettings_RestoreDefault()
2023/01/06 22:12:45 Changing config key indexer.settings.max_cpu_percent to value 0
2023/01/06 22:12:45 Changing config key indexer.settings.inmemory_snapshot.interval to value 200
2023/01/06 22:12:46 Changing config key indexer.settings.persisted_snapshot.interval to value 5000
2023/01/06 22:12:46 Changing config key indexer.settings.recovery.max_rollbacks to value 5
2023/01/06 22:12:46 Changing config key indexer.settings.log_level to value info
--- PASS: TestRestoreDefaultSettings (0.55s)
=== RUN   TestStat_ItemsCount
2023/01/06 22:12:46 In TestStat_ItemsCount()
2023/01/06 22:12:46 In DropAllSecondaryIndexes()
2023/01/06 22:12:46 Index found:  idx1_age_regular
2023/01/06 22:12:46 Dropped index idx1_age_regular
2023/01/06 22:12:46 Index found:  idx2_company_regular
2023/01/06 22:12:46 Dropped index idx2_company_regular
2023/01/06 22:12:46 Index found:  idx3_age_regular
2023/01/06 22:12:46 Dropped index idx3_age_regular
2023/01/06 22:12:46 Emptying the default bucket
2023-01-06T22:12:46.653+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673023346043934846 1673023364649291400 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T22:12:46.653+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673023346043934846 1673023364649291400 map[]}, clientStatsPtr.Stats[bucket] 
2023/01/06 22:12:49 Flush Enabled on bucket default, responseBody: 
2023/01/06 22:13:27 Flushed the bucket default, Response body: 
2023/01/06 22:13:27 Generating JSON docs
2023/01/06 22:13:27 Setting initial JSON docs in KV
2023/01/06 22:13:32 Creating a 2i
2023/01/06 22:13:35 Created the secondary index index_test1. Waiting for it to become active
2023/01/06 22:13:35 Index is 1597018590842360131 now active
2023/01/06 22:13:40 items_count stat is 10000
--- PASS: TestStat_ItemsCount (54.75s)
=== RUN   TestRangeArrayIndex_Distinct
2023/01/06 22:13:40 In TestRangeArrayIndex_Distinct()
2023/01/06 22:13:40 In DropAllSecondaryIndexes()
2023/01/06 22:13:40 Index found:  index_test1
2023/01/06 22:13:41 Dropped index index_test1
2023/01/06 22:14:18 Flushed the bucket default, Response body: 
2023/01/06 22:14:22 Created the secondary index arridx_friends. Waiting for it to become active
2023/01/06 22:14:22 Index is 10908080920641336755 now active
2023-01-06T22:14:22.898+05:30 [Error] transport error between 127.0.0.1:43336->127.0.0.1:9107: write tcp 127.0.0.1:43336->127.0.0.1:9107: write: broken pipe
2023-01-06T22:14:22.898+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:43336->127.0.0.1:9107: write: broken pipe`
2023-01-06T22:14:22.898+05:30 [Warn] scan failed: requestId  queryport 127.0.0.1:9107 inst 8279543823291131830 partition [0]
2023-01-06T22:14:22.899+05:30 [Warn] Scan failed with error for index 10908080920641336755.  Trying scan again with replica, reqId: :  write tcp 127.0.0.1:43336->127.0.0.1:9107: write: broken pipe from [127.0.0.1:9107] ...
2023-01-06T22:14:22.899+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T22:14:22.899+05:30 [Error] metadataClient:PickRandom: Replicas - [8279543823291131830], PrunedReplica - map[], FilteredReplica map[]
2023-01-06T22:14:22.899+05:30 [Warn] Fail to find indexers to satisfy query request.  Trying scan again for index 10908080920641336755, reqId: :  write tcp 127.0.0.1:43336->127.0.0.1:9107: write: broken pipe from [127.0.0.1:9107] ...
2023/01/06 22:14:23 Expected and Actual scan responses are the same
2023/01/06 22:14:25 Expected and Actual scan responses are the same
--- PASS: TestRangeArrayIndex_Distinct (44.82s)
=== RUN   TestUpdateArrayIndex_Distinct
2023/01/06 22:14:25 In TestUpdateArrayIndex_Distinct()
2023/01/06 22:14:25 In DropAllSecondaryIndexes()
2023/01/06 22:14:25 Index found:  arridx_friends
2023/01/06 22:14:25 Dropped index arridx_friends
2023/01/06 22:15:04 Flushed the bucket default, Response body: 
2023/01/06 22:15:08 Created the secondary index arridx_friends. Waiting for it to become active
2023/01/06 22:15:08 Index is 9400544020957763890 now active
2023/01/06 22:15:08 Expected and Actual scan responses are the same
2023/01/06 22:15:11 Expected and Actual scan responses are the same
2023/01/06 22:15:12 Expected and Actual scan responses are the same
--- PASS: TestUpdateArrayIndex_Distinct (46.32s)
=== RUN   TestRangeArrayIndex_Duplicate
2023/01/06 22:15:12 In TestRangeArrayIndex_Duplicate()
2023/01/06 22:15:12 In DropAllSecondaryIndexes()
2023/01/06 22:15:12 Index found:  arridx_friends
2023/01/06 22:15:12 Dropped index arridx_friends
2023/01/06 22:15:49 Flushed the bucket default, Response body: 
2023/01/06 22:15:53 Created the secondary index arridx_friends. Waiting for it to become active
2023/01/06 22:15:53 Index is 13931504826805469787 now active
2023/01/06 22:15:53 Expected and Actual scan responses are the same
2023/01/06 22:15:56 Expected and Actual scan responses are the same
--- PASS: TestRangeArrayIndex_Duplicate (44.35s)
=== RUN   TestUpdateArrayIndex_Duplicate
2023/01/06 22:15:56 In TestUpdateArrayIndex_Duplicate()
2023/01/06 22:15:56 In DropAllSecondaryIndexes()
2023/01/06 22:15:56 Index found:  arridx_friends
2023/01/06 22:15:56 Dropped index arridx_friends
2023/01/06 22:16:34 Flushed the bucket default, Response body: 
2023/01/06 22:16:38 Created the secondary index arridx_friends. Waiting for it to become active
2023/01/06 22:16:38 Index is 4263291751946083482 now active
2023/01/06 22:16:38 Expected and Actual scan responses are the same
2023/01/06 22:16:41 Expected and Actual scan responses are the same
2023/01/06 22:16:41 Expected and Actual scan responses are the same
--- PASS: TestUpdateArrayIndex_Duplicate (45.26s)
=== RUN   TestArrayIndexCornerCases
2023/01/06 22:16:41 In TestArrayIndexCornerCases()
2023/01/06 22:16:45 Created the secondary index arr_single. Waiting for it to become active
2023/01/06 22:16:45 Index is 18270401265847763062 now active
2023/01/06 22:16:51 Created the secondary index arr_leading. Waiting for it to become active
2023/01/06 22:16:51 Index is 540491936649128054 now active
2023/01/06 22:16:57 Created the secondary index arr_nonleading. Waiting for it to become active
2023/01/06 22:16:57 Index is 8516456865248466281 now active
2023/01/06 22:16:57 

--------ScanAll for EMPTY array--------
2023/01/06 22:16:57 Count of scanResults is 0
2023/01/06 22:16:57 Count of scanResults is 0
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["nNBOl" Missing field or index.] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 

--------ScanAll for MISSING array--------
2023/01/06 22:16:57 Count of scanResults is 0
2023/01/06 22:16:57 Count of scanResults is 0
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["UTlej" Missing field or index.] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 

--------ScanAll for NULL array--------
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values [null] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values [null "dLHJLO"] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["dLHJLO" null] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 

--------ScanAll for SCALARVALUE array--------
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["IamScalar"] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["IamScalar" "JRJj#s"] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["JRJj#s" "IamScalar"] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 

--------ScanAll for SCALAROBJECT array--------
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values [{"1":"abc","2":"def"}] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values [{"1":"abc","2":"def"} "X0yZEt"] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
2023/01/06 22:16:57 Count of scanResults is 1
2023/01/06 22:16:57 Key: string -8359481204150631988  Value: value.Values ["X0yZEt" {"1":"abc","2":"def"}] false
2023/01/06 22:16:57 Expected and Actual scan responses are the same
--- PASS: TestArrayIndexCornerCases (16.23s)
=== RUN   TestArraySizeIncreaseDecrease1
2023/01/06 22:16:57 In TestArraySizeIncreaseDecrease1()
2023/01/06 22:16:57 In DropAllSecondaryIndexes()
2023/01/06 22:16:57 Index found:  arr_leading
2023/01/06 22:16:58 Dropped index arr_leading
2023/01/06 22:16:58 Index found:  arr_single
2023/01/06 22:16:58 Dropped index arr_single
2023/01/06 22:16:58 Index found:  arridx_friends
2023/01/06 22:16:58 Dropped index arridx_friends
2023/01/06 22:16:58 Index found:  arr_nonleading
2023/01/06 22:16:58 Dropped index arr_nonleading
2023/01/06 22:17:36 Flushed the bucket default, Response body: 
2023/01/06 22:17:36 Changing config key indexer.settings.allow_large_keys to value false
2023/01/06 22:17:37 Changing config key indexer.settings.max_seckey_size to value 100
2023/01/06 22:17:37 Changing config key indexer.settings.max_array_seckey_size to value 2000
2023/01/06 22:17:38 Start of createArrayDocs()
2023/01/06 22:17:55 End of createArrayDocs()
2023/01/06 22:17:55 Start of createArrayDocs()
2023/01/06 22:17:55 End of createArrayDocs()
2023/01/06 22:17:59 Created the secondary index arr1. Waiting for it to become active
2023/01/06 22:17:59 Index is 7636101945329797529 now active
2023/01/06 22:18:05 Created the secondary index arr2. Waiting for it to become active
2023/01/06 22:18:05 Index is 12056754986399288214 now active
2023/01/06 22:18:11 Created the secondary index idx3. Waiting for it to become active
2023/01/06 22:18:11 Index is 11110248187205462661 now active
2023/01/06 22:18:11 Using n1ql client
2023/01/06 22:18:11 Length of scanResults = 10
2023/01/06 22:18:11 Changing config key indexer.settings.max_seckey_size to value 4096
2023/01/06 22:18:11 Changing config key indexer.settings.max_array_seckey_size to value 51200
2023/01/06 22:18:16 Expected and Actual scan responses are the same
2023/01/06 22:18:16 Using n1ql client
2023/01/06 22:18:16 Expected and Actual scan responses are the same
2023/01/06 22:18:16 Using n1ql client
2023/01/06 22:18:16 Expected and Actual scan responses are the same
2023/01/06 22:18:16 Changing config key indexer.settings.max_seckey_size to value 100
2023/01/06 22:18:17 Changing config key indexer.settings.max_array_seckey_size to value 2200
2023/01/06 22:18:20 Using n1ql client
2023/01/06 22:18:20 Length of scanResults = 10
2023/01/06 22:18:20 Changing config key indexer.settings.max_seckey_size to value 4608
2023/01/06 22:18:20 Changing config key indexer.settings.max_array_seckey_size to value 10240
--- PASS: TestArraySizeIncreaseDecrease1 (84.01s)
=== RUN   TestArraySizeIncreaseDecrease2
2023/01/06 22:18:21 In TestArraySizeIncreaseDecrease2()
2023/01/06 22:18:21 In DropAllSecondaryIndexes()
2023/01/06 22:18:21 Index found:  arr2
2023/01/06 22:18:22 Dropped index arr2
2023/01/06 22:18:22 Index found:  idx3
2023/01/06 22:18:22 Dropped index idx3
2023/01/06 22:18:22 Index found:  arr1
2023/01/06 22:18:22 Dropped index arr1
2023/01/06 22:19:00 Flushed the bucket default, Response body: 
2023/01/06 22:19:00 Changing config key indexer.settings.allow_large_keys to value true
2023/01/06 22:19:01 Changing config key indexer.settings.max_seckey_size to value 100
2023/01/06 22:19:01 Changing config key indexer.settings.max_array_seckey_size to value 2000
2023/01/06 22:19:02 Start of createArrayDocs()
2023/01/06 22:19:19 End of createArrayDocs()
2023/01/06 22:19:19 Start of createArrayDocs()
2023/01/06 22:19:19 End of createArrayDocs()
2023/01/06 22:19:27 Created the secondary index arr1. Waiting for it to become active
2023/01/06 22:19:27 Index is 5955529718130554007 now active
2023/01/06 22:19:34 Created the secondary index arr2. Waiting for it to become active
2023/01/06 22:19:34 Index is 16831886912848480725 now active
2023/01/06 22:19:40 Created the secondary index idx3. Waiting for it to become active
2023/01/06 22:19:40 Index is 15652526724076921509 now active
2023/01/06 22:19:40 Expected and Actual scan responses are the same
2023/01/06 22:19:40 Using n1ql client
2023/01/06 22:19:40 Expected and Actual scan responses are the same
2023/01/06 22:19:40 Using n1ql client
2023/01/06 22:19:40 Expected and Actual scan responses are the same
2023/01/06 22:19:40 Changing config key indexer.settings.max_seckey_size to value 4096
2023/01/06 22:19:41 Changing config key indexer.settings.max_array_seckey_size to value 51200
2023/01/06 22:19:45 Expected and Actual scan responses are the same
2023/01/06 22:19:45 Using n1ql client
2023/01/06 22:19:45 Expected and Actual scan responses are the same
2023/01/06 22:19:45 Using n1ql client
2023/01/06 22:19:45 Expected and Actual scan responses are the same
2023/01/06 22:19:45 Changing config key indexer.settings.max_seckey_size to value 100
2023/01/06 22:19:45 Changing config key indexer.settings.max_array_seckey_size to value 2200
2023/01/06 22:19:49 Expected and Actual scan responses are the same
2023/01/06 22:19:49 Using n1ql client
2023/01/06 22:19:49 Expected and Actual scan responses are the same
2023/01/06 22:19:49 Using n1ql client
2023/01/06 22:19:49 Expected and Actual scan responses are the same
2023/01/06 22:19:49 Changing config key indexer.settings.max_seckey_size to value 4608
2023/01/06 22:19:50 Changing config key indexer.settings.max_array_seckey_size to value 10240
--- PASS: TestArraySizeIncreaseDecrease2 (89.03s)
=== RUN   TestBufferedScan_BackfillDisabled
2023/01/06 22:19:51 In TestBufferedScan_BackfillDisabled()
2023/01/06 22:19:51 In DropAllSecondaryIndexes()
2023/01/06 22:19:51 Index found:  arr1
2023/01/06 22:19:51 Dropped index arr1
2023/01/06 22:19:51 Index found:  idx3
2023/01/06 22:19:51 Dropped index idx3
2023/01/06 22:19:51 Index found:  arr2
2023/01/06 22:19:51 Dropped index arr2
2023/01/06 22:20:28 Flushed the bucket default, Response body: 
2023/01/06 22:21:07 Changing config key queryport.client.settings.backfillLimit to value 0
2023/01/06 22:21:14 Created the secondary index addressidx. Waiting for it to become active
2023/01/06 22:21:14 Index is 3044608651592490337 now active
2023-01-06T22:21:14.060+05:30 [Info] metadata provider version changed 1509 -> 1510
2023-01-06T22:21:14.060+05:30 [Info] switched currmeta from 1509 -> 1510 force false 
2023-01-06T22:21:14.060+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:21:14.061+05:30 [Info] GSIC[default/default-_default-_default-1673023874053976237] started ...
2023-01-06T22:21:14.061+05:30 [Warn] MonitorIndexer: Indexer for default:_default:_default is already being monitored
2023/01/06 22:21:14 Non-backfill file found: /tmp/.ICE-unix
2023/01/06 22:21:14 Non-backfill file found: /tmp/.Test-unix
2023/01/06 22:21:14 Non-backfill file found: /tmp/.X11-unix
2023/01/06 22:21:14 Non-backfill file found: /tmp/.XIM-unix
2023/01/06 22:21:14 Non-backfill file found: /tmp/.font-unix
2023/01/06 22:21:14 Non-backfill file found: /tmp/fail.log
2023/01/06 22:21:14 Non-backfill file found: /tmp/go-build4001809084
2023/01/06 22:21:14 Non-backfill file found: /tmp/mdbslice
2023/01/06 22:21:14 Non-backfill file found: /tmp/systemd-private-cec4a13140684eff889b539924315380-apache2.service-K63d2y
2023/01/06 22:21:14 Non-backfill file found: /tmp/systemd-private-cec4a13140684eff889b539924315380-systemd-timesyncd.service-m3TTl6
2023/01/06 22:21:14 limit=1,chsize=256; received 1 items; took 3.209527ms
2023-01-06T22:21:14.081+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T22:21:14.081+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T22:21:14.084+05:30 [Info] switched currmeta from 1510 -> 1510 force true 
2023-01-06T22:21:14.093+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T22:21:14.093+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T22:21:14.096+05:30 [Info] switched currmeta from 1514 -> 1514 force true 
2023-01-06T22:21:15.343+05:30 [Info] serviceChangeNotifier: received PoolChangeNotification
2023-01-06T22:21:15.349+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T22:21:15.349+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T22:21:15.349+05:30 [Info] Refreshing indexer list due to cluster changes or auto-refresh.
2023-01-06T22:21:15.349+05:30 [Info] Refreshed Indexer List: [127.0.0.1:9106]
2023-01-06T22:21:15.351+05:30 [Info] switched currmeta from 1514 -> 1514 force true 
2023-01-06T22:21:15.352+05:30 [Info] switched currmeta from 1510 -> 1510 force true 
2023/01/06 22:21:16 limit=1000,chsize=256; received 1000 items; took 1.262935727s
2023-01-06T22:21:16.652+05:30 [Info] Rollback time has changed for index inst 2910022080517681876. New rollback time 1673023346043934846
2023-01-06T22:21:16.653+05:30 [Info] Rollback time has changed for index inst 2910022080517681876. New rollback time 1673023346043934846
2023-01-06T22:21:17.330+05:30 [Info] gsiKeyspace::Close Closing default:_default:_default
--- PASS: TestBufferedScan_BackfillDisabled (86.31s)
=== RUN   TestBufferedScan_BackfillEnabled
2023/01/06 22:21:17 In TestBufferedScan_BackfillEnabled()
2023-01-06T22:21:17.431+05:30 [Info] MetadataProvider.CheckIndexerStatus(): adminport=127.0.0.1:9106 connected=true
2023/01/06 22:21:17 Changing config key queryport.client.settings.backfillLimit to value 1
2023-01-06T22:21:17.445+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:21:17.445+05:30 [Info] GSIC[default/default-_default-_default-1673023877440178069] started ...
2023-01-06T22:21:17.448+05:30 [Info] New settings received: 
{"indexer.api.enableTestServer":true,"indexer.plasma.backIndex.enableInMemoryCompression":true,"indexer.plasma.backIndex.enablePageBloomFilter":false,"indexer.plasma.mainIndex.enableInMemoryCompression":true,"indexer.settings.allow_large_keys":true,"indexer.settings.bufferPoolBlockSize":16384,"indexer.settings.build.batch_size":5,"indexer.settings.compaction.abort_exceed_interval":false,"indexer.settings.compaction.check_period":30,"indexer.settings.compaction.compaction_mode":"circular","indexer.settings.compaction.days_of_week":"Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday","indexer.settings.compaction.interval":"00:00,00:00","indexer.settings.compaction.min_frag":30,"indexer.settings.compaction.min_size":524288000,"indexer.settings.compaction.plasma.manual":false,"indexer.settings.compaction.plasma.optional.decrement":5,"indexer.settings.compaction.plasma.optional.min_frag":20,"indexer.settings.compaction.plasma.optional.quota":25,"indexer.settings.corrupt_index_num_backups":1,"indexer.settings.cpuProfDir":"","indexer.settings.cpuProfile":false,"indexer.settings.eTagPeriod":240,"indexer.settings.enable_corrupt_index_backup":false,"indexer.settings.enable_page_bloom_filter":false,"indexer.settings.fast_flush_mode":true,"indexer.settings.gc_percent":100,"indexer.settings.inmemory_snapshot.fdb.interval":200,"indexer.settings.inmemory_snapshot.interval":200,"indexer.settings.inmemory_snapshot.moi.interval":10,"indexer.settings.largeSnapshotThreshold":200,"indexer.settings.log_level":"info","indexer.settings.maxVbQueueLength":0,"indexer.settings.max_array_seckey_size":10240,"indexer.settings.max_cpu_percent":0,"indexer.settings.max_seckey_size":4608,"indexer.settings.max_writer_lock_prob":20,"indexer.settings.memProfDir":"","indexer.settings.memProfile":false,"indexer.settings.memory_quota":1572864000,"indexer.settings.minVbQueueLength":250,"indexer.settings.moi.debug":false,"indexer.settings.moi.persistence_threads":2,"indexer.settings.moi.recovery.max_ro
llbacks":2,"indexer.settings.moi.recovery_threads":4,"indexer.settings.num_replica":0,"indexer.settings.persisted_snapshot.fdb.interval":5000,"indexer.settings.persisted_snapshot.interval":5000,"indexer.settings.persisted_snapshot.moi.interval":60000,"indexer.settings.persisted_snapshot_init_build.fdb.interval":5000,"indexer.settings.persisted_snapshot_init_build.interval":5000,"indexer.settings.persisted_snapshot_init_build.moi.interval":60000,"indexer.settings.plasma.recovery.max_rollbacks":2,"indexer.settings.rebalance.blob_storage_bucket":"","indexer.settings.rebalance.blob_storage_prefix":"","indexer.settings.rebalance.blob_storage_region":"","indexer.settings.rebalance.blob_storage_scheme":"","indexer.settings.rebalance.redistribute_indexes":false,"indexer.settings.recovery.max_rollbacks":5,"indexer.settings.scan_getseqnos_retries":30,"indexer.settings.scan_timeout":0,"indexer.settings.send_buffer_size":1024,"indexer.settings.serverless.indexLimit":201,"indexer.settings.sliceBufSize":800,"indexer.settings.smallSnapshotThreshold":30,"indexer.settings.snapshotListeners":2,"indexer.settings.snapshotRequestWorkers":2,"indexer.settings.statsLogDumpInterval":60,"indexer.settings.storage_mode":"memory_optimized","indexer.settings.storage_mode.disable_upgrade":true,"indexer.settings.thresholds.mem_high":70,"indexer.settings.thresholds.mem_low":50,"indexer.settings.thresholds.units_high":60,"indexer.settings.thresholds.units_low":40,"indexer.settings.units_quota":100000,"indexer.settings.wal_size":4096,"projector.settings.log_level":"info","queryport.client.log_level":"warn","queryport.client.settings.backfillLimit":1,"queryport.client.settings.minPoolSizeWM":1000,"queryport.client.settings.poolOverflow":30,"queryport.client.settings.poolSize":5000,"queryport.client.settings.relConnBatchSize":100}
2023/01/06 22:21:17 limit=1,chsize=256; received 1 items; took 20.206097ms
2023/01/06 22:21:18 limit=1000,chsize=256; received 1000 items; took 10.391264ms
2023/01/06 22:21:30 limit=1000,chsize=256; received 1000 items; took 10.382799992s
Scan error: bufferedscan temp file size exceeded limit 1, 13 - cause: bufferedscan temp file size exceeded limit 1, 13
Scan error: bufferedscan temp file size exceeded limit 1, 13 - cause: bufferedscan temp file size exceeded limit 1, 13
Scan error:  bufferedscan temp file size exceeded limit 1, 13 - cause:  bufferedscan temp file size exceeded limit 1, 13
Scan error:  bufferedscan temp file size exceeded limit 1, 13 - cause:  bufferedscan temp file size exceeded limit 1, 13
2023/01/06 22:21:45 limit=1000,chsize=256; received 644 items; took 13.193604861s
2023/01/06 22:21:45 limit=1000,chsize=256; received 644 items; took 13.194730701s
2023/01/06 22:21:46 Changing config key queryport.client.settings.backfillLimit to value 0
--- PASS: TestBufferedScan_BackfillEnabled (28.84s)
=== RUN   TestMultiScanSetup
2023/01/06 22:21:46 In TestMultiScanSetup()
2023/01/06 22:21:47 Emptying the default bucket
2023/01/06 22:21:50 Flush Enabled on bucket default, responseBody: 
2023/01/06 22:22:27 Flushed the bucket default, Response body: 
2023/01/06 22:22:27 Populating the default bucket
2023/01/06 22:22:41 Created the secondary index index_companyname. Waiting for it become active
2023/01/06 22:22:41 Index is 15783088401242873413 now active
2023/01/06 22:22:47 Created the secondary index index_company. Waiting for it become active
2023/01/06 22:22:47 Index is 13724136830760528258 now active
2023/01/06 22:22:54 Created the secondary index index_company_name_age. Waiting for it become active
2023/01/06 22:22:54 Index is 11899452028254465987 now active
2023/01/06 22:23:00 Created the secondary index index_primary. Waiting for it become active
2023/01/06 22:23:00 Index is 8929109369020188940 now active
2023/01/06 22:23:07 Created the secondary index index_company_name_age_address. Waiting for it become active
2023/01/06 22:23:07 Index is 2204492660918262706 now active
2023/01/06 22:23:13 Created the secondary index index_company_name_age_address_friends. Waiting for it become active
2023/01/06 22:23:13 Index is 11789817377692224791 now active
--- PASS: TestMultiScanSetup (87.73s)
=== RUN   TestMultiScanCount
2023/01/06 22:23:13 In TestMultiScanCount()
2023/01/06 22:23:13 

--------- Composite Index with 2 fields ---------
2023/01/06 22:23:13 
--- ScanAllNoFilter ---
2023/01/06 22:23:13 distinct = false
2023/01/06 22:23:14 Using n1ql client
2023/01/06 22:23:14 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:14 
--- ScanAllFilterNil ---
2023/01/06 22:23:14 distinct = false
2023/01/06 22:23:14 Using n1ql client
2023/01/06 22:23:14 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:14 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:23:14 distinct = false
2023/01/06 22:23:15 Using n1ql client
2023/01/06 22:23:15 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:15 
--- SingleSeek ---
2023/01/06 22:23:15 distinct = false
2023/01/06 22:23:15 Using n1ql client
2023/01/06 22:23:15 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:23:15 
--- MultipleSeek ---
2023/01/06 22:23:15 distinct = false
2023/01/06 22:23:16 Using n1ql client
2023/01/06 22:23:16 MultiScanCount = 2 ExpectedMultiScanCount = 2
2023/01/06 22:23:16 
--- SimpleRange ---
2023/01/06 22:23:16 distinct = false
2023/01/06 22:23:16 Using n1ql client
2023/01/06 22:23:16 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2023/01/06 22:23:16 
--- NonOverlappingRanges ---
2023/01/06 22:23:16 distinct = false
2023/01/06 22:23:16 Using n1ql client
2023/01/06 22:23:16 MultiScanCount = 4283 ExpectedMultiScanCount = 4283
2023/01/06 22:23:16 
--- OverlappingRanges ---
2023/01/06 22:23:16 distinct = false
2023/01/06 22:23:17 Using n1ql client
2023/01/06 22:23:17 MultiScanCount = 5756 ExpectedMultiScanCount = 5756
2023/01/06 22:23:17 
--- NonOverlappingFilters ---
2023/01/06 22:23:17 distinct = false
2023/01/06 22:23:17 Using n1ql client
2023/01/06 22:23:17 MultiScanCount = 337 ExpectedMultiScanCount = 337
2023/01/06 22:23:17 
--- OverlappingFilters ---
2023/01/06 22:23:17 distinct = false
2023/01/06 22:23:18 Using n1ql client
2023/01/06 22:23:18 MultiScanCount = 2559 ExpectedMultiScanCount = 2559
2023/01/06 22:23:18 
--- BoundaryFilters ---
2023/01/06 22:23:18 distinct = false
2023/01/06 22:23:18 Using n1ql client
2023/01/06 22:23:18 MultiScanCount = 499 ExpectedMultiScanCount = 499
2023/01/06 22:23:18 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:23:18 distinct = false
2023/01/06 22:23:18 Using n1ql client
2023/01/06 22:23:18 MultiScanCount = 256 ExpectedMultiScanCount = 256
2023/01/06 22:23:18 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:23:18 distinct = false
2023/01/06 22:23:19 Using n1ql client
2023/01/06 22:23:19 MultiScanCount = 255 ExpectedMultiScanCount = 255
2023/01/06 22:23:19 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:23:19 distinct = false
2023/01/06 22:23:19 Using n1ql client
2023/01/06 22:23:19 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2023/01/06 22:23:19 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:23:19 distinct = false
2023/01/06 22:23:20 Using n1ql client
2023/01/06 22:23:20 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2023/01/06 22:23:20 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:23:20 distinct = false
2023/01/06 22:23:20 Using n1ql client
2023/01/06 22:23:20 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:20 
--- FiltersWithUnbounded ---
2023/01/06 22:23:20 distinct = false
2023/01/06 22:23:21 Using n1ql client
2023/01/06 22:23:21 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2023/01/06 22:23:21 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:23:21 distinct = false
2023/01/06 22:23:21 Using n1ql client
2023/01/06 22:23:21 MultiScanCount = 418 ExpectedMultiScanCount = 418
2023/01/06 22:23:21 

--------- Simple Index with 1 field ---------
2023/01/06 22:23:21 
--- SingleIndexSimpleRange ---
2023/01/06 22:23:21 distinct = false
2023/01/06 22:23:21 Using n1ql client
2023/01/06 22:23:21 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2023/01/06 22:23:21 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:23:21 distinct = false
2023/01/06 22:23:22 Using n1ql client
2023/01/06 22:23:22 MultiScanCount = 7140 ExpectedMultiScanCount = 7140
2023/01/06 22:23:22 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:23:22 distinct = false
2023/01/06 22:23:22 Using n1ql client
2023/01/06 22:23:22 MultiScanCount = 8701 ExpectedMultiScanCount = 8701
2023/01/06 22:23:22 

--------- Composite Index with 3 fields ---------
2023/01/06 22:23:22 
--- ScanAllNoFilter ---
2023/01/06 22:23:22 distinct = false
2023/01/06 22:23:22 Using n1ql client
2023/01/06 22:23:22 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:22 
--- ScanAllFilterNil ---
2023/01/06 22:23:22 distinct = false
2023/01/06 22:23:23 Using n1ql client
2023/01/06 22:23:23 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:23 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:23:23 distinct = false
2023/01/06 22:23:23 Using n1ql client
2023/01/06 22:23:23 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:23 
--- 3FieldsSingleSeek ---
2023/01/06 22:23:23 distinct = false
2023/01/06 22:23:24 Using n1ql client
2023/01/06 22:23:24 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:23:24 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:23:24 distinct = false
2023/01/06 22:23:24 Using n1ql client
2023/01/06 22:23:24 MultiScanCount = 3 ExpectedMultiScanCount = 3
2023/01/06 22:23:24 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:23:24 distinct = false
2023/01/06 22:23:25 Using n1ql client
2023/01/06 22:23:25 MultiScanCount = 2 ExpectedMultiScanCount = 2
2023/01/06 22:23:25 

--------- New scenarios ---------
2023/01/06 22:23:25 
--- CompIndexHighUnbounded1 ---
2023/01/06 22:23:25 
--- Multi Scan 0 ---
2023/01/06 22:23:25 distinct = false
2023/01/06 22:23:25 Using n1ql client
2023/01/06 22:23:25 Using n1ql client
2023/01/06 22:23:25 len(scanResults) = 8 MultiScanCount = 8
2023/01/06 22:23:25 Expected and Actual scan responses are the same
2023/01/06 22:23:25 
--- Multi Scan 1 ---
2023/01/06 22:23:25 distinct = false
2023/01/06 22:23:25 Using n1ql client
2023/01/06 22:23:25 Using n1ql client
2023/01/06 22:23:25 len(scanResults) = 0 MultiScanCount = 0
2023/01/06 22:23:25 Expected and Actual scan responses are the same
2023/01/06 22:23:25 
--- Multi Scan 2 ---
2023/01/06 22:23:25 distinct = false
2023/01/06 22:23:26 Using n1ql client
2023/01/06 22:23:26 Using n1ql client
2023/01/06 22:23:26 len(scanResults) = 9 MultiScanCount = 9
2023/01/06 22:23:26 Expected and Actual scan responses are the same
2023/01/06 22:23:26 
--- CompIndexHighUnbounded2 ---
2023/01/06 22:23:26 
--- Multi Scan 0 ---
2023/01/06 22:23:26 distinct = false
2023/01/06 22:23:26 Using n1ql client
2023/01/06 22:23:26 Using n1ql client
2023/01/06 22:23:26 len(scanResults) = 4138 MultiScanCount = 4138
2023/01/06 22:23:26 Expected and Actual scan responses are the same
2023/01/06 22:23:26 
--- Multi Scan 1 ---
2023/01/06 22:23:26 distinct = false
2023/01/06 22:23:27 Using n1ql client
2023/01/06 22:23:27 Using n1ql client
2023/01/06 22:23:27 len(scanResults) = 2746 MultiScanCount = 2746
2023/01/06 22:23:27 Expected and Actual scan responses are the same
2023/01/06 22:23:27 
--- Multi Scan 2 ---
2023/01/06 22:23:27 distinct = false
2023/01/06 22:23:27 Using n1ql client
2023/01/06 22:23:27 Using n1ql client
2023/01/06 22:23:28 len(scanResults) = 4691 MultiScanCount = 4691
2023/01/06 22:23:28 Expected and Actual scan responses are the same
2023/01/06 22:23:28 
--- CompIndexHighUnbounded3 ---
2023/01/06 22:23:28 
--- Multi Scan 0 ---
2023/01/06 22:23:28 distinct = false
2023/01/06 22:23:28 Using n1ql client
2023/01/06 22:23:28 Using n1ql client
2023/01/06 22:23:28 len(scanResults) = 1329 MultiScanCount = 1329
2023/01/06 22:23:28 Expected and Actual scan responses are the same
2023/01/06 22:23:28 
--- CompIndexHighUnbounded4 ---
2023/01/06 22:23:28 
--- Multi Scan 0 ---
2023/01/06 22:23:28 distinct = false
2023/01/06 22:23:29 Using n1ql client
2023/01/06 22:23:29 Using n1ql client
2023/01/06 22:23:29 len(scanResults) = 5349 MultiScanCount = 5349
2023/01/06 22:23:29 Expected and Actual scan responses are the same
2023/01/06 22:23:29 
--- CompIndexHighUnbounded5 ---
2023/01/06 22:23:29 
--- Multi Scan 0 ---
2023/01/06 22:23:29 distinct = false
2023/01/06 22:23:29 Using n1ql client
2023/01/06 22:23:29 Using n1ql client
2023/01/06 22:23:29 len(scanResults) = 8210 MultiScanCount = 8210
2023/01/06 22:23:29 Expected and Actual scan responses are the same
2023/01/06 22:23:29 
--- SeekBoundaries ---
2023/01/06 22:23:29 
--- Multi Scan 0 ---
2023/01/06 22:23:29 distinct = false
2023/01/06 22:23:30 Using n1ql client
2023/01/06 22:23:30 Using n1ql client
2023/01/06 22:23:30 len(scanResults) = 175 MultiScanCount = 175
2023/01/06 22:23:30 Expected and Actual scan responses are the same
2023/01/06 22:23:30 
--- Multi Scan 1 ---
2023/01/06 22:23:30 distinct = false
2023/01/06 22:23:30 Using n1ql client
2023/01/06 22:23:30 Using n1ql client
2023/01/06 22:23:30 len(scanResults) = 1 MultiScanCount = 1
2023/01/06 22:23:30 Expected and Actual scan responses are the same
2023/01/06 22:23:30 
--- Multi Scan 2 ---
2023/01/06 22:23:30 distinct = false
2023/01/06 22:23:31 Using n1ql client
2023/01/06 22:23:31 Using n1ql client
2023/01/06 22:23:31 len(scanResults) = 555 MultiScanCount = 555
2023/01/06 22:23:31 Expected and Actual scan responses are the same
2023/01/06 22:23:31 
--- Multi Scan 3 ---
2023/01/06 22:23:31 distinct = false
2023/01/06 22:23:31 Using n1ql client
2023/01/06 22:23:31 Using n1ql client
2023/01/06 22:23:31 len(scanResults) = 872 MultiScanCount = 872
2023/01/06 22:23:31 Expected and Actual scan responses are the same
2023/01/06 22:23:31 
--- Multi Scan 4 ---
2023/01/06 22:23:31 distinct = false
2023/01/06 22:23:31 Using n1ql client
2023/01/06 22:23:31 Using n1ql client
2023/01/06 22:23:31 len(scanResults) = 287 MultiScanCount = 287
2023/01/06 22:23:31 Expected and Actual scan responses are the same
2023/01/06 22:23:31 
--- Multi Scan 5 ---
2023/01/06 22:23:31 distinct = false
2023/01/06 22:23:32 Using n1ql client
2023/01/06 22:23:32 Using n1ql client
2023/01/06 22:23:32 len(scanResults) = 5254 MultiScanCount = 5254
2023/01/06 22:23:32 Expected and Actual scan responses are the same
2023/01/06 22:23:32 
--- Multi Scan 6 ---
2023/01/06 22:23:32 distinct = false
2023/01/06 22:23:32 Using n1ql client
2023/01/06 22:23:32 Using n1ql client
2023/01/06 22:23:32 len(scanResults) = 5566 MultiScanCount = 5566
2023/01/06 22:23:32 Expected and Actual scan responses are the same
2023/01/06 22:23:32 
--- Multi Scan 7 ---
2023/01/06 22:23:32 distinct = false
2023/01/06 22:23:33 Using n1ql client
2023/01/06 22:23:33 Using n1ql client
2023/01/06 22:23:33 len(scanResults) = 8 MultiScanCount = 8
2023/01/06 22:23:33 Expected and Actual scan responses are the same
2023/01/06 22:23:33 

--------- With DISTINCT True ---------
2023/01/06 22:23:33 
--- ScanAllNoFilter ---
2023/01/06 22:23:33 distinct = true
2023/01/06 22:23:33 Using n1ql client
2023/01/06 22:23:33 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:23:33 
--- ScanAllFilterNil ---
2023/01/06 22:23:33 distinct = true
2023/01/06 22:23:34 Using n1ql client
2023/01/06 22:23:34 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:23:34 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:23:34 distinct = true
2023/01/06 22:23:34 Using n1ql client
2023/01/06 22:23:34 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:23:34 
--- SingleSeek ---
2023/01/06 22:23:34 distinct = true
2023/01/06 22:23:35 Using n1ql client
2023/01/06 22:23:35 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:23:35 
--- MultipleSeek ---
2023/01/06 22:23:35 distinct = true
2023/01/06 22:23:35 Using n1ql client
2023/01/06 22:23:35 MultiScanCount = 2 ExpectedMultiScanCount = 2
2023/01/06 22:23:35 
--- SimpleRange ---
2023/01/06 22:23:35 distinct = true
2023/01/06 22:23:35 Using n1ql client
2023/01/06 22:23:35 MultiScanCount = 227 ExpectedMultiScanCount = 227
2023/01/06 22:23:35 
--- NonOverlappingRanges ---
2023/01/06 22:23:35 distinct = true
2023/01/06 22:23:36 Using n1ql client
2023/01/06 22:23:36 MultiScanCount = 428 ExpectedMultiScanCount = 428
2023/01/06 22:23:36 
--- OverlappingRanges ---
2023/01/06 22:23:36 distinct = true
2023/01/06 22:23:36 Using n1ql client
2023/01/06 22:23:36 MultiScanCount = 575 ExpectedMultiScanCount = 575
2023/01/06 22:23:36 
--- NonOverlappingFilters ---
2023/01/06 22:23:36 distinct = true
2023/01/06 22:23:37 Using n1ql client
2023/01/06 22:23:37 MultiScanCount = 186 ExpectedMultiScanCount = 186
2023/01/06 22:23:37 
--- NonOverlappingFilters2 ---
2023/01/06 22:23:37 distinct = true
2023/01/06 22:23:37 Using n1ql client
2023/01/06 22:23:37 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:23:37 
--- OverlappingFilters ---
2023/01/06 22:23:37 distinct = true
2023/01/06 22:23:37 Using n1ql client
2023/01/06 22:23:37 MultiScanCount = 543 ExpectedMultiScanCount = 543
2023/01/06 22:23:37 
--- BoundaryFilters ---
2023/01/06 22:23:37 distinct = true
2023/01/06 22:23:38 Using n1ql client
2023/01/06 22:23:38 MultiScanCount = 172 ExpectedMultiScanCount = 172
2023/01/06 22:23:38 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:23:38 distinct = true
2023/01/06 22:23:38 Using n1ql client
2023/01/06 22:23:38 MultiScanCount = 135 ExpectedMultiScanCount = 135
2023/01/06 22:23:38 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:23:38 distinct = true
2023/01/06 22:23:39 Using n1ql client
2023/01/06 22:23:39 MultiScanCount = 134 ExpectedMultiScanCount = 134
2023/01/06 22:23:39 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:23:39 distinct = false
2023/01/06 22:23:39 Using n1ql client
2023/01/06 22:23:39 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2023/01/06 22:23:39 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:23:39 distinct = false
2023/01/06 22:23:40 Using n1ql client
2023/01/06 22:23:40 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2023/01/06 22:23:40 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:23:40 distinct = false
2023/01/06 22:23:40 Using n1ql client
2023/01/06 22:23:40 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:23:40 
--- FiltersWithUnbounded ---
2023/01/06 22:23:40 distinct = false
2023/01/06 22:23:40 Using n1ql client
2023/01/06 22:23:40 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2023/01/06 22:23:40 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:23:40 distinct = false
2023/01/06 22:23:41 Using n1ql client
2023/01/06 22:23:41 MultiScanCount = 418 ExpectedMultiScanCount = 418
2023/01/06 22:23:41 

--------- Simple Index with 1 field ---------
2023/01/06 22:23:41 
--- SingleIndexSimpleRange ---
2023/01/06 22:23:41 distinct = true
2023/01/06 22:23:41 Using n1ql client
2023/01/06 22:23:41 MultiScanCount = 227 ExpectedMultiScanCount = 227
2023/01/06 22:23:41 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:23:41 distinct = true
2023/01/06 22:23:41 Using n1ql client
2023/01/06 22:23:41 MultiScanCount = 713 ExpectedMultiScanCount = 713
2023/01/06 22:23:41 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:23:41 distinct = true
2023/01/06 22:23:42 Using n1ql client
2023/01/06 22:23:42 MultiScanCount = 869 ExpectedMultiScanCount = 869
2023/01/06 22:23:42 

--------- Composite Index with 3 fields ---------
2023/01/06 22:23:42 
--- ScanAllNoFilter ---
2023/01/06 22:23:42 distinct = true
2023/01/06 22:23:42 Using n1ql client
2023/01/06 22:23:42 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:23:42 
--- ScanAllFilterNil ---
2023/01/06 22:23:42 distinct = true
2023/01/06 22:23:43 Using n1ql client
2023/01/06 22:23:43 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:23:43 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:23:43 distinct = true
2023/01/06 22:23:43 Using n1ql client
2023/01/06 22:23:43 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:23:43 
--- 3FieldsSingleSeek ---
2023/01/06 22:23:43 distinct = true
2023/01/06 22:23:44 Using n1ql client
2023/01/06 22:23:44 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:23:44 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:23:44 distinct = true
2023/01/06 22:23:44 Using n1ql client
2023/01/06 22:23:44 MultiScanCount = 3 ExpectedMultiScanCount = 3
2023/01/06 22:23:44 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:23:44 distinct = true
2023/01/06 22:23:45 Using n1ql client
2023/01/06 22:23:45 MultiScanCount = 2 ExpectedMultiScanCount = 2
--- PASS: TestMultiScanCount (31.12s)
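[Editor's note] The checks above compare a server-side MultiScanCount against a client-computed expectation: entries matching any of several scan ranges are counted once each even when ranges overlap, and the DISTINCT variants count distinct key values instead of rows (hence 999 vs 10002 for ScanAll). A minimal sketch of that counting logic over integer keys; `multiScanCount` and the `span` type are illustrative, not the indexer's implementation:

```go
package main

import "fmt"

type span struct{ low, high int } // half-open range [low, high)

// multiScanCount counts keys that fall in any span, counting each entry
// at most once across overlapping spans; with distinct=true, repeated
// key values are counted only once.
func multiScanCount(keys []int, spans []span, distinct bool) int {
	seen := make(map[int]bool)
	count := 0
	for _, k := range keys {
		for _, s := range spans {
			if k >= s.low && k < s.high {
				if !distinct {
					count++
				} else if !seen[k] {
					seen[k] = true
					count++
				}
				break // overlapping spans must not double-count an entry
			}
		}
	}
	return count
}

func main() {
	keys := []int{1, 2, 2, 3, 5, 8}
	spans := []span{{1, 4}, {2, 6}} // overlapping ranges
	fmt.Println(multiScanCount(keys, spans, false)) // 5
	fmt.Println(multiScanCount(keys, spans, true))  // 4
}
```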
=== RUN   TestMultiScanScenarios
2023/01/06 22:23:45 In TestMultiScanScenarios()
2023/01/06 22:23:45 

--------- Composite Index with 2 fields ---------
2023/01/06 22:23:45 
--- ScanAllNoFilter ---
2023/01/06 22:23:45 distinct = false
2023/01/06 22:23:45 Using n1ql client
2023/01/06 22:23:45 Expected and Actual scan responses are the same
2023/01/06 22:23:45 
--- ScanAllFilterNil ---
2023/01/06 22:23:45 distinct = false
2023/01/06 22:23:45 Using n1ql client
2023/01/06 22:23:45 Expected and Actual scan responses are the same
2023/01/06 22:23:45 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:23:45 distinct = false
2023/01/06 22:23:46 Using n1ql client
2023/01/06 22:23:46 Expected and Actual scan responses are the same
2023/01/06 22:23:46 
--- SingleSeek ---
2023/01/06 22:23:46 distinct = false
2023/01/06 22:23:46 Using n1ql client
2023/01/06 22:23:46 Expected and Actual scan responses are the same
2023/01/06 22:23:46 
--- MultipleSeek ---
2023/01/06 22:23:46 distinct = false
2023/01/06 22:23:47 Using n1ql client
2023/01/06 22:23:47 Expected and Actual scan responses are the same
2023/01/06 22:23:47 
--- SimpleRange ---
2023/01/06 22:23:47 distinct = false
2023/01/06 22:23:47 Using n1ql client
2023/01/06 22:23:47 Expected and Actual scan responses are the same
2023/01/06 22:23:47 
--- NonOverlappingRanges ---
2023/01/06 22:23:47 distinct = false
2023/01/06 22:23:47 Using n1ql client
2023/01/06 22:23:48 Expected and Actual scan responses are the same
2023/01/06 22:23:48 
--- OverlappingRanges ---
2023/01/06 22:23:48 distinct = false
2023/01/06 22:23:48 Using n1ql client
2023/01/06 22:23:48 Expected and Actual scan responses are the same
2023/01/06 22:23:48 
--- NonOverlappingFilters ---
2023/01/06 22:23:48 distinct = false
2023/01/06 22:23:48 Using n1ql client
2023/01/06 22:23:48 Expected and Actual scan responses are the same
2023/01/06 22:23:48 
--- OverlappingFilters ---
2023/01/06 22:23:48 distinct = false
2023/01/06 22:23:49 Using n1ql client
2023/01/06 22:23:49 Expected and Actual scan responses are the same
2023/01/06 22:23:49 
--- BoundaryFilters ---
2023/01/06 22:23:49 distinct = false
2023/01/06 22:23:49 Using n1ql client
2023/01/06 22:23:49 Expected and Actual scan responses are the same
2023/01/06 22:23:49 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:23:49 distinct = false
2023/01/06 22:23:50 Using n1ql client
2023/01/06 22:23:50 Expected and Actual scan responses are the same
2023/01/06 22:23:50 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:23:50 distinct = false
2023/01/06 22:23:50 Using n1ql client
2023/01/06 22:23:50 Expected and Actual scan responses are the same
2023/01/06 22:23:50 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:23:50 distinct = false
2023/01/06 22:23:50 Using n1ql client
2023/01/06 22:23:50 Expected and Actual scan responses are the same
2023/01/06 22:23:50 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:23:50 distinct = false
2023/01/06 22:23:51 Using n1ql client
2023/01/06 22:23:51 Expected and Actual scan responses are the same
2023/01/06 22:23:51 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:23:51 distinct = false
2023/01/06 22:23:51 Using n1ql client
2023/01/06 22:23:51 Expected and Actual scan responses are the same
2023/01/06 22:23:51 
--- FiltersWithUnbounded ---
2023/01/06 22:23:51 distinct = false
2023/01/06 22:23:52 Using n1ql client
2023/01/06 22:23:52 Expected and Actual scan responses are the same
2023/01/06 22:23:52 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:23:52 distinct = false
2023/01/06 22:23:52 Using n1ql client
2023/01/06 22:23:52 Expected and Actual scan responses are the same
2023/01/06 22:23:52 

--------- Simple Index with 1 field ---------
2023/01/06 22:23:52 
--- SingleIndexSimpleRange ---
2023/01/06 22:23:52 distinct = false
2023/01/06 22:23:53 Using n1ql client
2023/01/06 22:23:53 Expected and Actual scan responses are the same
2023/01/06 22:23:53 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:23:53 distinct = false
2023/01/06 22:23:53 Using n1ql client
2023/01/06 22:23:53 Expected and Actual scan responses are the same
2023/01/06 22:23:53 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:23:53 distinct = false
2023/01/06 22:23:53 Using n1ql client
2023/01/06 22:23:53 Expected and Actual scan responses are the same
2023/01/06 22:23:53 

--------- Composite Index with 3 fields ---------
2023/01/06 22:23:53 
--- ScanAllNoFilter ---
2023/01/06 22:23:53 distinct = false
2023/01/06 22:23:54 Using n1ql client
2023/01/06 22:23:54 Expected and Actual scan responses are the same
2023/01/06 22:23:54 
--- ScanAllFilterNil ---
2023/01/06 22:23:54 distinct = false
2023/01/06 22:23:54 Using n1ql client
2023/01/06 22:23:54 Expected and Actual scan responses are the same
2023/01/06 22:23:54 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:23:54 distinct = false
2023/01/06 22:23:55 Using n1ql client
2023/01/06 22:23:55 Expected and Actual scan responses are the same
2023/01/06 22:23:55 
--- 3FieldsSingleSeek ---
2023/01/06 22:23:55 distinct = false
2023/01/06 22:23:55 Using n1ql client
2023/01/06 22:23:55 Expected and Actual scan responses are the same
2023/01/06 22:23:55 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:23:55 distinct = false
2023/01/06 22:23:56 Using n1ql client
2023/01/06 22:23:56 Expected and Actual scan responses are the same
2023/01/06 22:23:56 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:23:56 distinct = false
2023/01/06 22:23:56 Using n1ql client
2023/01/06 22:23:56 Expected and Actual scan responses are the same
2023/01/06 22:23:56 

--------- New scenarios ---------
2023/01/06 22:23:56 
--- CompIndexHighUnbounded1 ---
2023/01/06 22:23:56 
--- Multi Scan 0 ---
2023/01/06 22:23:56 distinct = false
2023/01/06 22:23:57 Using n1ql client
2023/01/06 22:23:57 Expected and Actual scan responses are the same
2023/01/06 22:23:57 
--- Multi Scan 1 ---
2023/01/06 22:23:57 distinct = false
2023/01/06 22:23:57 Using n1ql client
2023/01/06 22:23:57 Expected and Actual scan responses are the same
2023/01/06 22:23:57 
--- Multi Scan 2 ---
2023/01/06 22:23:57 distinct = false
2023/01/06 22:23:57 Using n1ql client
2023/01/06 22:23:57 Expected and Actual scan responses are the same
2023/01/06 22:23:57 
--- CompIndexHighUnbounded2 ---
2023/01/06 22:23:57 
--- Multi Scan 0 ---
2023/01/06 22:23:57 distinct = false
2023/01/06 22:23:58 Using n1ql client
2023/01/06 22:23:58 Expected and Actual scan responses are the same
2023/01/06 22:23:58 
--- Multi Scan 1 ---
2023/01/06 22:23:58 distinct = false
2023/01/06 22:23:58 Using n1ql client
2023/01/06 22:23:58 Expected and Actual scan responses are the same
2023/01/06 22:23:58 
--- Multi Scan 2 ---
2023/01/06 22:23:58 distinct = false
2023/01/06 22:23:59 Using n1ql client
2023/01/06 22:23:59 Expected and Actual scan responses are the same
2023/01/06 22:23:59 
--- CompIndexHighUnbounded3 ---
2023/01/06 22:23:59 
--- Multi Scan 0 ---
2023/01/06 22:23:59 distinct = false
2023/01/06 22:23:59 Using n1ql client
2023/01/06 22:23:59 Expected and Actual scan responses are the same
2023/01/06 22:23:59 
--- CompIndexHighUnbounded4 ---
2023/01/06 22:23:59 
--- Multi Scan 0 ---
2023/01/06 22:23:59 distinct = false
2023/01/06 22:24:00 Using n1ql client
2023/01/06 22:24:00 Expected and Actual scan responses are the same
2023/01/06 22:24:00 
--- CompIndexHighUnbounded5 ---
2023/01/06 22:24:00 
--- Multi Scan 0 ---
2023/01/06 22:24:00 distinct = false
2023/01/06 22:24:00 Using n1ql client
2023/01/06 22:24:00 Expected and Actual scan responses are the same
2023/01/06 22:24:00 
--- SeekBoundaries ---
2023/01/06 22:24:00 
--- Multi Scan 0 ---
2023/01/06 22:24:00 distinct = false
2023/01/06 22:24:00 Using n1ql client
2023/01/06 22:24:00 Expected and Actual scan responses are the same
2023/01/06 22:24:00 
--- Multi Scan 1 ---
2023/01/06 22:24:00 distinct = false
2023/01/06 22:24:01 Using n1ql client
2023/01/06 22:24:01 Expected and Actual scan responses are the same
2023/01/06 22:24:01 
--- Multi Scan 2 ---
2023/01/06 22:24:01 distinct = false
2023/01/06 22:24:01 Using n1ql client
2023/01/06 22:24:01 Expected and Actual scan responses are the same
2023/01/06 22:24:01 
--- Multi Scan 3 ---
2023/01/06 22:24:01 distinct = false
2023/01/06 22:24:02 Using n1ql client
2023/01/06 22:24:02 Expected and Actual scan responses are the same
2023/01/06 22:24:02 
--- Multi Scan 4 ---
2023/01/06 22:24:02 distinct = false
2023/01/06 22:24:02 Using n1ql client
2023/01/06 22:24:02 Expected and Actual scan responses are the same
2023/01/06 22:24:02 
--- Multi Scan 5 ---
2023/01/06 22:24:02 distinct = false
2023/01/06 22:24:03 Using n1ql client
2023/01/06 22:24:03 Expected and Actual scan responses are the same
2023/01/06 22:24:03 
--- Multi Scan 6 ---
2023/01/06 22:24:03 distinct = false
2023/01/06 22:24:03 Using n1ql client
2023/01/06 22:24:03 Expected and Actual scan responses are the same
2023/01/06 22:24:03 
--- Multi Scan 7 ---
2023/01/06 22:24:03 distinct = false
2023/01/06 22:24:03 Using n1ql client
2023/01/06 22:24:03 Expected and Actual scan responses are the same
2023/01/06 22:24:03 
--- PrefixSortVariations ---
2023/01/06 22:24:03 
--- Multi Scan 0 ---
2023/01/06 22:24:03 distinct = false
2023/01/06 22:24:04 Using n1ql client
2023/01/06 22:24:04 Expected and Actual scan responses are the same
2023/01/06 22:24:04 
--- Multi Scan 1 ---
2023/01/06 22:24:04 distinct = false
2023/01/06 22:24:04 Using n1ql client
2023/01/06 22:24:04 Expected and Actual scan responses are the same
--- PASS: TestMultiScanScenarios (19.78s)
=== RUN   TestMultiScanOffset
2023/01/06 22:24:04 In TestMultiScanOffset()
2023/01/06 22:24:04 

--------- Composite Index with 2 fields ---------
2023/01/06 22:24:04 
--- ScanAllNoFilter ---
2023/01/06 22:24:04 distinct = false
2023/01/06 22:24:05 Using n1ql client
2023/01/06 22:24:05 
--- ScanAllFilterNil ---
2023/01/06 22:24:05 distinct = false
2023/01/06 22:24:05 Using n1ql client
2023/01/06 22:24:05 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:24:05 distinct = false
2023/01/06 22:24:06 Using n1ql client
2023/01/06 22:24:06 
--- SingleSeek ---
2023/01/06 22:24:06 distinct = false
2023/01/06 22:24:06 Using n1ql client
2023/01/06 22:24:06 
--- MultipleSeek ---
2023/01/06 22:24:06 distinct = false
2023/01/06 22:24:06 Using n1ql client
2023/01/06 22:24:06 
--- SimpleRange ---
2023/01/06 22:24:06 distinct = false
2023/01/06 22:24:07 Using n1ql client
2023/01/06 22:24:07 
--- NonOverlappingRanges ---
2023/01/06 22:24:07 distinct = false
2023/01/06 22:24:07 Using n1ql client
2023/01/06 22:24:07 
--- OverlappingRanges ---
2023/01/06 22:24:07 distinct = false
2023/01/06 22:24:08 Using n1ql client
2023/01/06 22:24:08 
--- NonOverlappingFilters ---
2023/01/06 22:24:08 distinct = false
2023/01/06 22:24:08 Using n1ql client
2023/01/06 22:24:08 
--- OverlappingFilters ---
2023/01/06 22:24:08 distinct = false
2023/01/06 22:24:08 Using n1ql client
2023/01/06 22:24:08 
--- BoundaryFilters ---
2023/01/06 22:24:08 distinct = false
2023/01/06 22:24:09 Using n1ql client
2023/01/06 22:24:09 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:24:09 distinct = false
2023/01/06 22:24:09 Using n1ql client
2023/01/06 22:24:09 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:24:09 distinct = false
2023/01/06 22:24:10 Using n1ql client
2023/01/06 22:24:10 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:24:10 distinct = false
2023/01/06 22:24:10 Using n1ql client
2023/01/06 22:24:10 Expected and Actual scan responses are the same
2023/01/06 22:24:10 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:24:10 distinct = false
2023/01/06 22:24:10 Using n1ql client
2023/01/06 22:24:10 Expected and Actual scan responses are the same
2023/01/06 22:24:10 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:24:10 distinct = false
2023/01/06 22:24:11 Using n1ql client
2023/01/06 22:24:11 Expected and Actual scan responses are the same
2023/01/06 22:24:11 
--- FiltersWithUnbounded ---
2023/01/06 22:24:11 distinct = false
2023/01/06 22:24:11 Using n1ql client
2023/01/06 22:24:11 Expected and Actual scan responses are the same
2023/01/06 22:24:11 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:24:11 distinct = false
2023/01/06 22:24:12 Using n1ql client
2023/01/06 22:24:12 Expected and Actual scan responses are the same
2023/01/06 22:24:12 

--------- Simple Index with 1 field ---------
2023/01/06 22:24:12 
--- SingleIndexSimpleRange ---
2023/01/06 22:24:12 distinct = false
2023/01/06 22:24:12 Using n1ql client
2023/01/06 22:24:12 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:24:12 distinct = false
2023/01/06 22:24:12 Using n1ql client
2023/01/06 22:24:13 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:24:13 distinct = false
2023/01/06 22:24:13 Using n1ql client
2023/01/06 22:24:13 

--------- Composite Index with 3 fields ---------
2023/01/06 22:24:13 
--- ScanAllNoFilter ---
2023/01/06 22:24:13 distinct = false
2023/01/06 22:24:13 Using n1ql client
2023/01/06 22:24:13 
--- ScanAllFilterNil ---
2023/01/06 22:24:13 distinct = false
2023/01/06 22:24:14 Using n1ql client
2023/01/06 22:24:14 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:24:14 distinct = false
2023/01/06 22:24:14 Using n1ql client
2023/01/06 22:24:14 
--- 3FieldsSingleSeek ---
2023/01/06 22:24:14 distinct = false
2023/01/06 22:24:15 Using n1ql client
2023/01/06 22:24:15 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:24:15 distinct = false
2023/01/06 22:24:15 Using n1ql client
2023/01/06 22:24:15 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:24:15 distinct = false
2023/01/06 22:24:16 Using n1ql client
--- PASS: TestMultiScanOffset (11.44s)
=== RUN   TestMultiScanPrimaryIndex
2023/01/06 22:24:16 In TestMultiScanPrimaryIndex()
2023/01/06 22:24:16 
--- PrimaryRange ---
2023/01/06 22:24:16 Using n1ql client
2023/01/06 22:24:16 Expected and Actual scan responses are the same
2023/01/06 22:24:16 
--- PrimaryScanAllNoFilter ---
2023/01/06 22:24:16 Using n1ql client
2023/01/06 22:24:16 Expected and Actual scan responses are the same
--- PASS: TestMultiScanPrimaryIndex (0.08s)
=== RUN   TestMultiScanDistinct
2023/01/06 22:24:16 In TestScansDistinct()
2023/01/06 22:24:16 

--------- Composite Index with 2 fields ---------
2023/01/06 22:24:16 
--- ScanAllNoFilter ---
2023/01/06 22:24:16 distinct = true
2023/01/06 22:24:16 Using n1ql client
2023/01/06 22:24:16 Expected and Actual scan responses are the same
2023/01/06 22:24:16 
--- ScanAllFilterNil ---
2023/01/06 22:24:16 distinct = true
2023/01/06 22:24:17 Using n1ql client
2023/01/06 22:24:17 Expected and Actual scan responses are the same
2023/01/06 22:24:17 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:24:17 distinct = true
2023/01/06 22:24:17 Using n1ql client
2023/01/06 22:24:17 Expected and Actual scan responses are the same
2023/01/06 22:24:17 
--- SingleSeek ---
2023/01/06 22:24:17 distinct = true
2023/01/06 22:24:18 Using n1ql client
2023/01/06 22:24:18 Expected and Actual scan responses are the same
2023/01/06 22:24:18 
--- MultipleSeek ---
2023/01/06 22:24:18 distinct = true
2023/01/06 22:24:18 Using n1ql client
2023/01/06 22:24:18 Expected and Actual scan responses are the same
2023/01/06 22:24:18 
--- SimpleRange ---
2023/01/06 22:24:18 distinct = true
2023/01/06 22:24:19 Using n1ql client
2023/01/06 22:24:19 Expected and Actual scan responses are the same
2023/01/06 22:24:19 
--- NonOverlappingRanges ---
2023/01/06 22:24:19 distinct = true
2023/01/06 22:24:19 Using n1ql client
2023/01/06 22:24:19 Expected and Actual scan responses are the same
2023/01/06 22:24:19 
--- OverlappingRanges ---
2023/01/06 22:24:19 distinct = true
2023/01/06 22:24:19 Using n1ql client
2023/01/06 22:24:19 Expected and Actual scan responses are the same
2023/01/06 22:24:19 
--- NonOverlappingFilters ---
2023/01/06 22:24:19 distinct = true
2023/01/06 22:24:20 Using n1ql client
2023/01/06 22:24:20 Expected and Actual scan responses are the same
2023/01/06 22:24:20 
--- OverlappingFilters ---
2023/01/06 22:24:20 distinct = true
2023/01/06 22:24:20 Using n1ql client
2023/01/06 22:24:20 Expected and Actual scan responses are the same
2023/01/06 22:24:20 
--- BoundaryFilters ---
2023/01/06 22:24:20 distinct = true
2023/01/06 22:24:21 Using n1ql client
2023/01/06 22:24:21 Expected and Actual scan responses are the same
2023/01/06 22:24:21 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:24:21 distinct = true
2023/01/06 22:24:21 Using n1ql client
2023/01/06 22:24:21 Expected and Actual scan responses are the same
2023/01/06 22:24:21 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:24:21 distinct = true
2023/01/06 22:24:21 Using n1ql client
2023/01/06 22:24:21 Expected and Actual scan responses are the same
2023/01/06 22:24:21 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:24:21 distinct = false
2023/01/06 22:24:22 Using n1ql client
2023/01/06 22:24:22 Expected and Actual scan responses are the same
2023/01/06 22:24:22 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:24:22 distinct = false
2023/01/06 22:24:22 Using n1ql client
2023/01/06 22:24:22 Expected and Actual scan responses are the same
2023/01/06 22:24:22 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:24:22 distinct = false
2023/01/06 22:24:23 Using n1ql client
2023/01/06 22:24:23 Expected and Actual scan responses are the same
2023/01/06 22:24:23 
--- FiltersWithUnbounded ---
2023/01/06 22:24:23 distinct = false
2023/01/06 22:24:23 Using n1ql client
2023/01/06 22:24:23 Expected and Actual scan responses are the same
2023/01/06 22:24:23 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:24:23 distinct = false
2023/01/06 22:24:24 Using n1ql client
2023/01/06 22:24:24 Expected and Actual scan responses are the same
2023/01/06 22:24:24 

--------- Simple Index with 1 field ---------
2023/01/06 22:24:24 
--- SingleIndexSimpleRange ---
2023/01/06 22:24:24 distinct = true
2023/01/06 22:24:24 Using n1ql client
2023/01/06 22:24:24 Expected and Actual scan responses are the same
2023/01/06 22:24:24 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:24:24 distinct = true
2023/01/06 22:24:24 Using n1ql client
2023/01/06 22:24:24 Expected and Actual scan responses are the same
2023/01/06 22:24:24 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:24:24 distinct = true
2023/01/06 22:24:25 Using n1ql client
2023/01/06 22:24:25 Expected and Actual scan responses are the same
2023/01/06 22:24:25 

--------- Composite Index with 3 fields ---------
2023/01/06 22:24:25 
--- ScanAllNoFilter ---
2023/01/06 22:24:25 distinct = true
2023/01/06 22:24:25 Using n1ql client
2023/01/06 22:24:25 Expected and Actual scan responses are the same
2023/01/06 22:24:25 
--- ScanAllFilterNil ---
2023/01/06 22:24:25 distinct = true
2023/01/06 22:24:26 Using n1ql client
2023/01/06 22:24:26 Expected and Actual scan responses are the same
2023/01/06 22:24:26 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:24:26 distinct = true
2023/01/06 22:24:26 Using n1ql client
2023/01/06 22:24:26 Expected and Actual scan responses are the same
2023/01/06 22:24:26 
--- 3FieldsSingleSeek ---
2023/01/06 22:24:26 distinct = true
2023/01/06 22:24:27 Using n1ql client
2023/01/06 22:24:27 Expected and Actual scan responses are the same
2023/01/06 22:24:27 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:24:27 distinct = true
2023/01/06 22:24:28 Using n1ql client
2023/01/06 22:24:28 Expected and Actual scan responses are the same
2023/01/06 22:24:28 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:24:28 distinct = true
2023/01/06 22:24:28 Using n1ql client
2023/01/06 22:24:28 Expected and Actual scan responses are the same
--- PASS: TestMultiScanDistinct (12.59s)
=== RUN   TestMultiScanProjection
2023/01/06 22:24:28 In TestMultiScanProjection()
2023/01/06 22:24:28 

--------- Composite Index with 2 fields ---------
2023/01/06 22:24:28 
--- ScanAllNoFilter ---
2023/01/06 22:24:28 distinct = true
2023/01/06 22:24:29 Using n1ql client
2023/01/06 22:24:29 Expected and Actual scan responses are the same
2023/01/06 22:24:29 
--- ScanAllFilterNil ---
2023/01/06 22:24:29 distinct = true
2023/01/06 22:24:29 Using n1ql client
2023/01/06 22:24:29 Expected and Actual scan responses are the same
2023/01/06 22:24:29 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:24:29 distinct = true
2023/01/06 22:24:30 Using n1ql client
2023/01/06 22:24:30 Expected and Actual scan responses are the same
2023/01/06 22:24:30 
--- SingleSeek ---
2023/01/06 22:24:30 distinct = true
2023/01/06 22:24:30 Using n1ql client
2023/01/06 22:24:30 Expected and Actual scan responses are the same
2023/01/06 22:24:30 
--- MultipleSeek ---
2023/01/06 22:24:30 distinct = true
2023/01/06 22:24:31 Using n1ql client
2023/01/06 22:24:31 Expected and Actual scan responses are the same
2023/01/06 22:24:31 
--- SimpleRange ---
2023/01/06 22:24:31 distinct = true
2023/01/06 22:24:31 Using n1ql client
2023/01/06 22:24:31 Expected and Actual scan responses are the same
2023/01/06 22:24:31 
--- NonOverlappingRanges ---
2023/01/06 22:24:31 distinct = true
2023/01/06 22:24:31 Using n1ql client
2023/01/06 22:24:31 Expected and Actual scan responses are the same
2023/01/06 22:24:31 
--- OverlappingRanges ---
2023/01/06 22:24:31 distinct = true
2023/01/06 22:24:32 Using n1ql client
2023/01/06 22:24:32 Expected and Actual scan responses are the same
2023/01/06 22:24:32 
--- NonOverlappingFilters ---
2023/01/06 22:24:32 distinct = true
2023/01/06 22:24:32 Using n1ql client
2023/01/06 22:24:32 Expected and Actual scan responses are the same
2023/01/06 22:24:32 
--- OverlappingFilters ---
2023/01/06 22:24:32 distinct = true
2023/01/06 22:24:33 Using n1ql client
2023/01/06 22:24:33 Expected and Actual scan responses are the same
2023/01/06 22:24:33 
--- BoundaryFilters ---
2023/01/06 22:24:33 distinct = true
2023/01/06 22:24:33 Using n1ql client
2023/01/06 22:24:33 Expected and Actual scan responses are the same
2023/01/06 22:24:33 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:24:33 distinct = true
2023/01/06 22:24:34 Using n1ql client
2023/01/06 22:24:34 Expected and Actual scan responses are the same
2023/01/06 22:24:34 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:24:34 distinct = true
2023/01/06 22:24:34 Using n1ql client
2023/01/06 22:24:34 Expected and Actual scan responses are the same
2023/01/06 22:24:34 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:24:34 distinct = false
2023/01/06 22:24:34 Using n1ql client
2023/01/06 22:24:34 Expected and Actual scan responses are the same
2023/01/06 22:24:34 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:24:34 distinct = false
2023/01/06 22:24:35 Using n1ql client
2023/01/06 22:24:35 Expected and Actual scan responses are the same
2023/01/06 22:24:35 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:24:35 distinct = false
2023/01/06 22:24:35 Using n1ql client
2023/01/06 22:24:35 Expected and Actual scan responses are the same
2023/01/06 22:24:35 
--- FiltersWithUnbounded ---
2023/01/06 22:24:35 distinct = false
2023/01/06 22:24:36 Using n1ql client
2023/01/06 22:24:36 Expected and Actual scan responses are the same
2023/01/06 22:24:36 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:24:36 distinct = false
2023/01/06 22:24:36 Using n1ql client
2023/01/06 22:24:36 Expected and Actual scan responses are the same
2023/01/06 22:24:36 

--------- Simple Index with 1 field ---------
2023/01/06 22:24:36 
--- SingleIndexSimpleRange ---
2023/01/06 22:24:36 distinct = true
2023/01/06 22:24:36 Using n1ql client
2023/01/06 22:24:36 Expected and Actual scan responses are the same
2023/01/06 22:24:36 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:24:36 distinct = true
2023/01/06 22:24:37 Using n1ql client
2023/01/06 22:24:37 Expected and Actual scan responses are the same
2023/01/06 22:24:37 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:24:37 distinct = true
2023/01/06 22:24:37 Using n1ql client
2023/01/06 22:24:37 Expected and Actual scan responses are the same
2023/01/06 22:24:37 

--------- Composite Index with 3 fields ---------
2023/01/06 22:24:37 
--- ScanAllNoFilter ---
2023/01/06 22:24:37 distinct = true
2023/01/06 22:24:38 Using n1ql client
2023/01/06 22:24:38 Expected and Actual scan responses are the same
2023/01/06 22:24:38 
--- ScanAllFilterNil ---
2023/01/06 22:24:38 distinct = true
2023/01/06 22:24:38 Using n1ql client
2023/01/06 22:24:38 Expected and Actual scan responses are the same
2023/01/06 22:24:38 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:24:38 distinct = true
2023/01/06 22:24:39 Using n1ql client
2023/01/06 22:24:39 Expected and Actual scan responses are the same
2023/01/06 22:24:39 
--- 3FieldsSingleSeek ---
2023/01/06 22:24:39 distinct = true
2023/01/06 22:24:39 Using n1ql client
2023/01/06 22:24:39 Expected and Actual scan responses are the same
2023/01/06 22:24:39 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:24:39 distinct = true
2023/01/06 22:24:40 Using n1ql client
2023/01/06 22:24:40 Expected and Actual scan responses are the same
2023/01/06 22:24:40 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:24:40 distinct = true
2023/01/06 22:24:40 Using n1ql client
2023/01/06 22:24:40 Expected and Actual scan responses are the same
2023/01/06 22:24:40 indexes are: index_company, index_companyname, index_company_name_age, index_company_name_age_address, index_company_name_age_address_friends
2023/01/06 22:24:40 fields are: [company], [company name], [company name age], [company name age address], [company name age address friends]
2023/01/06 22:24:40 
--- SingleIndexProjectFirst ---
2023/01/06 22:24:40 distinct = true
2023/01/06 22:24:41 Using n1ql client
2023/01/06 22:24:41 Expected and Actual scan responses are the same
2023/01/06 22:24:41 
--- 2FieldIndexProjectSecond ---
2023/01/06 22:24:41 distinct = true
2023/01/06 22:24:41 Using n1ql client
2023/01/06 22:24:41 Expected and Actual scan responses are the same
2023/01/06 22:24:41 
--- 3FieldIndexProjectThird ---
2023/01/06 22:24:41 distinct = true
2023/01/06 22:24:41 Using n1ql client
2023/01/06 22:24:41 Expected and Actual scan responses are the same
2023/01/06 22:24:41 
--- 4FieldIndexProjectFourth ---
2023/01/06 22:24:41 distinct = true
2023/01/06 22:24:43 Using n1ql client
2023/01/06 22:24:43 Expected and Actual scan responses are the same
2023/01/06 22:24:43 
--- 5FieldIndexProjectFifth ---
2023/01/06 22:24:43 distinct = true
2023/01/06 22:24:47 Using n1ql client
2023/01/06 22:24:47 Expected and Actual scan responses are the same
2023/01/06 22:24:47 
--- 2FieldIndexProjectTwo ---
2023/01/06 22:24:47 distinct = true
2023/01/06 22:24:47 Using n1ql client
2023/01/06 22:24:47 Expected and Actual scan responses are the same
2023/01/06 22:24:47 
--- 3FieldIndexProjectTwo ---
2023/01/06 22:24:47 distinct = true
2023/01/06 22:24:48 Using n1ql client
2023/01/06 22:24:48 Expected and Actual scan responses are the same
2023/01/06 22:24:48 
--- 3FieldIndexProjectTwo ---
2023/01/06 22:24:48 distinct = true
2023/01/06 22:24:48 Using n1ql client
2023/01/06 22:24:48 Expected and Actual scan responses are the same
2023/01/06 22:24:48 
--- 3FieldIndexProjectTwo ---
2023/01/06 22:24:48 distinct = true
2023/01/06 22:24:49 Using n1ql client
2023/01/06 22:24:49 Expected and Actual scan responses are the same
2023/01/06 22:24:49 
--- 4FieldIndexProjectTwo ---
2023/01/06 22:24:49 distinct = true
2023/01/06 22:24:50 Using n1ql client
2023/01/06 22:24:50 Expected and Actual scan responses are the same
2023/01/06 22:24:50 
--- 4FieldIndexProjectTwo ---
2023/01/06 22:24:50 distinct = true
2023/01/06 22:24:52 Using n1ql client
2023/01/06 22:24:52 Expected and Actual scan responses are the same
2023/01/06 22:24:52 
--- 4FieldIndexProjectTwo ---
2023/01/06 22:24:52 distinct = true
2023/01/06 22:24:54 Using n1ql client
2023/01/06 22:24:54 Expected and Actual scan responses are the same
2023/01/06 22:24:54 
--- 4FieldIndexProjectTwo ---
2023/01/06 22:24:54 distinct = true
2023/01/06 22:24:55 Using n1ql client
2023/01/06 22:24:55 Expected and Actual scan responses are the same
2023/01/06 22:24:55 
--- 4FieldIndexProjectTwo ---
2023/01/06 22:24:55 distinct = true
2023/01/06 22:24:57 Using n1ql client
2023/01/06 22:24:57 Expected and Actual scan responses are the same
2023/01/06 22:24:57 
--- 5FieldIndexProjectTwo ---
2023/01/06 22:24:57 distinct = true
2023/01/06 22:25:01 Using n1ql client
2023/01/06 22:25:01 Expected and Actual scan responses are the same
2023/01/06 22:25:01 
--- 5FieldIndexProjectTwo ---
2023/01/06 22:25:01 distinct = true
2023/01/06 22:25:04 Using n1ql client
2023/01/06 22:25:04 Expected and Actual scan responses are the same
2023/01/06 22:25:04 
--- 5FieldIndexProjectTwo ---
2023/01/06 22:25:04 distinct = true
2023/01/06 22:25:08 Using n1ql client
2023/01/06 22:25:08 Expected and Actual scan responses are the same
2023/01/06 22:25:08 
--- 5FieldIndexProjectTwo ---
2023/01/06 22:25:08 distinct = true
2023/01/06 22:25:11 Using n1ql client
2023/01/06 22:25:12 Expected and Actual scan responses are the same
2023/01/06 22:25:12 
--- 5FieldIndexProjectThree ---
2023/01/06 22:25:12 distinct = true
2023/01/06 22:25:15 Using n1ql client
2023/01/06 22:25:15 Expected and Actual scan responses are the same
2023/01/06 22:25:15 
--- 5FieldIndexProjectFour ---
2023/01/06 22:25:15 distinct = true
2023/01/06 22:25:19 Using n1ql client
2023/01/06 22:25:19 Expected and Actual scan responses are the same
2023/01/06 22:25:19 
--- 5FieldIndexProjectAll ---
2023/01/06 22:25:19 distinct = true
2023/01/06 22:25:22 Using n1ql client
2023/01/06 22:25:22 Expected and Actual scan responses are the same
2023/01/06 22:25:22 
--- 5FieldIndexProjectAlternate ---
2023/01/06 22:25:22 distinct = true
2023/01/06 22:25:26 Using n1ql client
2023/01/06 22:25:26 Expected and Actual scan responses are the same
2023/01/06 22:25:26 
--- 5FieldIndexProjectEmptyEntryKeys ---
2023/01/06 22:25:26 distinct = true
2023/01/06 22:25:30 Using n1ql client
2023/01/06 22:25:30 Expected and Actual scan responses are the same
--- PASS: TestMultiScanProjection (62.01s)
=== RUN   TestMultiScanRestAPI
2023/01/06 22:25:30 In TestMultiScanRestAPI()
2023/01/06 22:25:30 In DropAllSecondaryIndexes()
2023/01/06 22:25:30 Index found:  index_company_name_age_address
2023/01/06 22:25:31 Dropped index index_company_name_age_address
2023/01/06 22:25:31 Index found:  index_company_name_age_address_friends
2023/01/06 22:25:31 Dropped index index_company_name_age_address_friends
2023/01/06 22:25:31 Index found:  index_primary
2023/01/06 22:25:31 Dropped index index_primary
2023/01/06 22:25:31 Index found:  addressidx
2023/01/06 22:25:31 Dropped index addressidx
2023/01/06 22:25:31 Index found:  index_companyname
2023/01/06 22:25:31 Dropped index index_companyname
2023/01/06 22:25:31 Index found:  index_company
2023/01/06 22:25:31 Dropped index index_company
2023/01/06 22:25:31 Index found:  index_company_name_age
2023/01/06 22:25:31 Dropped index index_company_name_age
2023/01/06 22:25:34 Created the secondary index index_companyname. Waiting for it become active
2023/01/06 22:25:34 Index is 17550871465437690956 now active
2023/01/06 22:25:35 GET all indexes
2023/01/06 22:25:35 200 OK
2023/01/06 22:25:35 getscans status : 200 OK
2023/01/06 22:25:35 number of entries 337
2023/01/06 22:25:35 Status : 200 OK
2023/01/06 22:25:35 Result from multiscancount API = 0
--- PASS: TestMultiScanRestAPI (4.25s)
=== RUN   TestMultiScanPrimaryIndexVariations
2023/01/06 22:25:35 In TestMultiScanPrimaryIndexVariations()
2023/01/06 22:25:42 Created the secondary index index_pi. Waiting for it become active
2023/01/06 22:25:42 Index is 2290479932044572010 now active
2023/01/06 22:25:42 
--- No Overlap ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Proper Overlap ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Low Boundary Overlap ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Complex Overlaps ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Multiple Equal Overlaps ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Boundary and Subset Overlaps ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Point Overlaps ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Boundary and Point Overlaps ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 
--- Primary index range null ---
2023/01/06 22:25:42 Using n1ql client
2023/01/06 22:25:42 Expected and Actual scan responses are the same
2023/01/06 22:25:42 Dropping the secondary index index_pi
2023/01/06 22:25:42 Index dropped
--- PASS: TestMultiScanPrimaryIndexVariations (7.16s)
=== RUN   TestMultiScanDescSetup
2023/01/06 22:25:42 In TestMultiScanDescSetup()
2023/01/06 22:25:42 In DropAllSecondaryIndexes()
2023/01/06 22:25:42 Index found:  index_companyname
2023/01/06 22:25:42 Dropped index index_companyname
2023/01/06 22:25:48 Created the secondary index index_companyname_desc. Waiting for it become active
2023/01/06 22:25:48 Index is 894791401607657894 now active
2023/01/06 22:25:54 Created the secondary index index_company_desc. Waiting for it become active
2023/01/06 22:25:54 Index is 15966441105209017428 now active
2023/01/06 22:26:01 Created the secondary index index_company_name_age_desc. Waiting for it become active
2023/01/06 22:26:01 Index is 4894508137107083251 now active
--- PASS: TestMultiScanDescSetup (19.07s)
=== RUN   TestMultiScanDescScenarios
2023/01/06 22:26:01 In TestMultiScanDescScenarios()
2023/01/06 22:26:01 

--------- Composite Index with 2 fields ---------
2023/01/06 22:26:01 
--- ScanAllNoFilter ---
2023/01/06 22:26:01 distinct = false
2023/01/06 22:26:01 Using n1ql client
2023/01/06 22:26:01 Expected and Actual scan responses are the same
2023/01/06 22:26:01 
--- ScanAllFilterNil ---
2023/01/06 22:26:01 distinct = false
2023/01/06 22:26:02 Using n1ql client
2023/01/06 22:26:02 Expected and Actual scan responses are the same
2023/01/06 22:26:02 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:02 distinct = false
2023/01/06 22:26:02 Using n1ql client
2023/01/06 22:26:03 Expected and Actual scan responses are the same
2023/01/06 22:26:03 
--- SingleSeek ---
2023/01/06 22:26:03 distinct = false
2023/01/06 22:26:03 Using n1ql client
2023/01/06 22:26:03 Expected and Actual scan responses are the same
2023/01/06 22:26:03 
--- MultipleSeek ---
2023/01/06 22:26:03 distinct = false
2023/01/06 22:26:03 Using n1ql client
2023/01/06 22:26:03 Expected and Actual scan responses are the same
2023/01/06 22:26:03 
--- SimpleRange ---
2023/01/06 22:26:03 distinct = false
2023/01/06 22:26:04 Using n1ql client
2023/01/06 22:26:04 Expected and Actual scan responses are the same
2023/01/06 22:26:04 
--- NonOverlappingRanges ---
2023/01/06 22:26:04 distinct = false
2023/01/06 22:26:04 Using n1ql client
2023/01/06 22:26:04 Expected and Actual scan responses are the same
2023/01/06 22:26:04 
--- OverlappingRanges ---
2023/01/06 22:26:04 distinct = false
2023/01/06 22:26:05 Using n1ql client
2023/01/06 22:26:05 Expected and Actual scan responses are the same
2023/01/06 22:26:05 
--- NonOverlappingFilters ---
2023/01/06 22:26:05 distinct = false
2023/01/06 22:26:05 Using n1ql client
2023/01/06 22:26:05 Expected and Actual scan responses are the same
2023/01/06 22:26:05 
--- OverlappingFilters ---
2023/01/06 22:26:05 distinct = false
2023/01/06 22:26:06 Using n1ql client
2023/01/06 22:26:06 Expected and Actual scan responses are the same
2023/01/06 22:26:06 
--- BoundaryFilters ---
2023/01/06 22:26:06 distinct = false
2023/01/06 22:26:06 Using n1ql client
2023/01/06 22:26:06 Expected and Actual scan responses are the same
2023/01/06 22:26:06 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:26:06 distinct = false
2023/01/06 22:26:06 Using n1ql client
2023/01/06 22:26:06 Expected and Actual scan responses are the same
2023/01/06 22:26:06 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:26:06 distinct = false
2023/01/06 22:26:07 Using n1ql client
2023/01/06 22:26:07 Expected and Actual scan responses are the same
2023/01/06 22:26:07 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:26:07 distinct = false
2023/01/06 22:26:07 Using n1ql client
2023/01/06 22:26:07 Expected and Actual scan responses are the same
2023/01/06 22:26:07 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:26:07 distinct = false
2023/01/06 22:26:08 Using n1ql client
2023/01/06 22:26:08 Expected and Actual scan responses are the same
2023/01/06 22:26:08 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:26:08 distinct = false
2023/01/06 22:26:08 Using n1ql client
2023/01/06 22:26:08 Expected and Actual scan responses are the same
2023/01/06 22:26:08 
--- FiltersWithUnbounded ---
2023/01/06 22:26:08 distinct = false
2023/01/06 22:26:09 Using n1ql client
2023/01/06 22:26:09 Expected and Actual scan responses are the same
2023/01/06 22:26:09 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:26:09 distinct = false
2023/01/06 22:26:09 Using n1ql client
2023/01/06 22:26:09 Expected and Actual scan responses are the same
2023/01/06 22:26:09 

--------- Simple Index with 1 field ---------
2023/01/06 22:26:09 
--- SingleIndexSimpleRange ---
2023/01/06 22:26:09 distinct = false
2023/01/06 22:26:09 Using n1ql client
2023/01/06 22:26:09 Expected and Actual scan responses are the same
2023/01/06 22:26:09 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:26:09 distinct = false
2023/01/06 22:26:10 Using n1ql client
2023/01/06 22:26:10 Expected and Actual scan responses are the same
2023/01/06 22:26:10 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:26:10 distinct = false
2023/01/06 22:26:10 Using n1ql client
2023/01/06 22:26:10 Expected and Actual scan responses are the same
2023/01/06 22:26:10 

--------- Composite Index with 3 fields ---------
2023/01/06 22:26:10 
--- ScanAllNoFilter ---
2023/01/06 22:26:10 distinct = false
2023/01/06 22:26:11 Using n1ql client
2023/01/06 22:26:11 Expected and Actual scan responses are the same
2023/01/06 22:26:11 
--- ScanAllFilterNil ---
2023/01/06 22:26:11 distinct = false
2023/01/06 22:26:11 Using n1ql client
2023/01/06 22:26:11 Expected and Actual scan responses are the same
2023/01/06 22:26:11 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:11 distinct = false
2023/01/06 22:26:12 Using n1ql client
2023/01/06 22:26:12 Expected and Actual scan responses are the same
2023/01/06 22:26:12 
--- 3FieldsSingleSeek ---
2023/01/06 22:26:12 distinct = false
2023/01/06 22:26:12 Using n1ql client
2023/01/06 22:26:12 Expected and Actual scan responses are the same
2023/01/06 22:26:12 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:26:12 distinct = false
2023/01/06 22:26:13 Using n1ql client
2023/01/06 22:26:13 Expected and Actual scan responses are the same
2023/01/06 22:26:13 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:26:13 distinct = false
2023/01/06 22:26:13 Using n1ql client
2023/01/06 22:26:13 Expected and Actual scan responses are the same
2023/01/06 22:26:13 

--------- New scenarios ---------
2023/01/06 22:26:13 
--- CompIndexHighUnbounded1 ---
2023/01/06 22:26:13 
--- Multi Scan 0 ---
2023/01/06 22:26:13 distinct = false
2023/01/06 22:26:13 Using n1ql client
2023/01/06 22:26:13 Expected and Actual scan responses are the same
2023/01/06 22:26:13 
--- Multi Scan 1 ---
2023/01/06 22:26:13 distinct = false
2023/01/06 22:26:14 Using n1ql client
2023/01/06 22:26:14 Expected and Actual scan responses are the same
2023/01/06 22:26:14 
--- Multi Scan 2 ---
2023/01/06 22:26:14 distinct = false
2023/01/06 22:26:14 Using n1ql client
2023/01/06 22:26:14 Expected and Actual scan responses are the same
2023/01/06 22:26:14 
--- CompIndexHighUnbounded2 ---
2023/01/06 22:26:14 
--- Multi Scan 0 ---
2023/01/06 22:26:14 distinct = false
2023/01/06 22:26:15 Using n1ql client
2023/01/06 22:26:15 Expected and Actual scan responses are the same
2023/01/06 22:26:15 
--- Multi Scan 1 ---
2023/01/06 22:26:15 distinct = false
2023/01/06 22:26:15 Using n1ql client
2023/01/06 22:26:15 Expected and Actual scan responses are the same
2023/01/06 22:26:15 
--- Multi Scan 2 ---
2023/01/06 22:26:15 distinct = false
2023/01/06 22:26:16 Using n1ql client
2023/01/06 22:26:16 Expected and Actual scan responses are the same
2023/01/06 22:26:16 
--- CompIndexHighUnbounded3 ---
2023/01/06 22:26:16 
--- Multi Scan 0 ---
2023/01/06 22:26:16 distinct = false
2023/01/06 22:26:16 Using n1ql client
2023/01/06 22:26:16 Expected and Actual scan responses are the same
2023/01/06 22:26:16 
--- CompIndexHighUnbounded4 ---
2023/01/06 22:26:16 
--- Multi Scan 0 ---
2023/01/06 22:26:16 distinct = false
2023/01/06 22:26:16 Using n1ql client
2023/01/06 22:26:16 Expected and Actual scan responses are the same
2023/01/06 22:26:16 
--- CompIndexHighUnbounded5 ---
2023/01/06 22:26:16 
--- Multi Scan 0 ---
2023/01/06 22:26:16 distinct = false
2023/01/06 22:26:17 Using n1ql client
2023/01/06 22:26:17 Expected and Actual scan responses are the same
2023/01/06 22:26:17 
--- SeekBoundaries ---
2023/01/06 22:26:17 
--- Multi Scan 0 ---
2023/01/06 22:26:17 distinct = false
2023/01/06 22:26:17 Using n1ql client
2023/01/06 22:26:17 Expected and Actual scan responses are the same
2023/01/06 22:26:17 
--- Multi Scan 1 ---
2023/01/06 22:26:17 distinct = false
2023/01/06 22:26:18 Using n1ql client
2023/01/06 22:26:18 Expected and Actual scan responses are the same
2023/01/06 22:26:18 
--- Multi Scan 2 ---
2023/01/06 22:26:18 distinct = false
2023/01/06 22:26:18 Using n1ql client
2023/01/06 22:26:18 Expected and Actual scan responses are the same
2023/01/06 22:26:18 
--- Multi Scan 3 ---
2023/01/06 22:26:18 distinct = false
2023/01/06 22:26:19 Using n1ql client
2023/01/06 22:26:19 Expected and Actual scan responses are the same
2023/01/06 22:26:19 
--- Multi Scan 4 ---
2023/01/06 22:26:19 distinct = false
2023/01/06 22:26:19 Using n1ql client
2023/01/06 22:26:19 Expected and Actual scan responses are the same
2023/01/06 22:26:19 
--- Multi Scan 5 ---
2023/01/06 22:26:19 distinct = false
2023/01/06 22:26:20 Using n1ql client
2023/01/06 22:26:20 Expected and Actual scan responses are the same
2023/01/06 22:26:20 
--- Multi Scan 6 ---
2023/01/06 22:26:20 distinct = false
2023/01/06 22:26:20 Using n1ql client
2023/01/06 22:26:20 Expected and Actual scan responses are the same
2023/01/06 22:26:20 
--- Multi Scan 7 ---
2023/01/06 22:26:20 distinct = false
2023/01/06 22:26:20 Using n1ql client
2023/01/06 22:26:20 Expected and Actual scan responses are the same
--- PASS: TestMultiScanDescScenarios (19.50s)
=== RUN   TestMultiScanDescCount
2023/01/06 22:26:20 In TestMultiScanDescCount()
2023/01/06 22:26:20 

--------- Composite Index with 2 fields ---------
2023/01/06 22:26:20 
--- ScanAllNoFilter ---
2023/01/06 22:26:20 distinct = false
2023/01/06 22:26:21 Using n1ql client
2023/01/06 22:26:21 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:21 
--- ScanAllFilterNil ---
2023/01/06 22:26:21 distinct = false
2023/01/06 22:26:21 Using n1ql client
2023/01/06 22:26:21 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:21 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:21 distinct = false
2023/01/06 22:26:22 Using n1ql client
2023/01/06 22:26:22 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:22 
--- SingleSeek ---
2023/01/06 22:26:22 distinct = false
2023/01/06 22:26:22 Using n1ql client
2023/01/06 22:26:22 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:26:22 
--- MultipleSeek ---
2023/01/06 22:26:22 distinct = false
2023/01/06 22:26:22 Using n1ql client
2023/01/06 22:26:22 MultiScanCount = 2 ExpectedMultiScanCount = 2
2023/01/06 22:26:22 
--- SimpleRange ---
2023/01/06 22:26:22 distinct = false
2023/01/06 22:26:23 Using n1ql client
2023/01/06 22:26:23 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2023/01/06 22:26:23 
--- NonOverlappingRanges ---
2023/01/06 22:26:23 distinct = false
2023/01/06 22:26:23 Using n1ql client
2023/01/06 22:26:23 MultiScanCount = 4283 ExpectedMultiScanCount = 4283
2023/01/06 22:26:23 
--- OverlappingRanges ---
2023/01/06 22:26:23 distinct = false
2023/01/06 22:26:24 Using n1ql client
2023/01/06 22:26:24 MultiScanCount = 5756 ExpectedMultiScanCount = 5756
2023/01/06 22:26:24 
--- NonOverlappingFilters ---
2023/01/06 22:26:24 distinct = false
2023/01/06 22:26:24 Using n1ql client
2023/01/06 22:26:24 MultiScanCount = 337 ExpectedMultiScanCount = 337
2023/01/06 22:26:24 
--- OverlappingFilters ---
2023/01/06 22:26:24 distinct = false
2023/01/06 22:26:25 Using n1ql client
2023/01/06 22:26:25 MultiScanCount = 2559 ExpectedMultiScanCount = 2559
2023/01/06 22:26:25 
--- BoundaryFilters ---
2023/01/06 22:26:25 distinct = false
2023/01/06 22:26:25 Using n1ql client
2023/01/06 22:26:25 MultiScanCount = 499 ExpectedMultiScanCount = 499
2023/01/06 22:26:25 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:26:25 distinct = false
2023/01/06 22:26:25 Using n1ql client
2023/01/06 22:26:25 MultiScanCount = 256 ExpectedMultiScanCount = 256
2023/01/06 22:26:25 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:26:25 distinct = false
2023/01/06 22:26:26 Using n1ql client
2023/01/06 22:26:26 MultiScanCount = 255 ExpectedMultiScanCount = 255
2023/01/06 22:26:26 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:26:26 distinct = false
2023/01/06 22:26:26 Using n1ql client
2023/01/06 22:26:26 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2023/01/06 22:26:26 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:26:26 distinct = false
2023/01/06 22:26:27 Using n1ql client
2023/01/06 22:26:27 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2023/01/06 22:26:27 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:26:27 distinct = false
2023/01/06 22:26:28 Using n1ql client
2023/01/06 22:26:28 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:28 
--- FiltersWithUnbounded ---
2023/01/06 22:26:28 distinct = false
2023/01/06 22:26:28 Using n1ql client
2023/01/06 22:26:28 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2023/01/06 22:26:28 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:26:28 distinct = false
2023/01/06 22:26:29 Using n1ql client
2023/01/06 22:26:29 MultiScanCount = 418 ExpectedMultiScanCount = 418
2023/01/06 22:26:29 

--------- Simple Index with 1 field ---------
2023/01/06 22:26:29 
--- SingleIndexSimpleRange ---
2023/01/06 22:26:29 distinct = false
2023/01/06 22:26:29 Using n1ql client
2023/01/06 22:26:29 MultiScanCount = 2273 ExpectedMultiScanCount = 2273
2023/01/06 22:26:29 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:26:29 distinct = false
2023/01/06 22:26:29 Using n1ql client
2023/01/06 22:26:29 MultiScanCount = 7140 ExpectedMultiScanCount = 7140
2023/01/06 22:26:29 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:26:29 distinct = false
2023/01/06 22:26:30 Using n1ql client
2023/01/06 22:26:30 MultiScanCount = 8701 ExpectedMultiScanCount = 8701
2023/01/06 22:26:30 

--------- Composite Index with 3 fields ---------
2023/01/06 22:26:30 
--- ScanAllNoFilter ---
2023/01/06 22:26:30 distinct = false
2023/01/06 22:26:30 Using n1ql client
2023/01/06 22:26:30 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:30 
--- ScanAllFilterNil ---
2023/01/06 22:26:30 distinct = false
2023/01/06 22:26:31 Using n1ql client
2023/01/06 22:26:31 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:31 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:31 distinct = false
2023/01/06 22:26:31 Using n1ql client
2023/01/06 22:26:31 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:31 
--- 3FieldsSingleSeek ---
2023/01/06 22:26:31 distinct = false
2023/01/06 22:26:32 Using n1ql client
2023/01/06 22:26:32 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:26:32 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:26:32 distinct = false
2023/01/06 22:26:32 Using n1ql client
2023/01/06 22:26:32 MultiScanCount = 3 ExpectedMultiScanCount = 3
2023/01/06 22:26:32 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:26:32 distinct = false
2023/01/06 22:26:33 Using n1ql client
2023/01/06 22:26:33 MultiScanCount = 2 ExpectedMultiScanCount = 2
2023/01/06 22:26:33 

--------- New scenarios ---------
2023/01/06 22:26:33 
--- CompIndexHighUnbounded1 ---
2023/01/06 22:26:33 
--- Multi Scan 0 ---
2023/01/06 22:26:33 distinct = false
2023/01/06 22:26:33 Using n1ql client
2023/01/06 22:26:33 Using n1ql client
2023/01/06 22:26:33 len(scanResults) = 8 MultiScanCount = 8
2023/01/06 22:26:33 Expected and Actual scan responses are the same
2023/01/06 22:26:33 
--- Multi Scan 1 ---
2023/01/06 22:26:33 distinct = false
2023/01/06 22:26:33 Using n1ql client
2023/01/06 22:26:33 Using n1ql client
2023/01/06 22:26:33 len(scanResults) = 0 MultiScanCount = 0
2023/01/06 22:26:33 Expected and Actual scan responses are the same
2023/01/06 22:26:33 
--- Multi Scan 2 ---
2023/01/06 22:26:33 distinct = false
2023/01/06 22:26:34 Using n1ql client
2023/01/06 22:26:34 Using n1ql client
2023/01/06 22:26:34 len(scanResults) = 9 MultiScanCount = 9
2023/01/06 22:26:34 Expected and Actual scan responses are the same
2023/01/06 22:26:34 
--- CompIndexHighUnbounded2 ---
2023/01/06 22:26:34 
--- Multi Scan 0 ---
2023/01/06 22:26:34 distinct = false
2023/01/06 22:26:34 Using n1ql client
2023/01/06 22:26:34 Using n1ql client
2023/01/06 22:26:34 len(scanResults) = 4138 MultiScanCount = 4138
2023/01/06 22:26:34 Expected and Actual scan responses are the same
2023/01/06 22:26:34 
--- Multi Scan 1 ---
2023/01/06 22:26:34 distinct = false
2023/01/06 22:26:35 Using n1ql client
2023/01/06 22:26:35 Using n1ql client
2023/01/06 22:26:35 len(scanResults) = 2746 MultiScanCount = 2746
2023/01/06 22:26:35 Expected and Actual scan responses are the same
2023/01/06 22:26:35 
--- Multi Scan 2 ---
2023/01/06 22:26:35 distinct = false
2023/01/06 22:26:35 Using n1ql client
2023/01/06 22:26:35 Using n1ql client
2023/01/06 22:26:35 len(scanResults) = 4691 MultiScanCount = 4691
2023/01/06 22:26:35 Expected and Actual scan responses are the same
2023/01/06 22:26:35 
--- CompIndexHighUnbounded3 ---
2023/01/06 22:26:35 
--- Multi Scan 0 ---
2023/01/06 22:26:35 distinct = false
2023/01/06 22:26:36 Using n1ql client
2023/01/06 22:26:36 Using n1ql client
2023/01/06 22:26:36 len(scanResults) = 1329 MultiScanCount = 1329
2023/01/06 22:26:36 Expected and Actual scan responses are the same
2023/01/06 22:26:36 
--- CompIndexHighUnbounded4 ---
2023/01/06 22:26:36 
--- Multi Scan 0 ---
2023/01/06 22:26:36 distinct = false
2023/01/06 22:26:36 Using n1ql client
2023/01/06 22:26:36 Using n1ql client
2023/01/06 22:26:36 len(scanResults) = 5349 MultiScanCount = 5349
2023/01/06 22:26:36 Expected and Actual scan responses are the same
2023/01/06 22:26:36 
--- CompIndexHighUnbounded5 ---
2023/01/06 22:26:36 
--- Multi Scan 0 ---
2023/01/06 22:26:36 distinct = false
2023/01/06 22:26:36 Using n1ql client
2023/01/06 22:26:36 Using n1ql client
2023/01/06 22:26:36 len(scanResults) = 8210 MultiScanCount = 8210
2023/01/06 22:26:37 Expected and Actual scan responses are the same
2023/01/06 22:26:37 
--- SeekBoundaries ---
2023/01/06 22:26:37 
--- Multi Scan 0 ---
2023/01/06 22:26:37 distinct = false
2023/01/06 22:26:37 Using n1ql client
2023/01/06 22:26:37 Using n1ql client
2023/01/06 22:26:37 len(scanResults) = 175 MultiScanCount = 175
2023/01/06 22:26:37 Expected and Actual scan responses are the same
2023/01/06 22:26:37 
--- Multi Scan 1 ---
2023/01/06 22:26:37 distinct = false
2023/01/06 22:26:37 Using n1ql client
2023/01/06 22:26:37 Using n1ql client
2023/01/06 22:26:37 len(scanResults) = 1 MultiScanCount = 1
2023/01/06 22:26:37 Expected and Actual scan responses are the same
2023/01/06 22:26:37 
--- Multi Scan 2 ---
2023/01/06 22:26:37 distinct = false
2023/01/06 22:26:38 Using n1ql client
2023/01/06 22:26:38 Using n1ql client
2023/01/06 22:26:38 len(scanResults) = 555 MultiScanCount = 555
2023/01/06 22:26:38 Expected and Actual scan responses are the same
2023/01/06 22:26:38 
--- Multi Scan 3 ---
2023/01/06 22:26:38 distinct = false
2023/01/06 22:26:38 Using n1ql client
2023/01/06 22:26:38 Using n1ql client
2023/01/06 22:26:38 len(scanResults) = 872 MultiScanCount = 872
2023/01/06 22:26:38 Expected and Actual scan responses are the same
2023/01/06 22:26:38 
--- Multi Scan 4 ---
2023/01/06 22:26:38 distinct = false
2023/01/06 22:26:39 Using n1ql client
2023/01/06 22:26:39 Using n1ql client
2023/01/06 22:26:39 len(scanResults) = 287 MultiScanCount = 287
2023/01/06 22:26:39 Expected and Actual scan responses are the same
2023/01/06 22:26:39 
--- Multi Scan 5 ---
2023/01/06 22:26:39 distinct = false
2023/01/06 22:26:39 Using n1ql client
2023/01/06 22:26:39 Using n1ql client
2023/01/06 22:26:39 len(scanResults) = 5254 MultiScanCount = 5254
2023/01/06 22:26:39 Expected and Actual scan responses are the same
2023/01/06 22:26:39 
--- Multi Scan 6 ---
2023/01/06 22:26:39 distinct = false
2023/01/06 22:26:39 Using n1ql client
2023/01/06 22:26:40 Using n1ql client
2023/01/06 22:26:40 len(scanResults) = 5566 MultiScanCount = 5566
2023/01/06 22:26:40 Expected and Actual scan responses are the same
2023/01/06 22:26:40 
--- Multi Scan 7 ---
2023/01/06 22:26:40 distinct = false
2023/01/06 22:26:40 Using n1ql client
2023/01/06 22:26:40 Using n1ql client
2023/01/06 22:26:40 len(scanResults) = 8 MultiScanCount = 8
2023/01/06 22:26:40 Expected and Actual scan responses are the same
2023/01/06 22:26:40 

--------- With DISTINCT True ---------
2023/01/06 22:26:40 
--- ScanAllNoFilter ---
2023/01/06 22:26:40 distinct = true
2023/01/06 22:26:40 Using n1ql client
2023/01/06 22:26:40 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:26:40 
--- ScanAllFilterNil ---
2023/01/06 22:26:40 distinct = true
2023/01/06 22:26:41 Using n1ql client
2023/01/06 22:26:41 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:26:41 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:41 distinct = true
2023/01/06 22:26:41 Using n1ql client
2023/01/06 22:26:41 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:26:41 
--- SingleSeek ---
2023/01/06 22:26:41 distinct = true
2023/01/06 22:26:42 Using n1ql client
2023/01/06 22:26:42 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:26:42 
--- MultipleSeek ---
2023/01/06 22:26:42 distinct = true
2023/01/06 22:26:42 Using n1ql client
2023/01/06 22:26:42 MultiScanCount = 2 ExpectedMultiScanCount = 2
2023/01/06 22:26:42 
--- SimpleRange ---
2023/01/06 22:26:42 distinct = true
2023/01/06 22:26:42 Using n1ql client
2023/01/06 22:26:42 MultiScanCount = 227 ExpectedMultiScanCount = 227
2023/01/06 22:26:42 
--- NonOverlappingRanges ---
2023/01/06 22:26:42 distinct = true
2023/01/06 22:26:43 Using n1ql client
2023/01/06 22:26:43 MultiScanCount = 428 ExpectedMultiScanCount = 428
2023/01/06 22:26:43 
--- NonOverlappingFilters2 ---
2023/01/06 22:26:43 distinct = true
2023/01/06 22:26:43 Using n1ql client
2023/01/06 22:26:43 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:26:43 
--- OverlappingRanges ---
2023/01/06 22:26:43 distinct = true
2023/01/06 22:26:44 Using n1ql client
2023/01/06 22:26:44 MultiScanCount = 575 ExpectedMultiScanCount = 575
2023/01/06 22:26:44 
--- NonOverlappingFilters ---
2023/01/06 22:26:44 distinct = true
2023/01/06 22:26:44 Using n1ql client
2023/01/06 22:26:44 MultiScanCount = 186 ExpectedMultiScanCount = 186
2023/01/06 22:26:44 
--- OverlappingFilters ---
2023/01/06 22:26:44 distinct = true
2023/01/06 22:26:44 Using n1ql client
2023/01/06 22:26:44 MultiScanCount = 543 ExpectedMultiScanCount = 543
2023/01/06 22:26:44 
--- BoundaryFilters ---
2023/01/06 22:26:44 distinct = true
2023/01/06 22:26:45 Using n1ql client
2023/01/06 22:26:45 MultiScanCount = 172 ExpectedMultiScanCount = 172
2023/01/06 22:26:45 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:26:45 distinct = true
2023/01/06 22:26:45 Using n1ql client
2023/01/06 22:26:45 MultiScanCount = 135 ExpectedMultiScanCount = 135
2023/01/06 22:26:45 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:26:45 distinct = true
2023/01/06 22:26:46 Using n1ql client
2023/01/06 22:26:46 MultiScanCount = 134 ExpectedMultiScanCount = 134
2023/01/06 22:26:46 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:26:46 distinct = false
2023/01/06 22:26:46 Using n1ql client
2023/01/06 22:26:46 MultiScanCount = 5618 ExpectedMultiScanCount = 5618
2023/01/06 22:26:46 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:26:46 distinct = false
2023/01/06 22:26:47 Using n1ql client
2023/01/06 22:26:47 MultiScanCount = 3704 ExpectedMultiScanCount = 3704
2023/01/06 22:26:47 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:26:47 distinct = false
2023/01/06 22:26:47 Using n1ql client
2023/01/06 22:26:47 MultiScanCount = 10002 ExpectedMultiScanCount = 10002
2023/01/06 22:26:47 
--- FiltersWithUnbounded ---
2023/01/06 22:26:47 distinct = false
2023/01/06 22:26:47 Using n1ql client
2023/01/06 22:26:47 MultiScanCount = 3173 ExpectedMultiScanCount = 3173
2023/01/06 22:26:47 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:26:47 distinct = false
2023/01/06 22:26:48 Using n1ql client
2023/01/06 22:26:48 MultiScanCount = 418 ExpectedMultiScanCount = 418
2023/01/06 22:26:48 

--------- Simple Index with 1 field ---------
2023/01/06 22:26:48 
--- SingleIndexSimpleRange ---
2023/01/06 22:26:48 distinct = true
2023/01/06 22:26:48 Using n1ql client
2023/01/06 22:26:48 MultiScanCount = 227 ExpectedMultiScanCount = 227
2023/01/06 22:26:48 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:26:48 distinct = true
2023/01/06 22:26:48 Using n1ql client
2023/01/06 22:26:48 MultiScanCount = 713 ExpectedMultiScanCount = 713
2023/01/06 22:26:48 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:26:48 distinct = true
2023/01/06 22:26:49 Using n1ql client
2023/01/06 22:26:49 MultiScanCount = 869 ExpectedMultiScanCount = 869
2023/01/06 22:26:49 

--------- Composite Index with 3 fields ---------
2023/01/06 22:26:49 
--- ScanAllNoFilter ---
2023/01/06 22:26:49 distinct = true
2023/01/06 22:26:49 Using n1ql client
2023/01/06 22:26:49 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:26:49 
--- ScanAllFilterNil ---
2023/01/06 22:26:49 distinct = true
2023/01/06 22:26:50 Using n1ql client
2023/01/06 22:26:50 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:26:50 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:50 distinct = true
2023/01/06 22:26:50 Using n1ql client
2023/01/06 22:26:50 MultiScanCount = 999 ExpectedMultiScanCount = 999
2023/01/06 22:26:50 
--- 3FieldsSingleSeek ---
2023/01/06 22:26:50 distinct = true
2023/01/06 22:26:51 Using n1ql client
2023/01/06 22:26:51 MultiScanCount = 1 ExpectedMultiScanCount = 1
2023/01/06 22:26:51 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:26:51 distinct = true
2023/01/06 22:26:51 Using n1ql client
2023/01/06 22:26:51 MultiScanCount = 3 ExpectedMultiScanCount = 3
2023/01/06 22:26:51 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:26:51 distinct = true
2023/01/06 22:26:52 Using n1ql client
2023/01/06 22:26:52 MultiScanCount = 2 ExpectedMultiScanCount = 2
--- PASS: TestMultiScanDescCount (31.30s)
=== RUN   TestMultiScanDescOffset
2023/01/06 22:26:52 In SkipTestMultiScanDescOffset()
2023/01/06 22:26:52 

--------- Composite Index with 2 fields ---------
2023/01/06 22:26:52 
--- ScanAllNoFilter ---
2023/01/06 22:26:52 distinct = false
2023/01/06 22:26:52 Using n1ql client
2023/01/06 22:26:52 
--- ScanAllFilterNil ---
2023/01/06 22:26:52 distinct = false
2023/01/06 22:26:53 Using n1ql client
2023/01/06 22:26:53 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:26:53 distinct = false
2023/01/06 22:26:53 Using n1ql client
2023/01/06 22:26:53 
--- SingleSeek ---
2023/01/06 22:26:53 distinct = false
2023/01/06 22:26:53 Using n1ql client
2023/01/06 22:26:53 
--- MultipleSeek ---
2023/01/06 22:26:53 distinct = false
2023/01/06 22:26:54 Using n1ql client
2023/01/06 22:26:54 
--- SimpleRange ---
2023/01/06 22:26:54 distinct = false
2023/01/06 22:26:54 Using n1ql client
2023/01/06 22:26:54 
--- NonOverlappingRanges ---
2023/01/06 22:26:54 distinct = false
2023/01/06 22:26:55 Using n1ql client
2023/01/06 22:26:55 
--- OverlappingRanges ---
2023/01/06 22:26:55 distinct = false
2023/01/06 22:26:55 Using n1ql client
2023/01/06 22:26:55 
--- NonOverlappingFilters ---
2023/01/06 22:26:55 distinct = false
2023/01/06 22:26:55 Using n1ql client
2023/01/06 22:26:55 
--- OverlappingFilters ---
2023/01/06 22:26:55 distinct = false
2023/01/06 22:26:56 Using n1ql client
2023/01/06 22:26:56 
--- BoundaryFilters ---
2023/01/06 22:26:56 distinct = false
2023/01/06 22:26:56 Using n1ql client
2023/01/06 22:26:56 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:26:56 distinct = false
2023/01/06 22:26:57 Using n1ql client
2023/01/06 22:26:57 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:26:57 distinct = false
2023/01/06 22:26:57 Using n1ql client
2023/01/06 22:26:57 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:26:57 distinct = false
2023/01/06 22:26:58 Using n1ql client
2023/01/06 22:26:58 Expected and Actual scan responses are the same
2023/01/06 22:26:58 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:26:58 distinct = false
2023/01/06 22:26:58 Using n1ql client
2023/01/06 22:26:58 Expected and Actual scan responses are the same
2023/01/06 22:26:58 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:26:58 distinct = false
2023/01/06 22:26:58 Using n1ql client
2023/01/06 22:26:58 Expected and Actual scan responses are the same
2023/01/06 22:26:58 
--- FiltersWithUnbounded ---
2023/01/06 22:26:58 distinct = false
2023/01/06 22:26:59 Using n1ql client
2023/01/06 22:26:59 Expected and Actual scan responses are the same
2023/01/06 22:26:59 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:26:59 distinct = false
2023/01/06 22:26:59 Using n1ql client
2023/01/06 22:26:59 Expected and Actual scan responses are the same
2023/01/06 22:26:59 

--------- Simple Index with 1 field ---------
2023/01/06 22:26:59 
--- SingleIndexSimpleRange ---
2023/01/06 22:26:59 distinct = false
2023/01/06 22:27:00 Using n1ql client
2023/01/06 22:27:00 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:27:00 distinct = false
2023/01/06 22:27:00 Using n1ql client
2023/01/06 22:27:00 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:27:00 distinct = false
2023/01/06 22:27:00 Using n1ql client
2023/01/06 22:27:00 

--------- Composite Index with 3 fields ---------
2023/01/06 22:27:00 
--- ScanAllNoFilter ---
2023/01/06 22:27:00 distinct = false
2023/01/06 22:27:01 Using n1ql client
2023/01/06 22:27:01 
--- ScanAllFilterNil ---
2023/01/06 22:27:01 distinct = false
2023/01/06 22:27:01 Using n1ql client
2023/01/06 22:27:01 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:27:01 distinct = false
2023/01/06 22:27:02 Using n1ql client
2023/01/06 22:27:02 
--- 3FieldsSingleSeek ---
2023/01/06 22:27:02 distinct = false
2023/01/06 22:27:02 Using n1ql client
2023/01/06 22:27:02 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:27:02 distinct = false
2023/01/06 22:27:03 Using n1ql client
2023/01/06 22:27:03 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:27:03 distinct = false
2023/01/06 22:27:03 Using n1ql client
--- PASS: TestMultiScanDescOffset (11.46s)
=== RUN   TestMultiScanDescDistinct
2023/01/06 22:27:03 In SkipTestMultiScanDescDistinct()
2023/01/06 22:27:03 

--------- Composite Index with 2 fields ---------
2023/01/06 22:27:03 
--- ScanAllNoFilter ---
2023/01/06 22:27:03 distinct = true
2023/01/06 22:27:04 Using n1ql client
2023/01/06 22:27:04 Expected and Actual scan responses are the same
2023/01/06 22:27:04 
--- ScanAllFilterNil ---
2023/01/06 22:27:04 distinct = true
2023/01/06 22:27:04 Using n1ql client
2023/01/06 22:27:04 Expected and Actual scan responses are the same
2023/01/06 22:27:04 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:27:04 distinct = true
2023/01/06 22:27:05 Using n1ql client
2023/01/06 22:27:05 Expected and Actual scan responses are the same
2023/01/06 22:27:05 
--- SingleSeek ---
2023/01/06 22:27:05 distinct = true
2023/01/06 22:27:05 Using n1ql client
2023/01/06 22:27:05 Expected and Actual scan responses are the same
2023/01/06 22:27:05 
--- MultipleSeek ---
2023/01/06 22:27:05 distinct = true
2023/01/06 22:27:05 Using n1ql client
2023/01/06 22:27:05 Expected and Actual scan responses are the same
2023/01/06 22:27:05 
--- SimpleRange ---
2023/01/06 22:27:05 distinct = true
2023/01/06 22:27:06 Using n1ql client
2023/01/06 22:27:06 Expected and Actual scan responses are the same
2023/01/06 22:27:06 
--- NonOverlappingRanges ---
2023/01/06 22:27:06 distinct = true
2023/01/06 22:27:06 Using n1ql client
2023/01/06 22:27:06 Expected and Actual scan responses are the same
2023/01/06 22:27:06 
--- OverlappingRanges ---
2023/01/06 22:27:06 distinct = true
2023/01/06 22:27:07 Using n1ql client
2023/01/06 22:27:07 Expected and Actual scan responses are the same
2023/01/06 22:27:07 
--- NonOverlappingFilters ---
2023/01/06 22:27:07 distinct = true
2023/01/06 22:27:07 Using n1ql client
2023/01/06 22:27:07 Expected and Actual scan responses are the same
2023/01/06 22:27:07 
--- OverlappingFilters ---
2023/01/06 22:27:07 distinct = true
2023/01/06 22:27:08 Using n1ql client
2023/01/06 22:27:08 Expected and Actual scan responses are the same
2023/01/06 22:27:08 
--- BoundaryFilters ---
2023/01/06 22:27:08 distinct = true
2023/01/06 22:27:08 Using n1ql client
2023/01/06 22:27:08 Expected and Actual scan responses are the same
2023/01/06 22:27:08 
--- SeekAndFilters_NonOverlapping ---
2023/01/06 22:27:08 distinct = true
2023/01/06 22:27:09 Using n1ql client
2023/01/06 22:27:09 Expected and Actual scan responses are the same
2023/01/06 22:27:09 
--- SeekAndFilters_Overlapping ---
2023/01/06 22:27:09 distinct = true
2023/01/06 22:27:09 Using n1ql client
2023/01/06 22:27:09 Expected and Actual scan responses are the same
2023/01/06 22:27:09 
--- SimpleRangeLowUnbounded ---
2023/01/06 22:27:09 distinct = false
2023/01/06 22:27:09 Using n1ql client
2023/01/06 22:27:09 Expected and Actual scan responses are the same
2023/01/06 22:27:09 
--- SimpleRangeHighUnbounded ---
2023/01/06 22:27:09 distinct = false
2023/01/06 22:27:10 Using n1ql client
2023/01/06 22:27:10 Expected and Actual scan responses are the same
2023/01/06 22:27:10 
--- SimpleRangeMultipleUnbounded ---
2023/01/06 22:27:10 distinct = false
2023/01/06 22:27:10 Using n1ql client
2023/01/06 22:27:10 Expected and Actual scan responses are the same
2023/01/06 22:27:10 
--- FiltersWithUnbounded ---
2023/01/06 22:27:10 distinct = false
2023/01/06 22:27:11 Using n1ql client
2023/01/06 22:27:11 Expected and Actual scan responses are the same
2023/01/06 22:27:11 
--- FiltersLowGreaterThanHigh ---
2023/01/06 22:27:11 distinct = false
2023/01/06 22:27:11 Using n1ql client
2023/01/06 22:27:11 Expected and Actual scan responses are the same
2023/01/06 22:27:11 

--------- Simple Index with 1 field ---------
2023/01/06 22:27:11 
--- SingleIndexSimpleRange ---
2023/01/06 22:27:11 distinct = true
2023/01/06 22:27:11 Using n1ql client
2023/01/06 22:27:11 Expected and Actual scan responses are the same
2023/01/06 22:27:11 
--- SingleIndex_SimpleRanges_NonOverlapping ---
2023/01/06 22:27:11 distinct = true
2023/01/06 22:27:12 Using n1ql client
2023/01/06 22:27:12 Expected and Actual scan responses are the same
2023/01/06 22:27:12 
--- SingleIndex_SimpleRanges_Overlapping ---
2023/01/06 22:27:12 distinct = true
2023/01/06 22:27:12 Using n1ql client
2023/01/06 22:27:12 Expected and Actual scan responses are the same
2023/01/06 22:27:12 

--------- Composite Index with 3 fields ---------
2023/01/06 22:27:12 
--- ScanAllNoFilter ---
2023/01/06 22:27:12 distinct = true
2023/01/06 22:27:13 Using n1ql client
2023/01/06 22:27:13 Expected and Actual scan responses are the same
2023/01/06 22:27:13 
--- ScanAllFilterNil ---
2023/01/06 22:27:13 distinct = true
2023/01/06 22:27:13 Using n1ql client
2023/01/06 22:27:13 Expected and Actual scan responses are the same
2023/01/06 22:27:13 
--- ScanAll_AllFiltersNil ---
2023/01/06 22:27:13 distinct = true
2023/01/06 22:27:14 Using n1ql client
2023/01/06 22:27:14 Expected and Actual scan responses are the same
2023/01/06 22:27:14 
--- 3FieldsSingleSeek ---
2023/01/06 22:27:14 distinct = true
2023/01/06 22:27:14 Using n1ql client
2023/01/06 22:27:14 Expected and Actual scan responses are the same
2023/01/06 22:27:14 
--- 3FieldsMultipleSeeks ---
2023/01/06 22:27:14 distinct = true
2023/01/06 22:27:15 Using n1ql client
2023/01/06 22:27:15 Expected and Actual scan responses are the same
2023/01/06 22:27:15 
--- 3FieldsMultipleSeeks_Identical ---
2023/01/06 22:27:15 distinct = true
2023/01/06 22:27:15 Using n1ql client
2023/01/06 22:27:15 Expected and Actual scan responses are the same
--- PASS: TestMultiScanDescDistinct (12.03s)
=== RUN   TestGroupAggrSetup
2023/01/06 22:27:15 In TestGroupAggrSetup()
2023/01/06 22:27:15 Emptying the default bucket
2023/01/06 22:27:18 Flush Enabled on bucket default, responseBody: 
2023/01/06 22:28:00 Flushed the bucket default, Response body: 
2023/01/06 22:28:00 Dropping the secondary index index_agg
2023/01/06 22:28:00 Populating the default bucket
2023/01/06 22:28:04 Created the secondary index index_agg. Waiting for it become active
2023/01/06 22:28:04 Index is 8989707236103957161 now active
--- PASS: TestGroupAggrSetup (54.71s)
=== RUN   TestGroupAggrLeading
2023/01/06 22:28:10 In TestGroupAggrLeading()
2023/01/06 22:28:10 Total Scanresults = 7
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 3
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrLeading (0.02s)
=== RUN   TestGroupAggrNonLeading
2023/01/06 22:28:10 In TestGroupAggrNonLeading()
2023/01/06 22:28:10 Total Scanresults = 4
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNonLeading (0.01s)
=== RUN   TestGroupAggrNoGroup
2023/01/06 22:28:10 In TestGroupAggrNoGroup()
2023/01/06 22:28:10 Total Scanresults = 1
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNoGroup (0.00s)
=== RUN   TestGroupAggrMinMax
2023/01/06 22:28:10 In TestGroupAggrMinMax()
2023/01/06 22:28:10 Total Scanresults = 4
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrMinMax (0.00s)
=== RUN   TestGroupAggrMinMax2
2023/01/06 22:28:10 In TestGroupAggrMinMax()
2023/01/06 22:28:10 Total Scanresults = 1
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrMinMax2 (0.01s)
=== RUN   TestGroupAggrLeading_N1QLExprs
2023/01/06 22:28:10 In TestGroupAggrLeading_N1QLExprs()
2023/01/06 22:28:10 Total Scanresults = 17
2023/01/06 22:28:10 basicGroupAggrN1QLExprs1: Scan validation passed
2023/01/06 22:28:10 Total Scanresults = 9
2023/01/06 22:28:10 basicGroupAggrN1QLExprs2: Scan validation passed
--- PASS: TestGroupAggrLeading_N1QLExprs (0.26s)
=== RUN   TestGroupAggrLimit
2023/01/06 22:28:10 In TestGroupAggrLimit()
2023/01/06 22:28:10 Total Scanresults = 3
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrLimit (0.01s)
=== RUN   TestGroupAggrOffset
2023/01/06 22:28:10 In TestGroupAggrOffset()
2023/01/06 22:28:10 Total Scanresults = 3
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrOffset (0.01s)
=== RUN   TestGroupAggrCountN
2023/01/06 22:28:10 In TestGroupAggrCountN()
2023/01/06 22:28:10 Total Scanresults = 4
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 4
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrCountN (0.00s)
=== RUN   TestGroupAggrNoGroupNoMatch
2023/01/06 22:28:10 In TestGroupAggrNoGroupNoMatch()
2023/01/06 22:28:10 Total Scanresults = 1
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNoGroupNoMatch (0.00s)
=== RUN   TestGroupAggrGroupNoMatch
2023/01/06 22:28:10 In TestGroupAggrGroupNoMatch()
2023/01/06 22:28:10 Total Scanresults = 0
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrGroupNoMatch (0.00s)
=== RUN   TestGroupAggrMultDataTypes
2023/01/06 22:28:10 In TestGroupAggrMultDataTypes()
2023/01/06 22:28:10 Total Scanresults = 8
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrMultDataTypes (0.00s)
=== RUN   TestGroupAggrDistinct
2023/01/06 22:28:10 In TestGroupAggrDistinct()
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrDistinct (0.00s)
=== RUN   TestGroupAggrDistinct2
2023/01/06 22:28:10 In TestGroupAggrDistinct2()
2023/01/06 22:28:10 Total Scanresults = 1
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 4
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 4
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrDistinct2 (0.01s)
=== RUN   TestGroupAggrNull
2023/01/06 22:28:10 In TestGroupAggrNull()
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrNull (0.00s)
=== RUN   TestGroupAggrInt64
2023/01/06 22:28:10 In TestGroupAggrInt64()
2023/01/06 22:28:10 Updating the default bucket
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
2023/01/06 22:28:10 Total Scanresults = 2
2023/01/06 22:28:10 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrInt64 (0.18s)
=== RUN   TestGroupAggr1
2023/01/06 22:28:10 In TestGroupAggr1()
2023/01/06 22:28:10 In DropAllSecondaryIndexes()
2023/01/06 22:28:10 Index found:  #primary
2023/01/06 22:28:10 Dropped index #primary
2023/01/06 22:28:10 Index found:  index_companyname_desc
2023/01/06 22:28:11 Dropped index index_companyname_desc
2023/01/06 22:28:11 Index found:  index_agg
2023/01/06 22:28:11 Dropped index index_agg
2023/01/06 22:28:11 Index found:  index_company_desc
2023/01/06 22:28:11 Dropped index index_company_desc
2023/01/06 22:28:11 Index found:  index_company_name_age_desc
2023/01/06 22:28:11 Dropped index index_company_name_age_desc
2023/01/06 22:28:49 Flushed the bucket default, Response body: 
2023/01/06 22:29:00 Created the secondary index idx_aggrs. Waiting for it to become active
2023/01/06 22:29:00 Index is 8645386498067936986 now active
2023/01/06 22:29:00 Total Scanresults = 633
2023/01/06 22:29:03 Total Scanresults = 743
--- PASS: TestGroupAggr1 (53.01s)
=== RUN   TestGroupAggrArrayIndex
2023/01/06 22:29:03 In TestGroupAggrArrayIndex()
2023/01/06 22:29:08 Created the secondary index ga_arr1. Waiting for it to become active
2023/01/06 22:29:08 Index is 10757554347068008527 now active
2023/01/06 22:29:14 Created the secondary index ga_arr2. Waiting for it to become active
2023/01/06 22:29:14 Index is 1582314551716432026 now active
2023/01/06 22:29:14 Scenario 1
2023/01/06 22:29:15 Total Scanresults = 633
2023/01/06 22:29:15 Scenario 2
2023/01/06 22:29:15 Total Scanresults = 2824
2023/01/06 22:29:16 Scenario 3
2023/01/06 22:29:16 Total Scanresults = 1
2023/01/06 22:29:16 Scenario 4
2023/01/06 22:29:16 Total Scanresults = 992
2023/01/06 22:29:17 Scenario 5
2023/01/06 22:29:17 Total Scanresults = 2824
2023/01/06 22:29:18 Scenario 6
2023/01/06 22:29:18 Total Scanresults = 1
2023/01/06 22:29:18 Scenario 7
2023/01/06 22:29:18 Total Scanresults = 2929
2023/01/06 22:29:21 Scenario 8
2023/01/06 22:29:21 Total Scanresults = 1171
2023/01/06 22:29:22 Scenario 9
2023/01/06 22:29:22 Total Scanresults = 1
2023/01/06 22:29:22 Scenario 10
2023/01/06 22:29:22 Total Scanresults = 633
2023/01/06 22:29:23 Scenario 11
2023/01/06 22:29:23 Total Scanresults = 1171
2023/01/06 22:29:23 Scenario 12
2023/01/06 22:29:23 Total Scanresults = 1
2023/01/06 22:29:24 Scenario 13
2023/01/06 22:29:28 Total Scanresults = 1
2023/01/06 22:29:28 Count of scanResults is 1
2023/01/06 22:29:28 Value: [2 133]
--- PASS: TestGroupAggrArrayIndex (24.80s)
=== RUN   TestGroupAggr_FirstValidAggrOnly
2023/01/06 22:29:28 In TestGroupAggr_FirstValidAggrOnly()
2023/01/06 22:29:28 In DropAllSecondaryIndexes()
2023/01/06 22:29:28 Index found:  test_oneperprimarykey
2023/01/06 22:29:28 Dropped index test_oneperprimarykey
2023/01/06 22:29:28 Index found:  #primary
2023/01/06 22:29:28 Dropped index #primary
2023/01/06 22:29:28 Index found:  idx_aggrs
2023/01/06 22:29:28 Dropped index idx_aggrs
2023/01/06 22:29:28 Index found:  PRIMARY_IDX_CBO_STATS
2023/01/06 22:29:29 Dropped index PRIMARY_IDX_CBO_STATS
2023/01/06 22:29:29 Index found:  ga_arr1
2023/01/06 22:29:29 Dropped index ga_arr1
2023/01/06 22:29:29 Index found:  ga_arr2
2023/01/06 22:29:29 Dropped index ga_arr2
2023/01/06 22:29:44 Created the secondary index idx_asc_3field. Waiting for it to become active
2023/01/06 22:29:44 Index is 7398984718338841476 now active
2023/01/06 22:29:50 Created the secondary index idx_desc_3field. Waiting for it to become active
2023/01/06 22:29:50 Index is 12281661039197807901 now active
2023/01/06 22:29:50 === MIN no group by ===
2023/01/06 22:29:50 Total Scanresults = 1
2023/01/06 22:29:50 Count of scanResults is 1
2023/01/06 22:29:50 Value: ["ACCEL"]
2023/01/06 22:29:50 === MIN no group by, no row match ===
2023/01/06 22:29:50 Total Scanresults = 1
2023/01/06 22:29:50 Count of scanResults is 1
2023/01/06 22:29:50 Value: [null]
2023/01/06 22:29:50 === MIN with group by ===
2023/01/06 22:29:50 Total Scanresults = 633
2023/01/06 22:29:51 === MIN with group by, no row match ===
2023/01/06 22:29:51 Total Scanresults = 0
2023/01/06 22:29:51 === One Aggr, no group by ===
2023/01/06 22:29:51 Total Scanresults = 1
2023/01/06 22:29:51 Count of scanResults is 1
2023/01/06 22:29:51 Value: ["FANFARE"]
2023/01/06 22:29:51 === One Aggr, no group by, no row match ===
2023/01/06 22:29:51 Total Scanresults = 1
2023/01/06 22:29:51 Count of scanResults is 1
2023/01/06 22:29:51 Value: [null]
2023/01/06 22:29:51 === Multiple Aggr, no group by ===
2023/01/06 22:29:51 Total Scanresults = 1
2023/01/06 22:29:51 Count of scanResults is 1
2023/01/06 22:29:51 Value: ["FANFARE" 15]
2023/01/06 22:29:51 === Multiple Aggr, no group by, no row match ===
2023/01/06 22:29:51 Total Scanresults = 1
2023/01/06 22:29:51 Count of scanResults is 1
2023/01/06 22:29:51 Value: [null null]
2023/01/06 22:29:52 === No Aggr, 1 group by ===
2023/01/06 22:29:52 Total Scanresults = 207
2023/01/06 22:29:52 === Aggr on non-leading key, previous equality filter, no group ===
2023/01/06 22:29:52 Total Scanresults = 1
2023/01/06 22:29:52 Count of scanResults is 1
2023/01/06 22:29:52 Value: [17]
2023/01/06 22:29:52 === Aggr on non-leading key, previous equality filters, no group ===
2023/01/06 22:29:52 Total Scanresults = 1
2023/01/06 22:29:52 Count of scanResults is 1
2023/01/06 22:29:52 Value: [null]
2023/01/06 22:29:52 === Aggr on non-leading key, previous non-equality filters, no group ===
2023/01/06 22:29:52 Total Scanresults = 1
2023/01/06 22:29:52 Count of scanResults is 1
2023/01/06 22:29:52 Value: ["Ahmed"]
2023/01/06 22:29:52 === MIN on desc, no group ===
2023/01/06 22:29:52 Total Scanresults = 1
2023/01/06 22:29:52 Count of scanResults is 1
2023/01/06 22:29:52 Value: ["FANFARE"]
2023/01/06 22:29:53 === MAX on asc, no group ===
2023/01/06 22:29:53 Total Scanresults = 1
2023/01/06 22:29:53 Count of scanResults is 1
2023/01/06 22:29:53 Value: ["OZEAN"]
2023/01/06 22:29:53 === MAX on desc, no group ===
2023/01/06 22:29:53 Total Scanresults = 1
2023/01/06 22:29:53 Count of scanResults is 1
2023/01/06 22:29:53 Value: ["OZEAN"]
2023/01/06 22:29:53 === COUNT(DISTINCT const_expr, no group ===
2023/01/06 22:29:53 Total Scanresults = 1
2023/01/06 22:29:53 Count of scanResults is 1
2023/01/06 22:29:53 Value: [1]
2023/01/06 22:29:53 === COUNT(DISTINCT const_expr, no group, no row match ===
2023/01/06 22:29:53 Total Scanresults = 1
2023/01/06 22:29:53 Count of scanResults is 1
2023/01/06 22:29:53 Value: [0]
2023/01/06 22:29:53 === COUNT(const_expr, no group ===
2023/01/06 22:29:53 Total Scanresults = 1
2023/01/06 22:29:53 Count of scanResults is 1
2023/01/06 22:29:53 Value: [321]
2023/01/06 22:29:58 Created the secondary index indexMinAggr. Waiting for it to become active
2023/01/06 22:29:58 Index is 5844016776839426332 now active
2023/01/06 22:29:58 === Equality filter check: Equality for a, nil filter for b - inclusion 0, no filter for c ===
2023/01/06 22:29:58 Total Scanresults = 1
2023/01/06 22:29:58 Count of scanResults is 1
2023/01/06 22:29:58 Value: [5]
2023/01/06 22:29:58 === Equality filter check: Equality for a, nil filter for b - inclusion 3, no filter for c ===
2023/01/06 22:29:58 Total Scanresults = 1
2023/01/06 22:29:58 Count of scanResults is 1
2023/01/06 22:29:58 Value: [5]
2023/01/06 22:29:58 === Equality filter check: Equality for a, nil filter for b - inclusion 3, nil filter for c ===
2023/01/06 22:29:58 Total Scanresults = 1
2023/01/06 22:29:58 Count of scanResults is 1
2023/01/06 22:29:58 Value: [5]
--- PASS: TestGroupAggr_FirstValidAggrOnly (30.11s)
=== RUN   TestGroupAggrPrimary
2023/01/06 22:29:58 In TestGroupAggrPrimary()
2023/01/06 22:29:58 Total Scanresults = 1
2023/01/06 22:29:58 Total Scanresults = 1
2023/01/06 22:29:59 Total Scanresults = 1
2023/01/06 22:29:59 Total Scanresults = 1002
2023/01/06 22:29:59 Total Scanresults = 1002
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 --- MB-28305 Scenario 1 ---
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Count of scanResults is 1
2023/01/06 22:30:00 Value: [0]
2023/01/06 22:30:00 --- MB-28305 Scenario 2 ---
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Count of scanResults is 1
2023/01/06 22:30:00 Value: [0]
2023/01/06 22:30:00 --- MB-28305 Scenario 3 ---
2023/01/06 22:30:00 Total Scanresults = 1
2023/01/06 22:30:00 Count of scanResults is 1
2023/01/06 22:30:00 Value: [0]
--- PASS: TestGroupAggrPrimary (1.77s)
=== RUN   TestGroupAggrDocumentKey
2023/01/06 22:30:00 In TestGroupAggrDocumentKey()
2023/01/06 22:30:00 Dropping the secondary index documentkey_idx1
2023/01/06 22:30:00 Dropping the secondary index documentkey_idx2
2023/01/06 22:30:00 Populating the default bucket for TestGroupAggrDocumentKey single key index
2023/01/06 22:30:04 Created the secondary index documentkey_idx1. Waiting for it to become active
2023/01/06 22:30:04 Index is 3882499887742349001 now active
2023/01/06 22:30:04 Using n1ql client
2023/01/06 22:30:05 Scanresult Row  ["1"] :  
2023/01/06 22:30:05 Scanresult Row  ["2"] :  
2023/01/06 22:30:05 Scanresult Row  ["3"] :  
2023/01/06 22:30:06 Expected and Actual scan responses are the same
2023/01/06 22:30:06 Populating the default bucket for TestGroupAggrDocumentKey composite key index
2023/01/06 22:30:11 Created the secondary index documentkey_idx2. Waiting for it to become active
2023/01/06 22:30:11 Index is 2550836935189215311 now active
2023/01/06 22:30:11 Using n1ql client
2023/01/06 22:30:11 Scanresult Row  ["1"] :  
2023/01/06 22:30:11 Scanresult Row  ["2"] :  
2023/01/06 22:30:11 Scanresult Row  ["3"] :  
2023/01/06 22:30:11 Expected and Actual scan responses are the same
--- PASS: TestGroupAggrDocumentKey (10.83s)
=== RUN   TestRebalanceSetupCluster
2023/01/06 22:30:11 set14_rebalance_test.go::TestRebalanceSetupCluster: entry: Current cluster configuration: map[127.0.0.1:9001:[index kv] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:30:11 set14_rebalance_test.go::TestRebalanceSetupCluster: 1. Setting up initial cluster configuration
2023/01/06 22:30:11 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2023/01/06 22:30:16 Rebalance progress: 2.099609375
2023/01/06 22:30:21 Rebalance progress: 6.412760416666667
2023/01/06 22:30:26 Rebalance progress: 10.77473958333333
2023/01/06 22:30:31 Rebalance progress: 13.37890625
2023/01/06 22:30:36 Rebalance progress: 17.822265625
2023/01/06 22:30:41 Rebalance progress: 22.49348958333333
2023/01/06 22:30:46 Rebalance progress: 25
2023/01/06 22:30:51 Rebalance progress: 25
2023/01/06 22:30:58 Rebalance progress: 25.32552083333333
2023/01/06 22:31:01 Rebalance progress: 28.15755208333333
2023/01/06 22:31:06 Rebalance progress: 30.76171875
2023/01/06 22:31:11 Rebalance progress: 33.46354166666667
2023/01/06 22:31:16 Rebalance progress: 36.00260416666667
2023/01/06 22:31:21 Rebalance progress: 38.73697916666667
2023/01/06 22:31:26 Rebalance progress: 41.6015625
2023/01/06 22:31:31 Rebalance progress: 43.52213541666667
2023/01/06 22:31:36 Rebalance progress: 46.46809895833334
2023/01/06 22:31:41 Rebalance progress: 49.67447916666667
2023/01/06 22:31:46 Rebalance progress: 50
2023/01/06 22:31:51 Rebalance progress: 50
2023-01-06T22:31:53.975+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:49650->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:31:54.026+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:49652->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:31:56 Rebalance progress: 100
2023/01/06 22:31:56 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:32:08 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:32:14 Rebalance progress: 100
2023/01/06 22:32:14 set14_rebalance_test.go::TestRebalanceSetupCluster: 2. Changing indexer.settings.rebalance.redistribute_indexes to true
2023/01/06 22:32:14 Changing config key indexer.settings.rebalance.redistribute_indexes to value true
2023/01/06 22:32:14 set14_rebalance_test.go::TestRebalanceSetupCluster: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestRebalanceSetupCluster (123.01s)
=== RUN   TestCreateDocsBeforeRebalance
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateDocsBeforeRebalance: entry: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateDocsBeforeRebalance: 1. Creating 100 documents
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateDocsBeforeRebalance: 100 documents created
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateDocsBeforeRebalance: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestCreateDocsBeforeRebalance (0.15s)
=== RUN   TestCreateIndexesBeforeRebalance
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: entry: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: 1. Creating 17 indexes: non-partitioned, 0-replica, non-deferred
2023/01/06 22:32:14 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN__id on `default`(_id)
2023/01/06 22:32:18 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN__id index is now active.
2023/01/06 22:32:18 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_docid on `default`(docid)
2023/01/06 22:32:24 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_docid index is now active.
2023/01/06 22:32:24 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_guid on `default`(guid)
2023/01/06 22:32:30 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_guid index is now active.
2023/01/06 22:32:30 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_isActive on `default`(isActive)
2023/01/06 22:32:36 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_isActive index is now active.
2023/01/06 22:32:36 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_balance on `default`(balance)
2023/01/06 22:32:42 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_balance index is now active.
2023/01/06 22:32:42 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_picture on `default`(picture)
2023/01/06 22:32:48 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_picture index is now active.
2023/01/06 22:32:48 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_age on `default`(age)
2023/01/06 22:32:54 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_age index is now active.
2023/01/06 22:32:54 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_eyeColor on `default`(eyeColor)
2023/01/06 22:33:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_eyeColor index is now active.
2023/01/06 22:33:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_name on `default`(name)
2023/01/06 22:33:07 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_name index is now active.
2023/01/06 22:33:07 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_gender on `default`(gender)
2023/01/06 22:33:13 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_gender index is now active.
2023/01/06 22:33:13 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_company on `default`(company)
2023/01/06 22:33:19 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_company index is now active.
2023/01/06 22:33:19 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_email on `default`(email)
2023/01/06 22:33:26 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_email index is now active.
2023/01/06 22:33:26 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_phone on `default`(phone)
2023/01/06 22:33:32 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_phone index is now active.
2023/01/06 22:33:32 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_registered on `default`(registered)
2023/01/06 22:33:38 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_registered index is now active.
2023/01/06 22:33:38 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_latitude on `default`(latitude)
2023/01/06 22:33:44 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_latitude index is now active.
2023/01/06 22:33:44 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_longitude on `default`(longitude)
2023/01/06 22:33:50 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_longitude index is now active.
2023/01/06 22:33:50 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_PLAIN_favoriteFruit on `default`(favoriteFruit)
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_PLAIN_favoriteFruit index is now active.
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: 2. Creating 17 indexes: non-partitioned, 0-replica, DEFERRED
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED__id_docid on `default`(_id, docid) with {"defer_build":true}
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED__id_docid index is now deferred.
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_docid_guid on `default`(docid, guid) with {"defer_build":true}
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_docid_guid index is now deferred.
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_guid_isActive on `default`(guid, isActive) with {"defer_build":true}
2023/01/06 22:33:56 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_guid_isActive index is now deferred.
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_isActive_balance on `default`(isActive, balance) with {"defer_build":true}
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_isActive_balance index is now deferred.
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_balance_picture on `default`(balance, picture) with {"defer_build":true}
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_balance_picture index is now deferred.
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_picture_age on `default`(picture, age) with {"defer_build":true}
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_picture_age index is now deferred.
2023/01/06 22:33:57 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_age_eyeColor on `default`(age, eyeColor) with {"defer_build":true}
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_age_eyeColor index is now deferred.
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_eyeColor_name on `default`(eyeColor, name) with {"defer_build":true}
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_eyeColor_name index is now deferred.
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_name_gender on `default`(name, gender) with {"defer_build":true}
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_name_gender index is now deferred.
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_gender_company on `default`(gender, company) with {"defer_build":true}
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_gender_company index is now deferred.
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_company_email on `default`(company, email) with {"defer_build":true}
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_company_email index is now deferred.
2023/01/06 22:33:58 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_email_phone on `default`(email, phone) with {"defer_build":true}
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_email_phone index is now deferred.
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_phone_registered on `default`(phone, registered) with {"defer_build":true}
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_phone_registered index is now deferred.
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_registered_latitude on `default`(registered, latitude) with {"defer_build":true}
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_registered_latitude index is now deferred.
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_latitude_longitude on `default`(latitude, longitude) with {"defer_build":true}
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_latitude_longitude index is now deferred.
2023/01/06 22:33:59 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_longitude_favoriteFruit on `default`(longitude, favoriteFruit) with {"defer_build":true}
2023/01/06 22:34:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_longitude_favoriteFruit index is now deferred.
2023/01/06 22:34:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_DEFERRED_favoriteFruit__id on `default`(favoriteFruit, _id) with {"defer_build":true}
2023/01/06 22:34:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_DEFERRED_favoriteFruit__id index is now deferred.
2023/01/06 22:34:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: 3. Creating 3 indexes: 7-PARTITION, 0-replica, non-deferred
2023/01/06 22:34:00 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_7PARTITIONS__id_guid on `default`(_id, guid) partition by hash(Meta().id) with {"num_partition":7}
2023/01/06 22:34:04 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_7PARTITIONS__id_guid index is now active.
2023/01/06 22:34:04 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_7PARTITIONS_docid_isActive on `default`(docid, isActive) partition by hash(Meta().id) with {"num_partition":7}
2023/01/06 22:34:10 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_7PARTITIONS_docid_isActive index is now active.
2023/01/06 22:34:10 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_7PARTITIONS_guid_balance on `default`(guid, balance) partition by hash(Meta().id) with {"num_partition":7}
2023/01/06 22:34:17 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: set14_idx_7PARTITIONS_guid_balance index is now active.
2023/01/06 22:34:17 set14_rebalance_test.go::TestCreateIndexesBeforeRebalance: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestCreateIndexesBeforeRebalance (122.98s)
=== RUN   TestIndexNodeRebalanceIn
2023/01/06 22:34:17 TestIndexNodeRebalanceIn entry: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:34:17 TestIndexNodeRebalanceIn: 1. Adding index node 127.0.0.1:9002 to the cluster
2023/01/06 22:34:17 Adding node: https://127.0.0.1:19002 with role: index to the cluster
2023/01/06 22:34:26 AddNode: Successfully added node: 127.0.0.1:9002 (role index), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/06 22:34:26 TestIndexNodeRebalanceIn: 2. Adding index node 127.0.0.1:9003 to the cluster
2023/01/06 22:34:26 Adding node: https://127.0.0.1:19003 with role: index to the cluster
2023/01/06 22:34:34 AddNode: Successfully added node: 127.0.0.1:9003 (role index), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/06 22:34:34 TestIndexNodeRebalanceIn: 3. Rebalancing
2023/01/06 22:34:39 Rebalance progress: 0
2023/01/06 22:34:44 Rebalance progress: 12.5
2023/01/06 22:34:49 Rebalance progress: 45.5
2023/01/06 22:34:54 Rebalance progress: 54.50000000000001
2023/01/06 22:34:59 Rebalance progress: 60.5
2023/01/06 22:35:04 Rebalance progress: 60.5
2023/01/06 22:35:09 Rebalance progress: 66.5
2023/01/06 22:35:14 Rebalance progress: 72.50000000000001
2023/01/06 22:35:19 Rebalance progress: 78.5
2023/01/06 22:35:24 Rebalance progress: 84.5
2023/01/06 22:35:29 Rebalance progress: 100
2023/01/06 22:35:29 TestIndexNodeRebalanceIn exit: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestIndexNodeRebalanceIn (74.23s)
=== RUN   TestCreateReplicatedIndexesBeforeRebalance
2023/01/06 22:35:31 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: entry: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:35:31 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: 1. Creating 5 indexes: non-partitioned, 2-REPLICA, non-deferred
2023/01/06 22:35:31 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_2REPLICAS__id_isActive on `default`(_id, isActive) with {"num_replica":2}
2023/01/06 22:35:42 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_2REPLICAS__id_isActive index is now active.
2023/01/06 22:35:42 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_2REPLICAS_docid_balance on `default`(docid, balance) with {"num_replica":2}
2023/01/06 22:35:56 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_2REPLICAS_docid_balance index is now active.
2023/01/06 22:35:56 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_2REPLICAS_guid_picture on `default`(guid, picture) with {"num_replica":2}
2023/01/06 22:36:07 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_2REPLICAS_guid_picture index is now active.
2023/01/06 22:36:07 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_2REPLICAS_isActive_age on `default`(isActive, age) with {"num_replica":2}
2023/01/06 22:36:19 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_2REPLICAS_isActive_age index is now active.
2023/01/06 22:36:19 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_2REPLICAS_balance_eyeColor on `default`(balance, eyeColor) with {"num_replica":2}
2023/01/06 22:36:30 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_2REPLICAS_balance_eyeColor index is now active.
2023/01/06 22:36:30 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: 2. Creating 2 indexes: 5-PARTITION, 1-REPLICA, non-deferred
2023/01/06 22:36:30 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_5PARTITIONS_1REPLICAS__id_balance on `default`(_id, balance) partition by hash(Meta().id) with {"num_partition":5, "num_replica":1}
2023/01/06 22:36:41 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_5PARTITIONS_1REPLICAS__id_balance index is now active.
2023/01/06 22:36:41 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance:: Executing N1QL: create index set14_idx_5PARTITIONS_1REPLICAS_docid_picture on `default`(docid, picture) partition by hash(Meta().id) with {"num_partition":5, "num_replica":1}
2023/01/06 22:36:56 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: set14_idx_5PARTITIONS_1REPLICAS_docid_picture index is now active.
2023/01/06 22:36:56 set14_rebalance_test.go::TestCreateReplicatedIndexesBeforeRebalance: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestCreateReplicatedIndexesBeforeRebalance (84.67s)
=== RUN   TestIndexNodeRebalanceOut
2023/01/06 22:36:56 set14_rebalance_test.go::TestIndexNodeRebalanceOut: entry: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:36:56 set14_rebalance_test.go::TestIndexNodeRebalanceOut: 1. Rebalancing index node 127.0.0.1:9001 out of the cluster
2023/01/06 22:36:56 Removing node(s): [127.0.0.1:9001] from the cluster
2023/01/06 22:37:02 Rebalance progress: 0
2023/01/06 22:37:07 Rebalance progress: 12.5
2023/01/06 22:37:12 Rebalance progress: 29.80769230769231
2023/01/06 22:37:17 Rebalance progress: 38.46153846153846
2023/01/06 22:37:23 Rebalance progress: 44.23076923076923
2023/01/06 22:37:27 Rebalance progress: 50
2023/01/06 22:37:32 Rebalance progress: 52.88461538461539
2023/01/06 22:37:37 Rebalance progress: 55.76923076923077
2023/01/06 22:37:42 Rebalance progress: 61.53846153846154
2023/01/06 22:37:47 Rebalance progress: 67.3076923076923
2023/01/06 22:37:53 Rebalance progress: 73.07692307692308
2023/01/06 22:37:57 Rebalance progress: 81.73076923076923
2023/01/06 22:38:02 Rebalance progress: 87.5
2023-01-06T22:38:02.941+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:39326->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:38:02.965+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:39334->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:38:07 Rebalance progress: 100
2023/01/06 22:38:07 set14_rebalance_test.go::TestIndexNodeRebalanceOut: exit: Current cluster configuration: map[127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestIndexNodeRebalanceOut (73.21s)
=== RUN   TestFailoverAndRebalance
2023/01/06 22:38:09 set14_rebalance_test.go::TestFailoverAndRebalance: entry: Current cluster configuration: map[127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:38:09 set14_rebalance_test.go::TestFailoverAndRebalance: 1. Failing over index node 127.0.0.1:9002
2023/01/06 22:38:09 Failing over: [127.0.0.1:9002]
2023-01-06T22:38:10.074+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:42908->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T22:38:10.075+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:42910->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023/01/06 22:38:11 set14_rebalance_test.go::TestFailoverAndRebalance: 2. Rebalancing
2023/01/06 22:38:16 Rebalance progress: 25
2023/01/06 22:38:21 Rebalance progress: 25
2023/01/06 22:38:26 Rebalance progress: 100
2023/01/06 22:38:26 set14_rebalance_test.go::TestFailoverAndRebalance: exit: Current cluster configuration: map[127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestFailoverAndRebalance (18.92s)
=== RUN   TestSwapRebalance
2023/01/06 22:38:28 set14_rebalance_test.go::TestSwapRebalance: entry: Current cluster configuration: map[127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:38:28 set14_rebalance_test.go::TestSwapRebalance: 1. Adding index node 127.0.0.1:9001 to the cluster
2023/01/06 22:38:28 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:38:38 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:38:38 set14_rebalance_test.go::TestSwapRebalance: 2. Swap rebalancing index node 127.0.0.1:9003 out of the cluster
2023/01/06 22:38:38 Removing node(s): [127.0.0.1:9003] from the cluster
2023/01/06 22:38:43 Rebalance progress: 16.66666666666667
2023/01/06 22:38:48 Rebalance progress: 36.41975308641975
2023/01/06 22:38:53 Rebalance progress: 43.82716049382717
2023/01/06 22:38:58 Rebalance progress: 51.23456790123458
2023/01/06 22:39:03 Rebalance progress: 51.23456790123458
2023/01/06 22:39:08 Rebalance progress: 58.64197530864197
2023/01/06 22:39:13 Rebalance progress: 66.04938271604938
2023/01/06 22:39:18 Rebalance progress: 73.4567901234568
2023/01/06 22:39:23 Rebalance progress: 73.4567901234568
2023/01/06 22:39:28 Rebalance progress: 80.86419753086419
2023-01-06T22:39:30.064+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:42908->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:39:30.197+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:42896->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023/01/06 22:39:33 Rebalance progress: 100
2023/01/06 22:39:33 set14_rebalance_test.go::TestSwapRebalance: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestSwapRebalance (66.97s)
=== RUN   TestRebalanceReplicaRepair
2023/01/06 22:39:35 TestRebalanceReplicaRepair entry: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:39:35 TestRebalanceReplicaRepair: 1. Adding index node 127.0.0.1:9002 to the cluster
2023/01/06 22:39:35 Adding node: https://127.0.0.1:19002 with role: index to the cluster
2023/01/06 22:39:46 AddNode: Successfully added node: 127.0.0.1:9002 (role index), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/06 22:39:46 TestRebalanceReplicaRepair: 2. Adding index node 127.0.0.1:9003 to the cluster
2023/01/06 22:39:46 Adding node: https://127.0.0.1:19003 with role: index to the cluster
2023/01/06 22:39:54 AddNode: Successfully added node: 127.0.0.1:9003 (role index), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/06 22:39:54 TestRebalanceReplicaRepair: 3. Rebalancing
2023/01/06 22:40:00 Rebalance progress: 12.5
2023/01/06 22:40:05 Rebalance progress: 12.5
2023/01/06 22:40:10 Rebalance progress: 23.86363636363636
2023/01/06 22:40:15 Rebalance progress: 30.68181818181818
2023/01/06 22:40:20 Rebalance progress: 35.22727272727273
2023/01/06 22:40:25 Rebalance progress: 39.77272727272727
2023/01/06 22:40:30 Rebalance progress: 44.31818181818181
2023/01/06 22:40:35 Rebalance progress: 48.86363636363637
2023/01/06 22:40:40 Rebalance progress: 48.86363636363637
2023/01/06 22:40:45 Rebalance progress: 57.95454545454546
2023/01/06 22:40:50 Rebalance progress: 57.95454545454546
2023/01/06 22:40:55 Rebalance progress: 64.77272727272727
2023/01/06 22:41:00 Rebalance progress: 67.04545454545455
2023/01/06 22:41:05 Rebalance progress: 71.59090909090908
2023/01/06 22:41:10 Rebalance progress: 76.13636363636363
2023/01/06 22:41:15 Rebalance progress: 80.68181818181817
2023/01/06 22:41:20 Rebalance progress: 85.22727272727273
2023/01/06 22:41:25 Rebalance progress: 100
2023/01/06 22:41:25 TestRebalanceReplicaRepair exit: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestRebalanceReplicaRepair (111.59s)
=== RUN   TestPreparePauseAndPrepareResume
    set14_rebalance_test.go:553: Skipping pause resume tests
--- SKIP: TestPreparePauseAndPrepareResume (0.00s)
=== RUN   TestPause
    set14_rebalance_test.go:723: Skipping pause resume tests
--- SKIP: TestPause (0.00s)
=== RUN   TestFailureAndRebalanceDuringInitialIndexBuild
2023/01/06 22:41:27 set14_rebalance_test.go::TestFailureAndRebalanceDuringInitialIndexBuild: entry: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:41:32 Created the secondary index index_0. Waiting for it to become active
2023/01/06 22:41:32 Index is 10438827562615123387 now active
2023/01/06 22:41:38 Created the secondary index index_1. Waiting for it to become active
2023/01/06 22:41:38 Index is 2062611513882280611 now active
2023/01/06 22:41:45 Created the secondary index index_2. Waiting for it to become active
2023/01/06 22:41:45 Index is 129694807779493274 now active
2023/01/06 22:41:52 Created the secondary index index_3. Waiting for it to become active
2023/01/06 22:41:52 Index is 10154670148972795378 now active
2023/01/06 22:42:00 Created the secondary index index_4. Waiting for it to become active
2023/01/06 22:42:00 Index is 18323665574583543386 now active
2023/01/06 22:42:09 Created the secondary index index_5. Waiting for it to become active
2023/01/06 22:42:09 Index is 6687369257374062305 now active
2023/01/06 22:42:16 Created the secondary index index_6. Waiting for it to become active
2023/01/06 22:42:16 Index is 15725206916067481928 now active
2023/01/06 22:42:22 Created the secondary index index_7. Waiting for it to become active
2023/01/06 22:42:22 Index is 12066226434130408015 now active
2023/01/06 22:42:28 Created the secondary index index_8. Waiting for it to become active
2023/01/06 22:42:28 Index is 16609833542561443845 now active
2023/01/06 22:42:35 Created the secondary index index_9. Waiting for it to become active
2023/01/06 22:42:35 Index is 11975067733437685867 now active
2023/01/06 22:43:14 Failing over: [127.0.0.1:9002]
2023-01-06T22:43:15.142+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:56810->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T22:43:15.143+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:56814->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023/01/06 22:43:15 Build the deferred index index_11. Waiting for the index to become active
2023/01/06 22:43:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:17 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:18 TestFailureAndRebalanceDuringInitialIndexBuild: 1. Adding index node 127.0.0.1:9002 to the cluster
2023/01/06 22:43:18 Kicking off failover recovery, type: full
2023/01/06 22:43:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:25 Rebalance progress: 12.5
2023/01/06 22:43:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:30 Rebalance failed. See logs for detailed reason. You can try again.
2023/01/06 22:43:30 set14_rebalance_test.go::TestFailureAndRebalanceDuringInitialIndexBuild: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
--- PASS: TestFailureAndRebalanceDuringInitialIndexBuild (123.35s)
=== RUN   TestRebalanceResetCluster
2023/01/06 22:43:30 set14_rebalance_test.go::TestRebalanceResetCluster: entry: Current cluster configuration: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 127.0.0.1:9003:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:43:30 set14_rebalance_test.go::TestRebalanceResetCluster: 1. Restoring indexer.settings.rebalance.redistribute_indexes to false
2023/01/06 22:43:30 Changing config key indexer.settings.rebalance.redistribute_indexes to value false
2023/01/06 22:43:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:30 set14_rebalance_test.go::TestRebalanceResetCluster: 2. Resetting cluster to initial configuration
2023/01/06 22:43:30 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2023/01/06 22:43:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:35 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:37 Rebalance progress: 6.25
2023/01/06 22:43:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:41 Rebalance progress: 12.5
2023/01/06 22:43:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:42 Waiting for index 15231047653044176887 to go active ...
2023-01-06T22:43:43.773+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:35938->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T22:43:43.773+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:56800->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:43:43.774+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:54726->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:43:43.782+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:35936->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T22:43:43.782+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:56802->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:43:43.783+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:54724->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:43:43 Waiting for index 15231047653044176887 to go active ...
2023-01-06T22:43:44.333+05:30 [Warn] ClusterInfoCache:validateCache - Failed as len(c.nodes): 4 != len(c.nodesvs): 3
2023/01/06 22:43:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:48 Rebalance progress: 87.5
2023/01/06 22:43:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:50 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:51 Rebalance progress: 100
2023/01/06 22:43:51 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:43:51 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:52 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:53 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:54 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:56 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:57 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:58 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:43:59 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:00 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:01 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:02 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:03 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:04 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:05 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:06 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:06 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:44:07 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:08 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:10 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:11 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:12 Rebalance progress: 100
2023/01/06 22:44:12 set14_rebalance_test.go::TestRebalanceResetCluster: exit: Current cluster configuration: map[127.0.0.1:9001:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 22:44:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:14 Waiting for index 15231047653044176887 to go active ...
--- PASS: TestRebalanceResetCluster (43.80s)
=== RUN   TestAlterIndexIncrReplica
2023/01/06 22:44:14 In TestAlterIndexIncrReplica()
2023/01/06 22:44:14 This test creates an index with one replica and then increments replica count to 2
2023/01/06 22:44:14 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2023/01/06 22:44:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:17 Waiting for index 15231047653044176887 to go active ...
2023-01-06T22:44:17.166+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:40848->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:44:17.192+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:40842->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:44:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:19 Rebalance progress: 100
2023/01/06 22:44:19 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:44:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:32 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:44:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:35 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:37 Rebalance progress: 100
2023/01/06 22:44:37 Adding node: https://127.0.0.1:19002 with role: index to the cluster
2023/01/06 22:44:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:42 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:43 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:45 AddNode: Successfully added node: 127.0.0.1:9002 (role index), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/06 22:44:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:50 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:51 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:51 Rebalance progress: 16.66666666666667
2023/01/06 22:44:52 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:53 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:54 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:55 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:56 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:56 Rebalance progress: 100
2023/01/06 22:44:56 Adding node: https://127.0.0.1:19003 with role: index to the cluster
2023/01/06 22:44:57 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:58 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:44:59 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:00 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:01 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:02 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:03 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:04 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:04 AddNode: Successfully added node: 127.0.0.1:9003 (role index), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/06 22:45:05 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:06 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:07 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:08 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:10 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:10 Rebalance progress: 12.5
2023/01/06 22:45:11 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:14 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:15 Rebalance progress: 100
2023/01/06 22:45:15 In DropAllSecondaryIndexes()
2023/01/06 22:45:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:17 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:35 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:42 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:43 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:50 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:51 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:52 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:53 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:54 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:55 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:56 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:57 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:58 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:45:59 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:00 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:00 Flushed the bucket default, Response body: 
2023/01/06 22:46:01 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:02 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:03 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:04 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:05 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:06 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:06 Created the secondary index idx_1. Waiting for it to become active
2023/01/06 22:46:06 Index is 5845127527580024520 now active
2023/01/06 22:46:06 Executing alter index command: alter index `default`.idx_1 with {"action":"replica_count", "num_replica":2}
2023/01/06 22:46:07 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:08 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:10 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:11 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:14 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:17 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:33 Using n1ql client
[line repeated 100 times]
--- PASS: TestAlterIndexIncrReplica (139.66s)
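The long "Waiting for index ... to go active" runs above come from a simple poll loop: the test repeatedly queries index status until it reports active or a timeout expires. A minimal sketch of such a loop (names and the injected `get_status` callback are illustrative; the real test queries the indexer over REST):

```python
import time

def wait_for_index_active(index_id, get_status, interval=1.0, timeout=30.0):
    """Poll get_status(index_id) until it returns "Active" or the timeout
    expires. get_status is injected so the loop can be exercised without
    a live cluster."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_status(index_id) == "Active":
            return True
        print(f"Waiting for index {index_id} to go active ...")
        time.sleep(interval)
    return False

# Simulated status source: the index becomes active on the third poll.
calls = {"n": 0}
def fake_status(index_id):
    calls["n"] += 1
    return "Active" if calls["n"] >= 3 else "Created"

print(wait_for_index_active("15231047653044176887", fake_status, interval=0.01))
```

A run such as the one above, where the same index ID is still polled minutes later, simply means the status callback never returned active before the caller moved on.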
=== RUN   TestAlterIndexDecrReplica
2023/01/06 22:46:33 In TestAlterIndexDecrReplica()
2023/01/06 22:46:33 This test creates an index with two replicas and then decrements replica count to 1
2023/01/06 22:46:34 In DropAllSecondaryIndexes()
2023/01/06 22:46:34 Index found:  idx_1
2023/01/06 22:46:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:34 Dropped index idx_1
2023/01/06 22:46:35 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:42 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:43 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:50 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:50 Created the secondary index idx_2. Waiting for it to become active
2023/01/06 22:46:50 Index is 3150742287528133035 now active
2023/01/06 22:46:50 Executing alter index command: alter index `default`.idx_2 with {"action":"replica_count", "num_replica":1}
2023/01/06 22:46:51 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:52 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:53 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:54 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:55 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:56 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:57 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:58 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:46:59 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:00 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:01 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:02 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:03 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:04 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:05 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:06 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:07 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:08 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:10 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:11 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:14 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:16 Using n1ql client
[line repeated 100 times]
--- PASS: TestAlterIndexDecrReplica (42.98s)
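The alter-index commands logged above (`alter index \`default\`.idx_2 with {"action":"replica_count", "num_replica":1}` and the `drop_replica` variant below) follow one statement shape. A small helper composing such a statement, purely as a sketch (the helper name is hypothetical; executing the statement requires a N1QL client against a live cluster):

```python
import json

def alter_index_stmt(bucket, index, action, **params):
    """Compose an ALTER INDEX statement of the shape seen in this log.
    Builds the statement string only; it does not execute anything."""
    with_clause = {"action": action, **params}
    return f"alter index `{bucket}`.{index} with {json.dumps(with_clause)}"

print(alter_index_stmt("default", "idx_2", "replica_count", num_replica=1))
print(alter_index_stmt("default", "idx_3", "drop_replica", replicaId=0))
```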
=== RUN   TestAlterIndexDropReplica
2023/01/06 22:47:16 In TestAlterIndexDropReplica()
2023/01/06 22:47:16 This test creates an index with two replicas and then drops one replica from cluster
2023/01/06 22:47:17 In DropAllSecondaryIndexes()
2023/01/06 22:47:17 Index found:  idx_2
2023/01/06 22:47:17 Dropped index idx_2
2023/01/06 22:47:17 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:33 Created the secondary index idx_3. Waiting for it to become active
2023/01/06 22:47:33 Index is 17254376194357669301 now active
2023/01/06 22:47:33 Executing alter index command: alter index `default`.idx_3 with {"action":"drop_replica", "replicaId":0}
2023/01/06 22:47:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:35 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:42 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:43 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:50 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:51 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:52 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:53 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:54 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:55 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:56 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:57 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:58 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:59 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:47:59 Using n1ql client
[line repeated 100 times]
--- PASS: TestAlterIndexDropReplica (42.81s)
=== RUN   TestResetCluster_1
2023/01/06 22:47:59 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2023/01/06 22:48:00 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:01 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:02 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:03 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:04 Waiting for index 15231047653044176887 to go active ...
2023-01-06T22:48:04.512+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:42030->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:48:04.512+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:42090->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:48:04.512+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:41036->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T22:48:04.523+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:42036->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:48:04.523+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:42088->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:48:04.523+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:41038->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023/01/06 22:48:05 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:05 Rebalance progress: 87.5
2023/01/06 22:48:06 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:07 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:08 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:10 Rebalance progress: 100
2023/01/06 22:48:10 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:48:10 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:11 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:14 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:17 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:27 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:48:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:33 Rebalance progress: 100
--- PASS: TestResetCluster_1 (33.37s)
=== RUN   TestPartitionDistributionWithReplica
2023/01/06 22:48:33 In TestPartitionDistributionWithReplica()
2023/01/06 22:48:33 This test will create a partitioned index with a replica and check the partition distribution
2023/01/06 22:48:33 Partitions with the same ID belonging to both the replica and the source index should not be on the same node
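The invariant stated above can be checked mechanically: for each partition ID, the nodes hosting its copies across the source index and its replicas must all be distinct. A minimal sketch (the placement map, function name, and node names are illustrative, not the test's actual data structures):

```python
from collections import defaultdict

def check_partition_distribution(placement):
    """placement maps (replica_id, partition_id) -> node.
    Returns the partition IDs whose copies share a node, i.e.
    violations of the invariant above."""
    nodes_by_partition = defaultdict(list)
    for (replica_id, partition_id), node in placement.items():
        nodes_by_partition[partition_id].append(node)
    return [p for p, nodes in nodes_by_partition.items()
            if len(nodes) != len(set(nodes))]

# Illustrative placement: 2 copies x 4 partitions spread over 3 nodes.
good = {(0, p): f"node{p % 3}" for p in range(4)}
good.update({(1, p): f"node{(p + 1) % 3}" for p in range(4)})
print(check_partition_distribution(good))  # []

bad = dict(good)
bad[(1, 2)] = bad[(0, 2)]  # both copies of partition 2 on one node
print(check_partition_distribution(bad))  # [2]
```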
2023/01/06 22:48:33 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2023/01/06 22:48:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:35 Waiting for index 15231047653044176887 to go active ...
2023-01-06T22:48:35.872+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51936->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:48:35.879+05:30 [Error] PeerPipe.doRecieve() : encountered an error when receiving a message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51942->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 22:48:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:38 Rebalance progress: 100
2023/01/06 22:48:38 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:48:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:42 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:43 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:50 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:51 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:51 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:48:52 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:53 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:54 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:55 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:56 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:56 Rebalance progress: 100
2023/01/06 22:48:56 Adding node: https://127.0.0.1:19002 with role: index to the cluster
2023/01/06 22:48:57 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:58 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:48:59 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:00 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:01 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:02 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:03 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:04 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:05 AddNode: Successfully added node: 127.0.0.1:9002 (role index), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/06 22:49:05 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:06 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:07 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:08 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:10 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:10 Rebalance progress: 16.66666666666667
2023/01/06 22:49:11 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:14 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:15 Rebalance progress: 100
2023/01/06 22:49:15 Adding node: https://127.0.0.1:19003 with role: index to the cluster
2023/01/06 22:49:16 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:17 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:18 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:19 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:20 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:21 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:22 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:23 AddNode: Successfully added node: 127.0.0.1:9003 (role index), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/06 22:49:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:24 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:25 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:26 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:27 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:28 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:28 Rebalance progress: 87.5
2023/01/06 22:49:29 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:30 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:31 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:32 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:33 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:33 Rebalance progress: 100
2023/01/06 22:49:33 In DropAllSecondaryIndexes()
2023/01/06 22:49:34 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:35 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:37 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:38 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:39 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:40 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:41 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:42 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:43 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:49 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:49:50 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:50:18 ...]
2023/01/06 22:50:18 Flushed the bucket default, Response body: 
2023/01/06 22:50:19 Executing create partition index command on: create index `idx_partn` on `default`(age) partition by hash(meta().id) with {"num_partition":8, "num_replica":1}
2023/01/06 22:50:19 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:50:43 ...]
2023/01/06 22:50:43 Using n1ql client
[... same message logged 99 more times through 2023/01/06 22:50:44 ...]
--- PASS: TestPartitionDistributionWithReplica (131.25s)
=== RUN   TestPartitionedPartialIndex
2023/01/06 22:50:44 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:50:44 Executing create index command: CREATE INDEX `idx_regular` ON `default`(partn_name)
2023/01/06 22:50:45 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:50:46 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:50:47 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:50:48 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:50:49 Using n1ql client
2023/01/06 22:50:49 Dropping the secondary index idx_regular
2023/01/06 22:50:49 Index dropped
2023/01/06 22:50:49 Executing create index command: CREATE INDEX `idx_partial` ON `default`(partn_name) WHERE partn_age >= 0
2023/01/06 22:50:49 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:50:55 ...]
2023/01/06 22:50:55 Using n1ql client
2023/01/06 22:50:55 Using n1ql client
2023/01/06 22:50:55 Using n1ql client
2023/01/06 22:50:55 Dropping the secondary index idx_partial
2023/01/06 22:50:55 Index dropped
2023/01/06 22:50:56 Executing create index command: CREATE INDEX `idx_partitioned` ON `default`(partn_name) PARTITION BY HASH(meta().id) 
2023/01/06 22:50:56 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:51:08 ...]
2023/01/06 22:51:08 Using n1ql client
2023/01/06 22:51:09 Using n1ql client
2023/01/06 22:51:09 Dropping the secondary index idx_partitioned
2023/01/06 22:51:09 Index dropped
2023/01/06 22:51:09 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:51:09 Executing create index command: CREATE INDEX `idx_partitioned_partial` ON `default`(partn_name) PARTITION BY HASH(meta().id) WHERE partn_age >= 0
2023/01/06 22:51:10 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:51:22 ...]
2023/01/06 22:51:22 Using n1ql client
2023/01/06 22:51:23 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:51:23 Using n1ql client
2023/01/06 22:51:23 Using n1ql client
2023/01/06 22:51:23 Using n1ql client
2023/01/06 22:51:23 Dropping the secondary index idx_partitioned_partial
2023/01/06 22:51:24 Index dropped
2023/01/06 22:51:24 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:51:30 ...]
--- PASS: TestPartitionedPartialIndex (46.20s)
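TestPartitionedPartialIndex above exercises four variants of the same index: regular, partial (a WHERE clause), hash-partitioned, and partitioned partial. An illustrative helper (not part of the test framework) that assembles those N1QL statements from the optional clauses:

```go
package main

import (
	"fmt"
	"strings"
)

// createIndexStmt builds the kinds of CREATE INDEX statements seen in
// the log: PARTITION BY HASH(...) and WHERE ... are appended only when
// the corresponding argument is non-empty.
func createIndexStmt(name, keyspace, field, partitionBy, where string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "CREATE INDEX `%s` ON `%s`(%s)", name, keyspace, field)
	if partitionBy != "" {
		fmt.Fprintf(&b, " PARTITION BY HASH(%s)", partitionBy)
	}
	if where != "" {
		fmt.Fprintf(&b, " WHERE %s", where)
	}
	return b.String()
}

func main() {
	// The two extremes from the test: plain index and partitioned partial.
	fmt.Println(createIndexStmt("idx_regular", "default", "partn_name", "", ""))
	fmt.Println(createIndexStmt("idx_partitioned_partial", "default", "partn_name",
		"meta().id", "partn_age >= 0"))
}
```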
=== RUN   TestResetCluster_2
2023/01/06 22:51:30 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003] from the cluster
2023/01/06 22:51:31 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:51:35 ...]
2023-01-06T22:51:36.360+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:53290->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:51:36.360+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:52216->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T22:51:36.361+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:53216->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:51:36.400+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:53282->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T22:51:36.401+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = read tcp 127.0.0.1:53222->127.0.0.1:9118: use of closed network connection. Kill Pipe.
2023-01-06T22:51:36.401+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:52210->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023/01/06 22:51:36 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:51:36 Rebalance progress: 87.5
2023/01/06 22:51:37 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:51:41 ...]
2023/01/06 22:51:41 Rebalance progress: 100
2023/01/06 22:51:41 Adding node: https://127.0.0.1:19001 with role: index to the cluster
2023/01/06 22:51:42 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:51:59 ...]
2023/01/06 22:51:59 AddNode: Successfully added node: 127.0.0.1:9001 (role index), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 22:52:00 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:52:04 ...]
2023/01/06 22:52:04 Rebalance progress: 100
--- PASS: TestResetCluster_2 (34.32s)
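The reset above removes the extra nodes, rebalances, and re-adds 127.0.0.1:9001 with only the index service. Adding a node goes through ns_server's `POST /controller/addNode` endpoint; a sketch of the form body it takes, per the public Couchbase REST API rather than this harness (credentials are placeholders):

```go
package main

import (
	"fmt"
	"net/url"
)

// addNodeForm builds the url-encoded body for POST /controller/addNode.
// hostname is the node to add, services is a comma-separated list such
// as "index" (matching "role: index" in the log above).
func addNodeForm(hostname, user, password, services string) string {
	v := url.Values{}
	v.Set("hostname", hostname)
	v.Set("user", user)
	v.Set("password", password)
	v.Set("services", services)
	return v.Encode() // keys come out sorted alphabetically
}

func main() {
	body := addNodeForm("https://127.0.0.1:19001", "Administrator", "password", "index")
	fmt.Println(body)
	// On success ns_server replies with the new node's OTP name,
	// e.g. {"otpNode":"n_1@127.0.0.1"} as seen in the log.
}
```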
=== RUN   TestCollectionSetup
2023/01/06 22:52:04 In TestCollectionSetup()
2023/01/06 22:52:04 In DropAllSecondaryIndexes()
2023/01/06 22:52:05 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:52:59 ...]
2023/01/06 22:53:00 Flushed the bucket default, Response body: 
2023/01/06 22:53:00 Waiting for index 15231047653044176887 to go active ...
--- PASS: TestCollectionSetup (56.45s)
=== RUN   TestCollectionDefault
2023/01/06 22:53:01 In TestCollectionDefault()
2023/01/06 22:53:01 Waiting for index 15231047653044176887 to go active ...
[... same message repeated once per second through 2023/01/06 22:53:11 ...]
2023/01/06 22:53:12 Created the secondary index _default__default_i1. Waiting for it to become active
2023/01/06 22:53:12 Index is 7851125730030787833 now active
2023/01/06 22:53:12 Using n1ql client
2023/01/06 22:53:12 Expected and Actual scan responses are the same
2023/01/06 22:53:12 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:53:13 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:53:14 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:53:15 Waiting for index 15231047653044176887 to go active ...
2023/01/06 22:53:19 Created the secondary index _default__default_i2. Waiting for it to become active
2023/01/06 22:53:19 Index is 7577910416433793119 now active
2023/01/06 22:53:19 Using n1ql client
2023/01/06 22:53:19 Expected and Actual scan responses are the same
2023/01/06 22:53:19 Dropping the secondary index _default__default_i1
2023/01/06 22:53:19 Index dropped
2023/01/06 22:53:19 Using n1ql client
2023/01/06 22:53:19 Expected and Actual scan responses are the same
2023/01/06 22:53:25 Created the secondary index _default__default_i1. Waiting for it to become active
2023/01/06 22:53:25 Index is 3143724035263556755 now active
2023/01/06 22:53:25 Using n1ql client
2023/01/06 22:53:26 Expected and Actual scan responses are the same
2023/01/06 22:53:26 Dropping the secondary index _default__default_i1
2023/01/06 22:53:26 Index dropped
2023/01/06 22:53:26 Dropping the secondary index _default__default_i2
2023/01/06 22:53:26 Index dropped
2023/01/06 22:53:30 Created the secondary index _default__default_i1. Waiting for it to become active
2023/01/06 22:53:30 Index is 10843934492241919668 now active
2023/01/06 22:53:30 Using n1ql client
2023/01/06 22:53:30 Expected and Actual scan responses are the same
2023/01/06 22:53:36 Created the secondary index _default__default_i2. Waiting for it to become active
2023/01/06 22:53:36 Index is 13180061126929877704 now active
2023/01/06 22:53:36 Using n1ql client
2023/01/06 22:53:36 Expected and Actual scan responses are the same
2023/01/06 22:53:36 Dropping the secondary index _default__default_i1
2023/01/06 22:53:36 Index dropped
2023/01/06 22:53:36 Dropping the secondary index _default__default_i2
2023/01/06 22:53:37 Index dropped
2023/01/06 22:53:37 Build command issued for the deferred indexes [_default__default_i1 _default__default_i2], bucket: default, scope: _default, coll: _default
2023/01/06 22:53:37 Waiting for the index _default__default_i1 to become active
2023/01/06 22:53:37 Waiting for index 3734015506342706009 to go active ...
2023/01/06 22:53:38 Waiting for index 3734015506342706009 to go active ...
2023/01/06 22:53:39 Waiting for index 3734015506342706009 to go active ...
2023/01/06 22:53:40 Waiting for index 3734015506342706009 to go active ...
2023/01/06 22:53:41 Waiting for index 3734015506342706009 to go active ...
2023/01/06 22:53:42 Index is 3734015506342706009 now active
2023/01/06 22:53:42 Waiting for the index _default__default_i2 to become active
2023/01/06 22:53:42 Index is 17746333681334245396 now active
2023/01/06 22:53:48 Using n1ql client
2023/01/06 22:53:48 Expected and Actual scan responses are the same
2023/01/06 22:53:48 Using n1ql client
2023/01/06 22:53:48 Expected and Actual scan responses are the same
2023/01/06 22:53:48 Dropping the secondary index _default__default_i1
2023/01/06 22:53:48 Index dropped
2023/01/06 22:53:54 Using n1ql client
2023/01/06 22:53:54 Expected and Actual scan responses are the same
2023/01/06 22:53:54 Dropping the secondary index _default__default_i2
2023/01/06 22:53:54 Index dropped
--- PASS: TestCollectionDefault (53.43s)
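The last phase of TestCollectionDefault creates both indexes deferred and then activates them together, producing the "Build command issued for the deferred indexes" line. In N1QL terms, indexes created WITH {"defer_build": true} stay in the Created state until a single BUILD INDEX statement names them. A sketch of that statement (the helper name is illustrative, not the harness's API):

```go
package main

import (
	"fmt"
	"strings"
)

// buildIndexStmt assembles the N1QL BUILD INDEX statement that kicks
// off construction of previously deferred indexes on one keyspace.
func buildIndexStmt(keyspace string, indexes []string) string {
	quoted := make([]string, len(indexes))
	for i, idx := range indexes {
		quoted[i] = "`" + idx + "`"
	}
	return fmt.Sprintf("BUILD INDEX ON `%s`(%s)", keyspace, strings.Join(quoted, ", "))
}

func main() {
	// Mirrors the deferred pair built together in the log above.
	fmt.Println(buildIndexStmt("default",
		[]string{"_default__default_i1", "_default__default_i2"}))
}
```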
=== RUN   TestCollectionNonDefault
2023/01/06 22:53:54 In TestCollectionNonDefault()
2023/01/06 22:53:54 Creating scope: s1 for bucket: default as it does not exist
2023/01/06 22:53:55 Create scope succeeded for bucket default, scopeName: s1, body: {"uid":"1"} 
2023/01/06 22:53:55 Create collection succeeded for bucket: default, scope: s1, collection: c1, body: {"uid":"2"}
2023/01/06 22:54:10 Created the secondary index s1_c1_i1. Waiting for it to become active
2023/01/06 22:54:10 Index is 7027891192360908111 now active
2023/01/06 22:54:10 Using n1ql client
2023-01-06T22:54:10.978+05:30 [Info] metadata provider version changed 3273 -> 3274
2023-01-06T22:54:10.978+05:30 [Info] switched currmeta from 3273 -> 3274 force false 
2023-01-06T22:54:10.978+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:54:10.978+05:30 [Info] GSIC[default/default-s1-c1-1673025850973508474] started ...
2023/01/06 22:54:11 Expected and Actual scan responses are the same
2023/01/06 22:54:17 Created the secondary index s1_c1_i2. Waiting for it to become active
2023/01/06 22:54:17 Index is 15922879889572716619 now active
2023/01/06 22:54:17 Using n1ql client
2023/01/06 22:54:17 Expected and Actual scan responses are the same
2023/01/06 22:54:17 Dropping the secondary index s1_c1_i1
2023/01/06 22:54:18 Index dropped
2023/01/06 22:54:18 Using n1ql client
2023/01/06 22:54:18 Expected and Actual scan responses are the same
2023/01/06 22:54:24 Created the secondary index s1_c1_i1. Waiting for it to become active
2023/01/06 22:54:24 Index is 189165008139855671 now active
2023/01/06 22:54:24 Using n1ql client
2023/01/06 22:54:24 Expected and Actual scan responses are the same
2023/01/06 22:54:24 Dropping the secondary index s1_c1_i1
2023/01/06 22:54:24 Index dropped
2023/01/06 22:54:24 Dropping the secondary index s1_c1_i2
2023/01/06 22:54:24 Index dropped
2023/01/06 22:54:30 Created the secondary index s1_c1_i1. Waiting for it to become active
2023/01/06 22:54:30 Index is 11176618908976388459 now active
2023/01/06 22:54:30 Using n1ql client
2023/01/06 22:54:31 Expected and Actual scan responses are the same
2023/01/06 22:54:37 Created the secondary index s1_c1_i2. Waiting for it to become active
2023/01/06 22:54:37 Index is 9706472822691186085 now active
2023/01/06 22:54:37 Using n1ql client
2023/01/06 22:54:38 Expected and Actual scan responses are the same
2023/01/06 22:54:38 Dropping the secondary index s1_c1_i1
2023/01/06 22:54:38 Index dropped
2023/01/06 22:54:38 Dropping the secondary index s1_c1_i2
2023/01/06 22:54:38 Index dropped
2023/01/06 22:54:38 Build command issued for the deferred indexes [s1_c1_i1 s1_c1_i2], bucket: default, scope: s1, coll: c1
2023/01/06 22:54:38 Waiting for the index s1_c1_i1 to become active
2023/01/06 22:54:38 Waiting for index 16515257917299310656 to go active ...
2023/01/06 22:54:39 Waiting for index 16515257917299310656 to go active ...
2023/01/06 22:54:40 Waiting for index 16515257917299310656 to go active ...
2023/01/06 22:54:41 Waiting for index 16515257917299310656 to go active ...
2023/01/06 22:54:42 Waiting for index 16515257917299310656 to go active ...
2023/01/06 22:54:43 Index is 16515257917299310656 now active
2023/01/06 22:54:43 Waiting for the index s1_c1_i2 to become active
2023/01/06 22:54:43 Index is 3550360937132039747 now active
2023/01/06 22:54:45 Using n1ql client
2023/01/06 22:54:46 Expected and Actual scan responses are the same
2023/01/06 22:54:46 Using n1ql client
2023/01/06 22:54:46 Expected and Actual scan responses are the same
2023/01/06 22:54:46 Dropping the secondary index s1_c1_i1
2023/01/06 22:54:46 Index dropped
2023/01/06 22:54:47 Using n1ql client
2023/01/06 22:54:47 Expected and Actual scan responses are the same
2023/01/06 22:54:47 Dropping the secondary index s1_c1_i2
2023/01/06 22:54:47 Index dropped
--- PASS: TestCollectionNonDefault (52.95s)
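The non-default-collection tests first create scope s1 and collection c1 through ns_server's collections-management REST API; the {"uid":"N"} in the responses is the bucket's collection-manifest UID, which increments on every scope or collection change. A sketch of the endpoint paths involved, per the public REST API (helper names are illustrative):

```go
package main

import "fmt"

// scopePath is the endpoint that accepts POST name=<scope> to create
// a scope, as behind the "Create scope succeeded" line above.
func scopePath(bucket string) string {
	return fmt.Sprintf("/pools/default/buckets/%s/scopes", bucket)
}

// collectionPath is the endpoint that accepts POST name=<collection>
// to create a collection inside an existing scope.
func collectionPath(bucket, scope string) string {
	return fmt.Sprintf("/pools/default/buckets/%s/scopes/%s/collections", bucket, scope)
}

func main() {
	fmt.Println(scopePath("default"))            // where scope s1 was created
	fmt.Println(collectionPath("default", "s1")) // where collection c1 was created
}
```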
=== RUN   TestCollectionMetaAtSnapEnd
2023/01/06 22:54:47 In TestCollectionMetaAtSnapEnd()
2023/01/06 22:54:47 Creating scope: s2 for bucket: default as it does not exist
2023/01/06 22:54:47 Create scope succeeded for bucket default, scopeName: s2, body: {"uid":"3"}
2023/01/06 22:54:48 Create collection succeeded for bucket: default, scope: s2, collection: c2, body: {"uid":"4"}
2023/01/06 22:55:01 Created the secondary index s2_c2_i1. Waiting for it to become active
2023/01/06 22:55:01 Index is 18014002342189101451 now active
2023/01/06 22:55:01 Using n1ql client
2023-01-06T22:55:01.637+05:30 [Info] metadata provider version changed 3332 -> 3333
2023-01-06T22:55:01.637+05:30 [Info] switched currmeta from 3332 -> 3333 force false 
2023-01-06T22:55:01.637+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:55:01.637+05:30 [Info] GSIC[default/default-s2-c2-1673025901556453212] started ...
2023/01/06 22:55:01 Expected and Actual scan responses are the same
2023/01/06 22:55:05 Using n1ql client
2023/01/06 22:55:06 Expected and Actual scan responses are the same
2023/01/06 22:55:06 Create collection succeeded for bucket: default, scope: s2, collection: c3, body: {"uid":"5"}
2023/01/06 22:55:16 Using n1ql client
2023/01/06 22:55:16 Expected and Actual scan responses are the same
2023/01/06 22:55:20 Created the secondary index s2_c2_i2. Waiting for it to become active
2023/01/06 22:55:20 Index is 14137828191761493968 now active
2023/01/06 22:55:20 Using n1ql client
2023/01/06 22:55:21 Expected and Actual scan responses are the same
2023/01/06 22:55:21 Using n1ql client
2023/01/06 22:55:21 Expected and Actual scan responses are the same
2023/01/06 22:55:27 Using n1ql client
2023/01/06 22:55:27 Expected and Actual scan responses are the same
2023/01/06 22:55:27 Using n1ql client
2023/01/06 22:55:27 Expected and Actual scan responses are the same
--- PASS: TestCollectionMetaAtSnapEnd (39.32s)
=== RUN   TestCollectionUpdateSeq
2023/01/06 22:55:27 In TestCollectionUpdateSeq()
2023/01/06 22:55:28 Using n1ql client
2023/01/06 22:55:28 Expected and Actual scan responses are the same
2023/01/06 22:55:28 Using n1ql client
2023/01/06 22:55:28 Expected and Actual scan responses are the same
2023/01/06 22:55:34 Using n1ql client
2023/01/06 22:55:34 Expected and Actual scan responses are the same
2023/01/06 22:55:34 Using n1ql client
2023/01/06 22:55:34 Expected and Actual scan responses are the same
2023/01/06 22:55:34 Dropping the secondary index s2_c2_i1
2023/01/06 22:55:34 Index dropped
2023/01/06 22:55:34 Dropping the secondary index s2_c2_i2
2023/01/06 22:55:34 Index dropped
--- PASS: TestCollectionUpdateSeq (7.43s)
=== RUN   TestCollectionMultiple
2023/01/06 22:55:34 In TestCollectionMultiple()
2023/01/06 22:55:38 Created the secondary index _default__default_i3. Waiting for it to become active
2023/01/06 22:55:38 Index is 14314567001665745390 now active
2023/01/06 22:55:38 Using n1ql client
2023/01/06 22:55:38 Expected and Actual scan responses are the same
2023/01/06 22:55:46 Created the secondary index s1_c1_i4. Waiting for it to become active
2023/01/06 22:55:46 Index is 13274773299242874708 now active
2023/01/06 22:55:46 Using n1ql client
2023/01/06 22:55:46 Expected and Actual scan responses are the same
2023/01/06 22:55:46 Dropping the secondary index _default__default_i3
2023/01/06 22:55:46 Index dropped
2023/01/06 22:55:46 Dropping the secondary index s1_c1_i4
2023/01/06 22:55:47 Index dropped
--- PASS: TestCollectionMultiple (12.73s)
=== RUN   TestCollectionNoDocs
2023/01/06 22:55:47 In TestCollectionNoDocs()
2023/01/06 22:55:52 Created the secondary index s1_c1_i1. Waiting for it become active
2023/01/06 22:55:52 Index is 12150530435772244254 now active
2023/01/06 22:55:52 Using n1ql client
2023/01/06 22:55:53 Expected and Actual scan responses are the same
2023/01/06 22:56:00 Created the secondary index s1_c1_i2. Waiting for it become active
2023/01/06 22:56:00 Index is 16725705258988000238 now active
2023/01/06 22:56:00 Using n1ql client
2023/01/06 22:56:01 Expected and Actual scan responses are the same
2023/01/06 22:56:01 Dropping the secondary index s1_c1_i1
2023/01/06 22:56:01 Index dropped
2023/01/06 22:56:01 Using n1ql client
2023/01/06 22:56:01 Expected and Actual scan responses are the same
2023/01/06 22:56:07 Created the secondary index s1_c1_i1. Waiting for it become active
2023/01/06 22:56:07 Index is 2390917696903481485 now active
2023/01/06 22:56:07 Using n1ql client
2023/01/06 22:56:08 Expected and Actual scan responses are the same
2023/01/06 22:56:15 Using n1ql client
2023/01/06 22:56:15 Expected and Actual scan responses are the same
2023/01/06 22:56:15 Using n1ql client
2023/01/06 22:56:15 Expected and Actual scan responses are the same
2023/01/06 22:56:15 Dropping the secondary index s1_c1_i1
2023/01/06 22:56:15 Index dropped
2023/01/06 22:56:15 Dropping the secondary index s1_c1_i2
2023/01/06 22:56:15 Index dropped
--- PASS: TestCollectionNoDocs (28.46s)
=== RUN   TestCollectionPrimaryIndex
2023/01/06 22:56:15 In TestCollectionPrimaryIndex()
2023/01/06 22:56:21 Created the secondary index s1_c1_i1. Waiting for it become active
2023/01/06 22:56:21 Index is 13858578316187031011 now active
2023/01/06 22:56:21 Using n1ql client
2023/01/06 22:56:29 Created the secondary index s1_c1_i2. Waiting for it become active
2023/01/06 22:56:29 Index is 2191262030727817842 now active
2023/01/06 22:56:29 Using n1ql client
2023/01/06 22:56:30 Using n1ql client
2023/01/06 22:56:30 Using n1ql client
2023/01/06 22:56:30 Dropping the secondary index s1_c1_i1
2023/01/06 22:56:31 Index dropped
2023/01/06 22:56:36 Created the secondary index s1_c1_i1. Waiting for it become active
2023/01/06 22:56:36 Index is 2117310380658300023 now active
2023/01/06 22:56:36 Using n1ql client
2023/01/06 22:56:37 Expected and Actual scan responses are the same
2023/01/06 22:56:38 Using n1ql client
2023/01/06 22:56:38 Expected and Actual scan responses are the same
2023/01/06 22:56:38 Using n1ql client
2023/01/06 22:56:38 Dropping the secondary index s1_c1_i1
2023/01/06 22:56:39 Index dropped
2023/01/06 22:56:39 Dropping the secondary index s1_c1_i2
2023/01/06 22:56:39 Index dropped
--- PASS: TestCollectionPrimaryIndex (24.10s)
=== RUN   TestCollectionMultipleBuilds
2023/01/06 22:56:40 Build command issued for the deferred indexes [16761530426435706050 8624117702239491796]
2023/01/06 22:56:41 Build command issued for the deferred indexes [11501555615806311382 15210881044043842740]
2023/01/06 22:56:41 Waiting for index 16761530426435706050 to go active ...
2023/01/06 22:56:42 Waiting for index 16761530426435706050 to go active ...
2023/01/06 22:56:43 Waiting for index 16761530426435706050 to go active ...
2023/01/06 22:56:44 Waiting for index 16761530426435706050 to go active ...
2023/01/06 22:56:45 Index is 16761530426435706050 now active
2023/01/06 22:56:45 Index is 8624117702239491796 now active
2023/01/06 22:56:45 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:46 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:47 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:48 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:49 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:50 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:51 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:52 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:53 Waiting for index 11501555615806311382 to go active ...
2023/01/06 22:56:54 Index is 11501555615806311382 now active
2023/01/06 22:56:54 Index is 15210881044043842740 now active
2023/01/06 22:56:54 Using n1ql client
2023/01/06 22:56:54 Expected and Actual scan responses are the same
2023/01/06 22:56:54 Using n1ql client
2023/01/06 22:56:54 Expected and Actual scan responses are the same
2023/01/06 22:56:54 Using n1ql client
2023/01/06 22:56:54 Expected and Actual scan responses are the same
2023/01/06 22:56:54 Using n1ql client
2023/01/06 22:56:54 Expected and Actual scan responses are the same
2023/01/06 22:56:54 Dropping the secondary index s2_c2_i3
2023/01/06 22:56:54 Index dropped
2023/01/06 22:56:54 Dropping the secondary index s2_c2_i4
2023/01/06 22:56:54 Index dropped
2023/01/06 22:56:54 Dropping the secondary index s1_c1_i1
2023/01/06 22:56:54 Index dropped
2023/01/06 22:56:54 Dropping the secondary index s1_c1_i2
2023/01/06 22:56:54 Index dropped
--- PASS: TestCollectionMultipleBuilds (15.05s)
=== RUN   TestCollectionMultipleBuilds2
2023/01/06 22:56:55 Build command issued for the deferred indexes [1298974613762086907 7423739497064469888 3713142222669233721 9188597762575100391 11681134083264582145 15913439146438225555 14055018052603929492 4876929487483406882]
2023/01/06 22:56:55 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:56:56 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:56:57 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:56:58 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:56:59 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:00 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:01 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:02 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:03 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:04 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:05 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:06 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:07 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:08 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:09 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:10 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:11 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:12 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:13 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:14 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:15 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:16 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:17 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:18 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:19 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:20 Waiting for index 1298974613762086907 to go active ...
2023/01/06 22:57:21 Index is 1298974613762086907 now active
2023/01/06 22:57:21 Index is 7423739497064469888 now active
2023/01/06 22:57:21 Index is 3713142222669233721 now active
2023/01/06 22:57:21 Waiting for index 9188597762575100391 to go active ...
2023/01/06 22:57:22 Index is 9188597762575100391 now active
2023/01/06 22:57:22 Index is 11681134083264582145 now active
2023/01/06 22:57:22 Index is 15913439146438225555 now active
2023/01/06 22:57:22 Index is 14055018052603929492 now active
2023/01/06 22:57:22 Index is 4876929487483406882 now active
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023-01-06T22:57:22.772+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:57:22.772+05:30 [Info] GSIC[default/default-s2-c3-1673026042769432562] started ...
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:22 Using n1ql client
2023/01/06 22:57:22 Expected and Actual scan responses are the same
2023/01/06 22:57:23 Using n1ql client
2023/01/06 22:57:23 Expected and Actual scan responses are the same
2023/01/06 22:57:23 Using n1ql client
2023/01/06 22:57:23 Expected and Actual scan responses are the same
2023/01/06 22:57:23 Dropping the secondary index s1_c1_i1
2023/01/06 22:57:23 Index dropped
2023/01/06 22:57:23 Dropping the secondary index s1_c1_i2
2023/01/06 22:57:24 Index dropped
2023/01/06 22:57:24 Dropping the secondary index s2_c2_i1
2023/01/06 22:57:24 Index dropped
2023/01/06 22:57:24 Dropping the secondary index s2_c2_i2
2023/01/06 22:57:24 Index dropped
2023/01/06 22:57:24 Dropping the secondary index s2_c3_i1
2023/01/06 22:57:24 Index dropped
2023/01/06 22:57:24 Dropping the secondary index s2_c3_i2
2023/01/06 22:57:24 Index dropped
2023/01/06 22:57:24 Dropping the secondary index _default__default_i1
2023/01/06 22:57:24 Index dropped
2023/01/06 22:57:24 Dropping the secondary index _default__default_i2
2023/01/06 22:57:24 Index dropped
--- PASS: TestCollectionMultipleBuilds2 (29.93s)
=== RUN   TestCollectionIndexDropConcurrentBuild
2023/01/06 22:57:24 In TestCollectionIndexDropConcurrentBuild()
2023/01/06 22:57:25 Build command issued for the deferred indexes [17368836276541966189 8931289682174057856]
2023/01/06 22:57:26 Dropping the secondary index s1_c1_i1
2023/01/06 22:57:26 Index dropped
2023/01/06 22:57:26 Waiting for index 8931289682174057856 to go active ...
2023/01/06 22:57:27 Waiting for index 8931289682174057856 to go active ...
2023/01/06 22:57:28 Waiting for index 8931289682174057856 to go active ...
2023/01/06 22:57:29 Index is 8931289682174057856 now active
2023/01/06 22:57:29 Using n1ql client
2023/01/06 22:57:29 Expected and Actual scan responses are the same
2023/01/06 22:57:29 Dropping the secondary index s1_c1_i2
2023/01/06 22:57:29 Index dropped
--- PASS: TestCollectionIndexDropConcurrentBuild (5.04s)
=== RUN   TestCollectionIndexDropConcurrentBuild2
2023/01/06 22:57:29 In TestCollectionIndexDropConcurrentBuild2()
2023/01/06 22:57:36 Created the secondary index s1_c1_i3. Waiting for it become active
2023/01/06 22:57:36 Index is 9914541907438418564 now active
2023/01/06 22:57:36 Using n1ql client
2023/01/06 22:57:36 Expected and Actual scan responses are the same
2023/01/06 22:57:36 Build command issued for the deferred indexes [5021401759953320176 5551947562648291228]
2023/01/06 22:57:37 Dropping the secondary index s1_c1_i3
2023/01/06 22:57:37 Index dropped
2023/01/06 22:57:37 Waiting for index 5021401759953320176 to go active ...
2023/01/06 22:57:38 Waiting for index 5021401759953320176 to go active ...
2023/01/06 22:57:39 Waiting for index 5021401759953320176 to go active ...
2023/01/06 22:57:40 Waiting for index 5021401759953320176 to go active ...
2023/01/06 22:57:41 Waiting for index 5021401759953320176 to go active ...
2023/01/06 22:57:42 Waiting for index 5021401759953320176 to go active ...
2023/01/06 22:57:43 Index is 5021401759953320176 now active
2023/01/06 22:57:43 Index is 5551947562648291228 now active
2023/01/06 22:57:43 Using n1ql client
2023/01/06 22:57:43 Expected and Actual scan responses are the same
2023/01/06 22:57:43 Using n1ql client
2023/01/06 22:57:43 Expected and Actual scan responses are the same
2023/01/06 22:57:43 Dropping the secondary index s1_c1_i1
2023/01/06 22:57:43 Index dropped
2023/01/06 22:57:43 Dropping the secondary index s1_c1_i2
2023/01/06 22:57:46 Index dropped
--- PASS: TestCollectionIndexDropConcurrentBuild2 (16.36s)
=== RUN   TestCollectionDrop
2023/01/06 22:57:46 In TestCollectionDrop()
2023/01/06 22:57:50 Created the secondary index s1_c1_i1. Waiting for it become active
2023/01/06 22:57:50 Index is 3360325006081168164 now active
2023/01/06 22:57:58 Created the secondary index s1_c1_i2. Waiting for it become active
2023/01/06 22:57:58 Index is 9230175393939774452 now active
2023/01/06 22:58:05 Created the secondary index s2_c2_i1. Waiting for it become active
2023/01/06 22:58:05 Index is 4226588059852677755 now active
2023/01/06 22:58:12 Created the secondary index s2_c2_i2. Waiting for it become active
2023/01/06 22:58:12 Index is 6330242339609976911 now active
2023/01/06 22:58:18 Created the secondary index s2_c3_i1. Waiting for it become active
2023/01/06 22:58:18 Index is 15249611108553849396 now active
2023/01/06 22:58:24 Created the secondary index s2_c3_i2. Waiting for it become active
2023/01/06 22:58:24 Index is 1653747365787608011 now active
2023/01/06 22:58:31 Created the secondary index _default__default_i1. Waiting for it become active
2023/01/06 22:58:31 Index is 12247014563219620858 now active
2023/01/06 22:58:38 Created the secondary index _default__default_i2. Waiting for it become active
2023/01/06 22:58:38 Index is 9227957891975444799 now active
2023/01/06 22:58:38 Dropped collection c1 for bucket: default, scope: s1, body: {"uid":"6"}
2023/01/06 22:58:43 Using n1ql client
2023/01/06 22:58:43 Scan failed as expected with error: Index Not Found - cause: GSI index s1_c1_i1 not found.
2023/01/06 22:58:43 Dropped scope s2 for bucket: default, body: {"uid":"7"}
2023/01/06 22:58:48 Using n1ql client
2023-01-06T22:58:48.385+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:58:48.385+05:30 [Info] GSIC[default/default-s2-c1-1673026128382036082] started ...
2023/01/06 22:58:48 Scan failed as expected with error: Index Not Found - cause: GSI index s2_c1_i1 not found.
--- PASS: TestCollectionDrop (62.20s)
=== RUN   TestCollectionDDLWithConcurrentSystemEvents
2023/01/06 22:58:48 Creating scope: sc for bucket: default as it does not exist
2023/01/06 22:58:48 Create scope succeeded for bucket default, scopeName: sc, body: {"uid":"8"} 
2023/01/06 22:58:48 Created collection succeeded for bucket: default, scope: sc, collection: cc, body: {"uid":"9"}
2023/01/06 22:59:03 Created the secondary index sc_cc_i1. Waiting for it become active
2023/01/06 22:59:03 Index is 11525017192536640322 now active
2023/01/06 22:59:05 Build command issued for the deferred indexes [sc_cc_i2], bucket: default, scope: sc, coll: cc
2023/01/06 22:59:05 Waiting for the index sc_cc_i2 to become active
2023/01/06 22:59:05 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:05 Created collection succeeded for bucket: default, scope: sc, collection: cc_0, body: {"uid":"a"}
2023/01/06 22:59:06 Created collection succeeded for bucket: default, scope: sc, collection: cc_1, body: {"uid":"b"}
2023/01/06 22:59:06 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:06 Created collection succeeded for bucket: default, scope: sc, collection: cc_2, body: {"uid":"c"}
2023/01/06 22:59:07 Created collection succeeded for bucket: default, scope: sc, collection: cc_3, body: {"uid":"d"}
2023/01/06 22:59:07 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:08 Created collection succeeded for bucket: default, scope: sc, collection: cc_4, body: {"uid":"e"}
2023/01/06 22:59:08 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:08 Created collection succeeded for bucket: default, scope: sc, collection: cc_5, body: {"uid":"f"}
2023/01/06 22:59:09 Created collection succeeded for bucket: default, scope: sc, collection: cc_6, body: {"uid":"10"}
2023/01/06 22:59:09 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:10 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:11 Created collection succeeded for bucket: default, scope: sc, collection: cc_7, body: {"uid":"11"}
2023/01/06 22:59:11 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:12 Waiting for index 17174784764803082158 to go active ...
2023/01/06 22:59:13 Created collection succeeded for bucket: default, scope: sc, collection: cc_8, body: {"uid":"12"}
2023/01/06 22:59:13 Created collection succeeded for bucket: default, scope: sc, collection: cc_9, body: {"uid":"13"}
2023/01/06 22:59:13 Index is 17174784764803082158 now active
2023/01/06 22:59:13 Using n1ql client
2023-01-06T22:59:13.751+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T22:59:13.751+05:30 [Info] GSIC[default/default-sc-cc-1673026153744504921] started ...
--- PASS: TestCollectionDDLWithConcurrentSystemEvents (25.40s)
=== RUN   TestCollectionDropWithMultipleBuckets
2023/01/06 22:59:13 In TestCollectionWithDropMultipleBuckets()
2023/01/06 22:59:13 This test will create a collection across multiple buckets and 
2023/01/06 22:59:13 drops a collection on one bucket. Indexer should not drop indexes
2023/01/06 22:59:13 with same CollectionID but different buckets
2023/01/06 22:59:13 Creating test_bucket_1
2023/01/06 22:59:13 Created bucket test_bucket_1, responseBody: 
2023/01/06 22:59:23 Creating test_bucket_2
2023/01/06 22:59:23 Created bucket test_bucket_2, responseBody: 
2023/01/06 22:59:33 Creating collection: test for bucket: test_bucket_1
2023/01/06 22:59:34 Created collection succeeded for bucket: test_bucket_1, scope: _default, collection: test, body: {"uid":"1"}
2023/01/06 22:59:34 Creating collection: test for bucket: test_bucket_2
2023/01/06 22:59:34 Created collection succeeded for bucket: test_bucket_2, scope: _default, collection: test, body: {"uid":"1"}
2023/01/06 22:59:44 Creating Index: idx_1 on scope: _default collection: test for bucket: test_bucket_1
2023/01/06 22:59:47 Created the secondary index idx_1. Waiting for it become active
2023/01/06 22:59:47 Index is 11843102560619944578 now active
2023/01/06 22:59:52 Creating Index: idx_1 on scope: _default collection: test for bucket: test_bucket_2
2023/01/06 22:59:56 Created the secondary index idx_1. Waiting for it become active
2023/01/06 22:59:56 Index is 542916048211959756 now active
2023/01/06 23:00:01 Dropping collection: test for bucket: test_bucket_1
2023/01/06 23:00:01 Dropped collection test for bucket: test_bucket_1, scope: _default, body: {"uid":"2"}
2023/01/06 23:00:03 Scanning index: idx_1, bucket: test_bucket_2
2023/01/06 23:00:03 Using n1ql client
2023-01-06T23:00:03.316+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:00:03.317+05:30 [Info] GSIC[default/test_bucket_2-_default-test-1673026203313402147] started ...
2023/01/06 23:00:03 Deleting bucket: test_bucket_2
2023/01/06 23:00:05 Deleted bucket test_bucket_2, responseBody: 
2023/01/06 23:00:10 Creating test_bucket_2
2023/01/06 23:00:10 Created bucket test_bucket_2, responseBody: 
2023/01/06 23:00:20 Creating collection: test for bucket: test_bucket_2
2023/01/06 23:00:20 Created collection succeeded for bucket: test_bucket_2, scope: _default, collection: test, body: {"uid":"1"}
2023/01/06 23:00:30 Creating Index: idx_1 on scope: _default collection: test for bucket: test_bucket_2
2023/01/06 23:00:35 Build command issued for the deferred indexes [idx_1], bucket: test_bucket_2, scope: _default, coll: test
2023/01/06 23:00:35 Waiting for the index idx_1 to become active
2023/01/06 23:00:35 Waiting for index 9021933297093121568 to go active ...
2023/01/06 23:00:36 Waiting for index 9021933297093121568 to go active ...
2023/01/06 23:00:37 Waiting for index 9021933297093121568 to go active ...
2023/01/06 23:00:38 Index is 9021933297093121568 now active
2023/01/06 23:00:38 Scanning index: idx_1, bucket: test_bucket_2
2023/01/06 23:00:38 Using n1ql client
2023/01/06 23:00:40 Deleted bucket test_bucket_1, responseBody: 
2023/01/06 23:00:43 Deleted bucket test_bucket_2, responseBody: 
--- PASS: TestCollectionDropWithMultipleBuckets (94.68s)
=== RUN   TestStringToByteSlice
--- PASS: TestStringToByteSlice (0.00s)
=== RUN   TestStringToByteSlice_Stack
--- PASS: TestStringToByteSlice_Stack (1.53s)
=== RUN   TestByteSliceToString
--- PASS: TestByteSliceToString (0.00s)
=== RUN   TestBytesToString_WithUnusedBytes
--- PASS: TestBytesToString_WithUnusedBytes (0.00s)
=== RUN   TestStringHeadersCompatible
--- PASS: TestStringHeadersCompatible (0.00s)
=== RUN   TestSliceHeadersCompatible
--- PASS: TestSliceHeadersCompatible (0.00s)
=== RUN   TestEphemeralBucketBasic
2023/01/06 23:00:50 In TestEphemeralBuckets()
2023/01/06 23:00:50 In DropAllSecondaryIndexes()
2023/01/06 23:00:50 Index found:  sc_cc_i1
2023/01/06 23:00:50 Dropped index sc_cc_i1
2023/01/06 23:00:50 Index found:  sc_cc_i2
2023/01/06 23:00:50 Dropped index sc_cc_i2
2023/01/06 23:00:50 Index found:  _default__default_i2
2023/01/06 23:00:50 Dropped index _default__default_i2
2023/01/06 23:00:50 Index found:  _default__default_i1
2023/01/06 23:00:50 Dropped index _default__default_i1
2023/01/06 23:01:26 Flushed the bucket default, Response body: 
2023/01/06 23:01:29 Modified parameters of bucket default, responseBody: 
2023/01/06 23:01:29 Created bucket ephemeral1, responseBody: 
2023/01/06 23:01:29 Created bucket ephemeral2, responseBody: 
2023/01/06 23:01:29 Created bucket ephemeral3, responseBody: 
2023/01/06 23:01:44 Generating docs and Populating all the buckets
2023/01/06 23:01:48 Created the secondary index bucket1_age. Waiting for it become active
2023/01/06 23:01:48 Index is 12677695387077783993 now active
2023/01/06 23:01:55 Created the secondary index bucket2_city. Waiting for it become active
2023/01/06 23:01:55 Index is 12385656155493767884 now active
2023/01/06 23:02:02 Created the secondary index bucket3_gender. Waiting for it become active
2023/01/06 23:02:02 Index is 17016491495372935123 now active
2023/01/06 23:02:13 Created the secondary index bucket4_balance. Waiting for it become active
2023/01/06 23:02:13 Index is 5727400406235622369 now active
2023/01/06 23:02:16 Using n1ql client
2023/01/06 23:02:16 Expected and Actual scan responses are the same
2023/01/06 23:02:16 Using n1ql client
2023-01-06T23:02:17.006+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:02:17.006+05:30 [Info] GSIC[default/ephemeral1-_default-_default-1673026336999768285] started ...
2023/01/06 23:02:17 Expected and Actual scan responses are the same
2023/01/06 23:02:17 Using n1ql client
2023-01-06T23:02:17.026+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:02:17.027+05:30 [Info] GSIC[default/ephemeral2-_default-_default-1673026337019173212] started ...
2023/01/06 23:02:17 Expected and Actual scan responses are the same
2023/01/06 23:02:17 Using n1ql client
2023-01-06T23:02:17.110+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:02:17.110+05:30 [Info] GSIC[default/ephemeral3-_default-_default-1673026337057523773] started ...
2023/01/06 23:02:17 Expected and Actual scan responses are the same
2023/01/06 23:02:18 Deleted bucket ephemeral1, responseBody: 
2023/01/06 23:02:20 Deleted bucket ephemeral2, responseBody: 
2023/01/06 23:02:22 Deleted bucket ephemeral3, responseBody: 
2023/01/06 23:02:25 Modified parameters of bucket default, responseBody: 
--- PASS: TestEphemeralBucketBasic (110.49s)
=== RUN   TestEphemeralBucketRecovery
2023/01/06 23:02:40 In TestEphemeralBucketRecovery()
2023/01/06 23:02:40 In DropAllSecondaryIndexes()
2023/01/06 23:02:40 Index found:  bucket1_age
2023/01/06 23:02:40 Dropped index bucket1_age
2023/01/06 23:03:16 Flushed the bucket default, Response body: 
2023/01/06 23:03:19 Modified parameters of bucket default, responseBody: 
2023/01/06 23:03:19 Created bucket ephemeral1, responseBody: 
2023/01/06 23:03:34 Generating docs and Populating all the buckets
2023/01/06 23:03:38 Created the secondary index bucket1_age. Waiting for it become active
2023/01/06 23:03:38 Index is 9659947917785639439 now active
2023/01/06 23:03:45 Created the secondary index bucket2_city. Waiting for it become active
2023/01/06 23:03:45 Index is 2789621620270125142 now active
Restarting indexer process ...
2023/01/06 23:03:48 []
2023-01-06T23:03:48.296+05:30 [Error] PeerPipe.doRecieve() : ecounter error when received mesasage from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:03:48.296+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:03:48.296+05:30 [Error] PeerPipe.doRecieve() : ecounter error when received mesasage from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:03:48.297+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:04:08 Using n1ql client
2023-01-06T23:04:08.220+05:30 [Error] transport error between 127.0.0.1:60162->127.0.0.1:9107: write tcp 127.0.0.1:60162->127.0.0.1:9107: write: broken pipe
2023-01-06T23:04:08.220+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -1078830925666318536 request transport failed `write tcp 127.0.0.1:60162->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:04:08.220+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:04:08.220+05:30 [Error] metadataClient:PickRandom: Replicas - [15595283789163209707], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 23:04:08 Expected and Actual scan responses are the same
2023/01/06 23:04:08 Using n1ql client
2023-01-06T23:04:08.241+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:04:08.242+05:30 [Info] GSIC[default/ephemeral1-_default-_default-1673026448239304738] started ...
2023/01/06 23:04:08 Expected and Actual scan responses are the same
2023/01/06 23:04:09 Deleted bucket ephemeral1, responseBody: 
2023/01/06 23:04:12 Modified parameters of bucket default, responseBody: 
--- PASS: TestEphemeralBucketRecovery (107.21s)
=== RUN   TestEphemeralBucketFlush
2023/01/06 23:04:27 In TestEphemeralBucketFlush()
2023/01/06 23:04:27 In DropAllSecondaryIndexes()
2023/01/06 23:04:27 Index found:  bucket1_age
2023/01/06 23:04:27 Dropped index bucket1_age
2023/01/06 23:05:03 Flushed the bucket default, Response body: 
2023/01/06 23:05:06 Modified parameters of bucket default, responseBody: 
2023/01/06 23:05:06 Created bucket ephemeral1, responseBody: 
2023/01/06 23:05:21 Generating docs and Populating all the buckets
2023/01/06 23:05:25 Created the secondary index bucket1_age. Waiting for it become active
2023/01/06 23:05:25 Index is 2099911274518964931 now active
2023/01/06 23:05:32 Created the secondary index bucket2_city. Waiting for it become active
2023/01/06 23:05:32 Index is 12972120647782425880 now active
2023/01/06 23:06:10 Flushed the bucket default, Response body: 
2023/01/06 23:06:13 Flush Enabled on bucket ephemeral1, responseBody: 
2023/01/06 23:06:46 Flushed the bucket ephemeral1, Response body: 
2023/01/06 23:06:46 Using n1ql client
2023/01/06 23:06:46 Using n1ql client
2023-01-06T23:06:46.836+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:06:46.837+05:30 [Info] GSIC[default/ephemeral1-_default-_default-1673026606826276361] started ...
2023/01/06 23:06:47 Deleted bucket ephemeral1, responseBody: 
2023/01/06 23:06:50 Modified parameters of bucket default, responseBody: 
--- PASS: TestEphemeralBucketFlush (158.24s)
=== RUN   TestEphemeralBucketMCDCrash
2023/01/06 23:07:05 In TestEphemeralBucketMCDCrash()
2023/01/06 23:07:05 In DropAllSecondaryIndexes()
2023/01/06 23:07:05 Index found:  bucket1_age
2023/01/06 23:07:06 Dropped index bucket1_age
2023/01/06 23:07:41 Flushed the bucket default, Response body: 
2023/01/06 23:07:44 Modified parameters of bucket default, responseBody: 
2023/01/06 23:07:44 Created bucket ephemeral1, responseBody: 
2023/01/06 23:07:59 Generating docs and Populating all the buckets
2023/01/06 23:08:03 Created the secondary index bucket1_age. Waiting for it become active
2023/01/06 23:08:03 Index is 10537942921597997939 now active
2023/01/06 23:08:10 Created the secondary index bucket2_city. Waiting for it become active
2023/01/06 23:08:10 Index is 13513009777938587953 now active
Restarting memcached process ...
2023/01/06 23:08:13 []
2023/01/06 23:08:33 Using n1ql client
2023/01/06 23:08:33 Expected and Actual scan responses are the same
2023/01/06 23:08:33 Using n1ql client
2023-01-06T23:08:33.210+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:08:33.210+05:30 [Info] GSIC[default/ephemeral1-_default-_default-1673026713202227765] started ...
2023/01/06 23:08:34 Deleted bucket ephemeral1, responseBody: 
2023/01/06 23:08:37 Modified parameters of bucket default, responseBody: 
--- PASS: TestEphemeralBucketMCDCrash (106.79s)
=== RUN   TestScheduleIndexBasic
2023/01/06 23:08:52 In TestMultipleDeferredIndexes_BuildTogether()
2023/01/06 23:08:52 In DropAllSecondaryIndexes()
2023/01/06 23:08:52 Index found:  bucket1_age
2023/01/06 23:08:52 Dropped index bucket1_age
2023/01/06 23:09:05 Setting JSON docs in KV
2023/01/06 23:09:27 Changing config key indexer.debug.enableBackgroundIndexCreation to value false
2023/01/06 23:09:27 Creating indexes Asynchronously
2023/01/06 23:09:27 Finding definition IDs for all indexes
2023/01/06 23:09:33 Status of all indexes
2023/01/06 23:09:33 Index id_company is in state INDEX_STATE_SCHEDULED
2023/01/06 23:09:34 Changing config key indexer.debug.enableBackgroundIndexCreation to value true
2023/01/06 23:09:34 Waiting for all indexes to become active
2023/01/06 23:09:34 Waiting for index 17203097775943791035 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:34 Waiting for index 13200394802844025919 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:34 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:34 Waiting for index 413583099492632706 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:37 Waiting for index 17203097775943791035 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:37 Waiting for index 13200394802844025919 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:37 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:37 Index 413583099492632706 is now active
2023/01/06 23:09:40 Waiting for index 17203097775943791035 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:40 Waiting for index 13200394802844025919 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:40 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:43 Waiting for index 17203097775943791035 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:43 Waiting for index 13200394802844025919 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:43 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:46 Index 17203097775943791035 is now active
2023/01/06 23:09:46 Waiting for index 13200394802844025919 in state INDEX_STATE_READY to go active ...
2023/01/06 23:09:46 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:49 Waiting for index 13200394802844025919 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:49 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:52 Waiting for index 13200394802844025919 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:52 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:55 Waiting for index 13200394802844025919 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:09:55 Waiting for index 16813458371448204413 in state INDEX_STATE_ERROR to go active ...
2023/01/06 23:09:58 Index 13200394802844025919 is now active
2023/01/06 23:09:58 Waiting for index 16813458371448204413 in state INDEX_STATE_READY to go active ...
2023/01/06 23:10:01 Waiting for index 16813458371448204413 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:10:04 Waiting for index 16813458371448204413 in state INDEX_STATE_INITIAL to go active ...
2023/01/06 23:10:07 Waiting for index 16813458371448204413 in state INDEX_STATE_CATCHUP to go active ...
2023/01/06 23:10:07 Error observed when creating index
2023/01/06 23:10:07 Error observed when creating index
2023/01/06 23:10:07 Error observed when creating index
2023/01/06 23:10:07 Error observed when creating index
2023/01/06 23:10:10 Index 16813458371448204413 is now active
--- PASS: TestScheduleIndexBasic (77.43s)
=== RUN   TestFlattenArrayIndexTestSetup
2023/01/06 23:10:10 In DropAllSecondaryIndexes()
2023/01/06 23:10:10 Index found:  id_company
2023/01/06 23:10:10 Dropped index id_company
2023/01/06 23:10:10 Index found:  id_gender
2023/01/06 23:10:10 Dropped index id_gender
2023/01/06 23:10:10 Index found:  id_isActive
2023/01/06 23:10:10 Dropped index id_isActive
2023/01/06 23:10:10 Index found:  id_age
2023/01/06 23:10:10 Dropped index id_age
2023/01/06 23:10:45 Flushed the bucket default, Response body: 
--- PASS: TestFlattenArrayIndexTestSetup (40.39s)
=== RUN   TestScanOnFlattenedAraryIndex
2023/01/06 23:11:00 Created the secondary index idx_flatten. Waiting for it to become active
2023/01/06 23:11:00 Index is 3271626248762950687 now active
2023/01/06 23:11:00 Using n1ql client
--- PASS: TestScanOnFlattenedAraryIndex (10.48s)
=== RUN   TestGroupAggrFlattenArrayIndex
2023/01/06 23:11:01 In TestGroupAggrArrayIndex()
2023/01/06 23:11:07 Created the secondary index ga_flatten_arr1. Waiting for it to become active
2023/01/06 23:11:07 Index is 15293860219679258439 now active
2023/01/06 23:11:13 Created the secondary index ga_flatten_arr2. Waiting for it to become active
2023/01/06 23:11:13 Index is 10250145059818302959 now active
2023/01/06 23:11:13 Scenario 1
2023-01-06T23:11:13.572+05:30 [Error] transport error between 127.0.0.1:60148->127.0.0.1:9107: write tcp 127.0.0.1:60148->127.0.0.1:9107: write: broken pipe
2023-01-06T23:11:13.572+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:60148->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:11:13.572+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:11:13.572+05:30 [Error] metadataClient:PickRandom: Replicas - [9249136673816339734], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 23:11:13 Total Scanresults = 644
2023/01/06 23:11:14 Scenario 2
2023/01/06 23:11:14 Total Scanresults = 2864
2023/01/06 23:11:14 Scenario 3
2023/01/06 23:11:14 Total Scanresults = 1
2023/01/06 23:11:14 Scenario 4
2023/01/06 23:11:15 Total Scanresults = 995
2023/01/06 23:11:15 Scenario 5
2023/01/06 23:11:15 Total Scanresults = 2864
2023/01/06 23:11:16 Scenario 6
2023/01/06 23:11:16 Total Scanresults = 1
2023/01/06 23:11:16 Scenario 7
2023/01/06 23:11:16 Total Scanresults = 2970
2023/01/06 23:11:19 Scenario 8
2023/01/06 23:11:19 Total Scanresults = 2864
2023/01/06 23:11:20 Scenario 9
2023/01/06 23:11:20 Total Scanresults = 1
2023/01/06 23:11:20 Scenario 10
2023/01/06 23:11:20 Total Scanresults = 644
2023/01/06 23:11:21 Scenario 11
2023/01/06 23:11:21 Total Scanresults = 1191
2023/01/06 23:11:22 Scenario 12
2023/01/06 23:11:22 Total Scanresults = 1
2023/01/06 23:11:22 Scenario 13
2023/01/06 23:11:27 Total Scanresults = 1
2023/01/06 23:11:27 Count of scanResults is 1
2023/01/06 23:11:27 Value: [1 21]
--- PASS: TestGroupAggrFlattenArrayIndex (26.70s)
=== RUN   TestNullAndMissingValuesFlattenArrayIndex
2023/01/06 23:11:27 In TestNullAndMissingValuesFlattenArrayIndex
2023/01/06 23:11:29 Scenario-1: Scanning for docsWithNullEntries with array as non-leading key
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-2: Scanning for docsWithMissingEntries with array as non-leading key
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-3: Scanning for docs with 'missing' entry for first key in array expression with array as non-leading key
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-4: Scanning for docs with valid entry for first key in array expression with array as non-leading key
2023/01/06 23:11:29 All docs in docsWithPartialMissingLeadingKeyInArrEntry should be present in results
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-5: Scanning for docsWithNullEntries with array as leading key
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-6: Scanning for docsWithMissingEntries with array as leading entry
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-7: Scanning for docs with 'missing' entry for first key in array expression
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-8: Scanning for all docs in the index
2023/01/06 23:11:29 Docs in docsWithCompleteMissingLeadingKeyInArrEntry should be present in results
2023/01/06 23:11:29 Using n1ql client
2023/01/06 23:11:29 Scenario-9: Scanning for docs with valid entry for first key in array expression
2023/01/06 23:11:29 All docs in docsWithPartialMissingLeadingKeyInArrEntry should be present in results
2023/01/06 23:11:29 Using n1ql client
--- PASS: TestNullAndMissingValuesFlattenArrayIndex (1.51s)
=== RUN   TestEmptyArrayFlattenArrayIndex
2023/01/06 23:11:29 In TestEmptyArrayFlattenArrayIndex
2023/01/06 23:11:30 Scenario-1: Scanning for docs with missing entry for first key in array expression
2023/01/06 23:11:30 The docs in `docsWithEmptyArrayEntry` should not be present in scanResults
2023/01/06 23:11:30 Using n1ql client
2023/01/06 23:11:30 Scenario-2: Scanning for all docs in the index
2023/01/06 23:11:30 The docs in `docsWithEmptyArrayEntry` should not be present in scanResults
2023/01/06 23:11:30 Using n1ql client
--- PASS: TestEmptyArrayFlattenArrayIndex (1.22s)
=== RUN   TestOSOSetup
2023/01/06 23:11:30 In TestOSOSetup()
2023/01/06 23:11:30 In DropAllSecondaryIndexes()
2023/01/06 23:11:30 Index found:  ga_flatten_arr2
2023/01/06 23:11:30 Dropped index ga_flatten_arr2
2023/01/06 23:11:30 Index found:  idx_flatten
2023/01/06 23:11:31 Dropped index idx_flatten
2023/01/06 23:11:31 Index found:  ga_flatten_arr1
2023/01/06 23:11:31 Dropped index ga_flatten_arr1
2023/01/06 23:11:31 Index found:  test_oneperprimarykey
2023/01/06 23:11:32 Dropped index test_oneperprimarykey
2023/01/06 23:11:32 Index found:  #primary
2023/01/06 23:11:33 Dropped index #primary
2023/01/06 23:12:08 Flushed the bucket default, Response body: 
2023/01/06 23:12:09 Populating the default bucket
2023/01/06 23:12:14 Changing config key indexer.build.enableOSO to value true
--- PASS: TestOSOSetup (43.91s)
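The repeated "Changing config key ... to value ..." steps (here `indexer.build.enableOSO`) boil down to a single REST call that flips one indexer setting. The sketch below only builds the request; the `/internal/settings` path and admin port are assumptions for illustration, not confirmed from this log.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildSettingsRequest constructs the POST that changes a single indexer
// config key, as in "Changing config key indexer.build.enableOSO to value
// true". Endpoint path and port are illustrative assumptions; the real
// harness may use a different admin API.
func buildSettingsRequest(host, key string, value interface{}) (*http.Request, error) {
	body, err := json.Marshal(map[string]interface{}{key: value})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		fmt.Sprintf("http://%s/internal/settings", host), bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildSettingsRequest("127.0.0.1:9102", "indexer.build.enableOSO", true)
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String()) // prints "POST http://127.0.0.1:9102/internal/settings"
}
```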
=== RUN   TestOSOInitBuildDeleteMutation
2023/01/06 23:12:14 In TestOSOInitBuildDeleteMutation()
2023/01/06 23:12:18 Created the secondary index index_p1_oso. Waiting for it to become active
2023/01/06 23:12:18 Index is 14601503077881419795 now active
2023/01/06 23:12:26 Created the secondary index index_p_oso. Waiting for it to become active
2023/01/06 23:12:26 Index is 13411968390355876009 now active
2023/01/06 23:12:26 Expected and Actual scan responses are the same
2023/01/06 23:12:26 CountRange() expected and actual is:  1791 and 1791
2023/01/06 23:12:26 lookupkey for CountLookup() = User390349cc-80bd-4935-8805-5469763eb2c0
2023/01/06 23:12:26 CountLookup() = 1
--- PASS: TestOSOInitBuildDeleteMutation (12.41s)
=== RUN   TestOSOInitBuildIndexerRestart
2023/01/06 23:12:26 In TestOSOInitBuildIndexerRestart()
2023/01/06 23:12:26 Build command issued for the deferred indexes [17368340661804080807]
2023/01/06 23:12:27 []
2023-01-06T23:12:27.095+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:12:27.096+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:12:27.096+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:12:27.096+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:12:32 Waiting for index 17368340661804080807 to go active ...
2023/01/06 23:12:33 Waiting for index 17368340661804080807 to go active ...
2023/01/06 23:12:34 Waiting for index 17368340661804080807 to go active ...
2023/01/06 23:12:35 Index is 17368340661804080807 now active
2023-01-06T23:12:35.057+05:30 [Error] transport error between 127.0.0.1:46014->127.0.0.1:9107: write tcp 127.0.0.1:46014->127.0.0.1:9107: write: broken pipe
2023-01-06T23:12:35.057+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"]  request transport failed `write tcp 127.0.0.1:46014->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:12:35.058+05:30 [Warn] scan failed: requestId  queryport 127.0.0.1:9107 inst 8900307709147709251 partition [0]
2023-01-06T23:12:35.058+05:30 [Warn] Scan failed with error for index 17368340661804080807.  Trying scan again with replica, reqId: :  write tcp 127.0.0.1:46014->127.0.0.1:9107: write: broken pipe from [127.0.0.1:9107] ...
2023-01-06T23:12:35.058+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:12:35.058+05:30 [Error] metadataClient:PickRandom: Replicas - [18130613850443258749], PrunedReplica - map[], FilteredReplica map[]
2023-01-06T23:12:35.058+05:30 [Warn] Fail to find indexers to satisfy query request.  Trying scan again for index 17368340661804080807, reqId: :  write tcp 127.0.0.1:46014->127.0.0.1:9107: write: broken pipe from [127.0.0.1:9107] ...
2023/01/06 23:12:35 Expected and Actual scan responses are the same
2023/01/06 23:12:35 CountRange() expected and actual is:  1791 and 1791
2023/01/06 23:12:35 lookupkey for CountLookup() = Usere03f38cb-b6fe-433c-93c9-011ad8d49e88
2023/01/06 23:12:35 CountLookup() = 1
--- PASS: TestOSOInitBuildIndexerRestart (8.34s)
=== RUN   TestMissingLeadingKeyBasic
2023/01/06 23:12:35 In DropAllSecondaryIndexes()
2023/01/06 23:12:35 Index found:  index_p2_oso
2023/01/06 23:12:35 Dropped index index_p2_oso
2023/01/06 23:12:35 Index found:  index_p_oso
2023/01/06 23:12:35 Dropped index index_p_oso
2023/01/06 23:12:35 Index found:  index_p1_oso
2023/01/06 23:12:35 Dropped index index_p1_oso
2023/01/06 23:13:11 Flushed the bucket default, Response body: 
2023/01/06 23:13:11 Populating the default bucket
--- PASS: TestMissingLeadingKeyBasic (41.19s)
=== RUN   TestMissingLeadingKeyPartitioned
2023/01/06 23:13:16 In DropAllSecondaryIndexes()
2023/01/06 23:13:16 Index found:  idx_vac
2023/01/06 23:13:16 Dropped index idx_vac
--- PASS: TestMissingLeadingKeyPartitioned (7.07s)
=== RUN   TestIdxCorruptBasicSanityMultipleIndices
2023/01/06 23:13:23 In DropAllSecondaryIndexes()
2023/01/06 23:13:23 Index found:  idx_doses_partn
2023/01/06 23:13:23 Dropped index idx_doses_partn
Creating two indices ...
2023/01/06 23:13:36 Created the secondary index corrupt_idx1_age. Waiting for it to become active
2023/01/06 23:13:36 Index is 13225225085921220461 now active
2023/01/06 23:13:44 Created the secondary index corrupt_idx2_company. Waiting for it to become active
2023/01/06 23:13:44 Index is 16602644437835286553 now active
hosts = [127.0.0.1:9108]
2023/01/06 23:13:44 Corrupting index corrupt_idx1_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx1_age_17965630548957211144_0.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx1_age_17965630548957211144_0.index/snapshot.2023-01-06.231336.345
Restarting indexer process ...
2023/01/06 23:13:49 []
2023-01-06T23:13:49.221+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:13:49.221+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:13:49.221+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:13:49.221+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:14:10 Using n1ql client
2023-01-06T23:14:10.194+05:30 [Error] transport error between 127.0.0.1:58760->127.0.0.1:9107: write tcp 127.0.0.1:58760->127.0.0.1:9107: write: broken pipe
2023-01-06T23:14:10.194+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -3070866414381571285 request transport failed `write tcp 127.0.0.1:58760->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:14:10.195+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:14:10.195+05:30 [Error] metadataClient:PickRandom: Replicas - [2931148347203012122], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 23:14:10 Using n1ql client
--- PASS: TestIdxCorruptBasicSanityMultipleIndices (46.84s)
=== RUN   TestIdxCorruptPartitionedIndex
Creating partitioned index ...
2023/01/06 23:14:14 Created the secondary index corrupt_idx3_age. Waiting for it to become active
2023/01/06 23:14:14 Index is 11757091623595143660 now active
hosts = [127.0.0.1:9108]
indexer.numPartitions = 8
Corrupting partn id 4
Getting slicepath for  1
slicePath for partn 1 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_1.index
Getting slicepath for  2
slicePath for partn 2 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_2.index
Getting slicepath for  3
slicePath for partn 3 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_3.index
Getting slicepath for  4
slicePath for partn 4 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_4.index
Getting slicepath for  5
slicePath for partn 5 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_5.index
Getting slicepath for  6
slicePath for partn 6 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_6.index
Getting slicepath for  7
slicePath for partn 7 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_7.index
Getting slicepath for  8
slicePath for partn 8 = /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_8.index
2023/01/06 23:14:14 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_4.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_4.index/snapshot.2023-01-06.231413.145
Restarting indexer process ...
2023/01/06 23:14:19 []
2023-01-06T23:14:19.953+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:14:19.954+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:14:19.954+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:14:19.954+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:14:39 Using n1ql client
2023-01-06T23:14:39.934+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2023-01-06T23:14:39.934+05:30 [Error] metadataClient:PickRandom: Replicas - [7729956654179230649], PrunedReplica - map[], FilteredReplica map[]
2023-01-06T23:14:39.945+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2023-01-06T23:14:39.945+05:30 [Error] metadataClient:PickRandom: Replicas - [7729956654179230649], PrunedReplica - map[], FilteredReplica map[]
Scan error: All indexer replica is down or unavailable or unable to process request - cause: queryport.client.noHost
Verified single partition corruption
Restarting indexer process ...
2023/01/06 23:14:39 []
2023-01-06T23:14:40.009+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:14:40.009+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:14:40.009+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:14:40.009+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:14:59 Using n1ql client
2023-01-06T23:14:59.987+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2023-01-06T23:14:59.987+05:30 [Error] metadataClient:PickRandom: Replicas - [7729956654179230649], PrunedReplica - map[], FilteredReplica map[]
2023-01-06T23:14:59.997+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2023-01-06T23:14:59.998+05:30 [Error] metadataClient:PickRandom: Replicas - [7729956654179230649], PrunedReplica - map[], FilteredReplica map[]
Scan error: All indexer replica is down or unavailable or unable to process request - cause: queryport.client.noHost
2023/01/06 23:15:00 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_1.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_1.index/snapshot.2023-01-06.231413.146
2023/01/06 23:15:05 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_2.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_2.index/snapshot.2023-01-06.231413.148
2023/01/06 23:15:10 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_3.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_3.index/snapshot.2023-01-06.231413.148
Skip corrupting partition 4
2023/01/06 23:15:15 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_5.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_5.index/snapshot.2023-01-06.231413.147
2023/01/06 23:15:20 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_6.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_6.index/snapshot.2023-01-06.231413.146
2023/01/06 23:15:25 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_7.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_7.index/snapshot.2023-01-06.231413.144
2023/01/06 23:15:30 Corrupting index corrupt_idx3_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_8.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx3_age_7729956654179230649_8.index/snapshot.2023-01-06.231413.146
Restarting indexer process ...
2023/01/06 23:15:35 []
2023-01-06T23:15:35.064+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:15:35.065+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:15:35.065+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:15:35.065+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:15:55 Using n1ql client
2023-01-06T23:15:55.046+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2023-01-06T23:15:55.046+05:30 [Error] metadataClient:PickRandom: Replicas - [7729956654179230649], PrunedReplica - map[], FilteredReplica map[]
2023-01-06T23:15:55.057+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 8.  Partition with instances 7 
2023-01-06T23:15:55.057+05:30 [Error] metadataClient:PickRandom: Replicas - [7729956654179230649], PrunedReplica - map[], FilteredReplica map[]
Scan error: All indexer replica is down or unavailable or unable to process request - cause: queryport.client.noHost
--- PASS: TestIdxCorruptPartitionedIndex (104.85s)
=== RUN   TestIdxCorruptMOITwoSnapsOneCorrupt
2023/01/06 23:15:55 Changing config key indexer.settings.persisted_snapshot.moi.interval to value 20000
Creating an index ...
2023/01/06 23:16:09 Created the secondary index corrupt_idx4_age. Waiting for it to become active
2023/01/06 23:16:09 Index is 3061486803463387579 now active
hosts = [127.0.0.1:9108]
Populating the default bucket with more docs
Snapshots:  [/opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx4_age_7673351825244626292_0.index/snapshot.2023-01-06.231609.749 /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx4_age_7673351825244626292_0.index/snapshot.2023-01-06.231629.749]
2023/01/06 23:16:37 Corrupting index corrupt_idx4_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx4_age_7673351825244626292_0.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx4_age_7673351825244626292_0.index/snapshot.2023-01-06.231629.749
Restarting indexer process ...
2023/01/06 23:16:37 []
2023-01-06T23:16:37.686+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:16:37.686+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:16:37.688+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:16:37.690+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:16:57 Using n1ql client
2023-01-06T23:16:57.666+05:30 [Error] transport error between 127.0.0.1:53056->127.0.0.1:9107: write tcp 127.0.0.1:53056->127.0.0.1:9107: write: broken pipe
2023-01-06T23:16:57.666+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 4369531303897471814 request transport failed `write tcp 127.0.0.1:53056->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:16:57.666+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:16:57.666+05:30 [Error] metadataClient:PickRandom: Replicas - [7673351825244626292], PrunedReplica - map[], FilteredReplica map[]
Restarting indexer process ...
2023/01/06 23:16:57 []
2023-01-06T23:17:09.312+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673027218683991643 1673027225314510932 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T23:17:09.312+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673027218683991643 1673027225314510932 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T23:17:14.312+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673027218683991643 1673027231314847826 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T23:17:14.312+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673027218683991643 1673027231314847826 map[]}, clientStatsPtr.Stats[bucket] 
2023/01/06 23:17:17 Using n1ql client
2023-01-06T23:17:17.711+05:30 [Error] transport error between 127.0.0.1:59782->127.0.0.1:9107: write tcp 127.0.0.1:59782->127.0.0.1:9107: write: broken pipe
2023-01-06T23:17:17.711+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -1136310487018097594 request transport failed `write tcp 127.0.0.1:59782->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:17:17.711+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:17:17.711+05:30 [Error] metadataClient:PickRandom: Replicas - [7673351825244626292], PrunedReplica - map[], FilteredReplica map[]
--- PASS: TestIdxCorruptMOITwoSnapsOneCorrupt (82.68s)
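In the two-snapshot MOI test above, the harness lists the snapshot directories and corrupts only the newest one, so that recovery can fall back to the older snapshot. Because the names embed zero-padded date and time (`snapshot.2023-01-06.231629.749`), the newest snapshot can be picked lexicographically; a sketch under that assumption (`pickLatestSnapshot` is a hypothetical name, and the trailing counter would need real timestamp parsing in general):

```go
package main

import (
	"fmt"
	"sort"
)

// pickLatestSnapshot returns the lexicographically greatest snapshot name.
// For the zero-padded, timestamped names in the log this is also the
// newest snapshot, which is the one the test corrupts.
func pickLatestSnapshot(names []string) string {
	if len(names) == 0 {
		return ""
	}
	sorted := append([]string(nil), names...)
	sort.Strings(sorted)
	return sorted[len(sorted)-1]
}

func main() {
	latest := pickLatestSnapshot([]string{
		"snapshot.2023-01-06.231609.749",
		"snapshot.2023-01-06.231629.749",
	})
	fmt.Println(latest) // prints "snapshot.2023-01-06.231629.749"
}
```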
=== RUN   TestIdxCorruptMOITwoSnapsBothCorrupt
2023/01/06 23:17:17 Changing config key indexer.settings.persisted_snapshot.moi.interval to value 20000
Creating an index ...
2023-01-06T23:17:19.314+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673027218683991643 1673027237314531414 map[]}, clientStatsPtr.Stats[bucket] 
2023-01-06T23:17:19.314+05:30 [Error] watcher.updateIndexStats2NoLock: unexpected nil *DedupedIndexStats. bucket default, dedupedIndexStats &{0 0 1673027218683991643 1673027237314531414 map[]}, clientStatsPtr.Stats[bucket] 
2023/01/06 23:17:22 Created the secondary index corrupt_idx5_name. Waiting for it to become active
2023/01/06 23:17:22 Index is 15826253710312418498 now active
hosts = [127.0.0.1:9108]
Populating the default bucket with more docs
Snapshots:  [/opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx5_name_17814919369314355385_0.index/snapshot.2023-01-06.231721.799 /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx5_name_17814919369314355385_0.index/snapshot.2023-01-06.231738.702]
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx5_name_17814919369314355385_0.index/snapshot.2023-01-06.231738.702
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx5_name_17814919369314355385_0.index/snapshot.2023-01-06.231721.799
Restarting indexer process ...
2023/01/06 23:17:50 []
2023-01-06T23:17:50.936+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:17:50.936+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:17:50.936+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:17:50.937+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:18:10 Using n1ql client
2023-01-06T23:18:10.911+05:30 [Error] transport error between 127.0.0.1:60652->127.0.0.1:9107: write tcp 127.0.0.1:60652->127.0.0.1:9107: write: broken pipe
2023-01-06T23:18:10.911+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] -7589699043027880986 request transport failed `write tcp 127.0.0.1:60652->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:18:10.911+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:18:10.911+05:30 [Error] metadataClient:PickRandom: Replicas - [17814919369314355385], PrunedReplica - map[], FilteredReplica map[]
2023-01-06T23:18:10.929+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] Range(-7589699043027880986) response failed `Index not found`
2023-01-06T23:18:10.929+05:30 [Error] metadataClient:PickRandom: Fail to find indexer for all index partitions. Num partition 1.  Partition with instances 0 
2023-01-06T23:18:10.929+05:30 [Error] metadataClient:PickRandom: Replicas - [17814919369314355385], PrunedReplica - map[], FilteredReplica map[]
Scan error:  Index not found from [127.0.0.1:9107] - cause:  Index not found from [127.0.0.1:9107]
--- PASS: TestIdxCorruptMOITwoSnapsBothCorrupt (53.19s)
=== RUN   TestIdxCorruptBackup
2023/01/06 23:18:11 Changing config key indexer.settings.enable_corrupt_index_backup to value true
Creating index ...
2023/01/06 23:18:26 Created the secondary index corrupt_idx6_age. Waiting for it to become active
2023/01/06 23:18:26 Index is 16610367279745833594 now active
hosts = [127.0.0.1:9108]
2023/01/06 23:18:26 Corrupting index corrupt_idx6_age slicePath /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx6_age_14337741984162473363_0.index
snapshot datapath =  /opt/build/ns_server/data/n_1/data/@2i/default_corrupt_idx6_age_14337741984162473363_0.index/snapshot.2023-01-06.231825.308
Restarting indexer process ...
2023/01/06 23:18:31 []
2023-01-06T23:18:31.746+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:18:31.747+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:18:31.747+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:18:31.747+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:18:31.747+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7589699043027880986) connection "127.0.0.1:34546" closed `EOF`
--- PASS: TestIdxCorruptBackup (40.79s)
=== RUN   TestStatsPersistence
2023/01/06 23:18:51 In TestStatsPersistence()
2023/01/06 23:18:51 In DropAllSecondaryIndexes()
2023/01/06 23:18:51 Index found:  corrupt_idx6_age
2023/01/06 23:19:01 Dropped index corrupt_idx6_age
2023/01/06 23:19:01 Index found:  corrupt_idx2_company
2023/01/06 23:19:01 Dropped index corrupt_idx2_company
2023/01/06 23:19:01 Index found:  corrupt_idx4_age
2023/01/06 23:19:01 Dropped index corrupt_idx4_age
2023/01/06 23:19:07 Created the secondary index index_age. Waiting for it to become active
2023/01/06 23:19:07 Index is 197849648016602278 now active
2023/01/06 23:19:14 Created the secondary index index_gender. Waiting for it to become active
2023/01/06 23:19:14 Index is 10596328466606079366 now active
2023/01/06 23:19:22 Created the secondary index index_city. Waiting for it to become active
2023/01/06 23:19:22 Index is 531102960427799648 now active
2023/01/06 23:19:29 Created the secondary index p1. Waiting for it to become active
2023/01/06 23:19:29 Index is 11171195094701054939 now active
2023/01/06 23:19:29 === Testing for persistence enabled = true, with interval = 5  ===
2023/01/06 23:19:29 Changing config key indexer.statsPersistenceInterval to value 5
2023/01/06 23:19:29 []
2023-01-06T23:19:29.325+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:19:29.326+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:19:29.326+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:19:29.326+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:19:34 Using n1ql client
2023/01/06 23:19:34 Using n1ql client
2023/01/06 23:19:34 Using n1ql client
2023/01/06 23:19:41 []
2023/01/06 23:19:46 === Testing for persistence enabled = false, with interval = 0  ===
2023/01/06 23:19:46 Changing config key indexer.statsPersistenceInterval to value 0
2023/01/06 23:19:46 []
2023/01/06 23:19:51 Using n1ql client
2023-01-06T23:19:51.851+05:30 [Error] transport error between 127.0.0.1:37998->127.0.0.1:9107: write tcp 127.0.0.1:37998->127.0.0.1:9107: write: broken pipe
2023-01-06T23:19:51.851+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 4913459091122460287 request transport failed `write tcp 127.0.0.1:37998->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:19:51.851+05:30 [Error] metadataClient:PickRandom: Failed to find indexer for all index partitions. Num partition 1.  Partition with instances 0
2023-01-06T23:19:51.851+05:30 [Error] metadataClient:PickRandom: Replicas - [2437995983460277660], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 23:19:51 Using n1ql client
2023/01/06 23:19:51 Using n1ql client
2023/01/06 23:19:54 []
2023/01/06 23:19:59 === Testing for persistence enabled = true, with interval = 10  ===
2023/01/06 23:19:59 Changing config key indexer.statsPersistenceInterval to value 10
2023/01/06 23:19:59 []
2023-01-06T23:19:59.404+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:19:59.404+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:19:59.404+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:19:59.404+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/06 23:20:04 Using n1ql client
2023-01-06T23:20:04.377+05:30 [Error] transport error between 127.0.0.1:38912->127.0.0.1:9107: write tcp 127.0.0.1:38912->127.0.0.1:9107: write: broken pipe
2023-01-06T23:20:04.377+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 7534665128764342053 request transport failed `write tcp 127.0.0.1:38912->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:20:04.377+05:30 [Error] metadataClient:PickRandom: Failed to find indexer for all index partitions. Num partition 1.  Partition with instances 0
2023-01-06T23:20:04.377+05:30 [Error] metadataClient:PickRandom: Replicas - [2437995983460277660], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 23:20:04 Using n1ql client
2023/01/06 23:20:04 Using n1ql client
2023/01/06 23:20:16 []
2023/01/06 23:20:21 === Testing for persistence enabled = true, with interval = 5  ===
2023/01/06 23:20:21 Changing config key indexer.statsPersistenceInterval to value 5
2023/01/06 23:20:21 []
2023/01/06 23:20:26 Using n1ql client
2023-01-06T23:20:26.996+05:30 [Error] transport error between 127.0.0.1:39642->127.0.0.1:9107: write tcp 127.0.0.1:39642->127.0.0.1:9107: write: broken pipe
2023-01-06T23:20:26.996+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] 6089344158678010306 request transport failed `write tcp 127.0.0.1:39642->127.0.0.1:9107: write: broken pipe`
2023-01-06T23:20:26.997+05:30 [Error] metadataClient:PickRandom: Failed to find indexer for all index partitions. Num partition 1.  Partition with instances 0
2023-01-06T23:20:26.997+05:30 [Error] metadataClient:PickRandom: Replicas - [2437995983460277660], PrunedReplica - map[], FilteredReplica map[]
2023/01/06 23:20:27 Using n1ql client
2023/01/06 23:20:27 Using n1ql client
2023/01/06 23:20:34 []
2023-01-06T23:20:34.274+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:20:34.274+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-06T23:20:34.275+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = EOF. Kill Pipe.
2023-01-06T23:20:34.275+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
--- PASS: TestStatsPersistence (107.62s)
=== RUN   TestStats_StorageStatistics
2023/01/06 23:20:39 In TestStats_StorageStatistics()
2023/01/06 23:20:39 Index found:  index_age
2023/01/06 23:20:39 Stats from Index4 StorageStatistics for index index_age are [map[AVG_ITEM_SIZE:98 NUM_ITEMS:11502 PARTITION_ID:0]]
2023/01/06 23:20:39 Index found:  p1
2023/01/06 23:20:39 Stats from Index4 StorageStatistics for index p1 are [map[AVG_ITEM_SIZE:86 NUM_ITEMS:11502 PARTITION_ID:0]]
--- PASS: TestStats_StorageStatistics (0.21s)
PASS
ok  	github.com/couchbase/indexing/secondary/tests/functionaltests	8245.464s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_functests_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 65692    0 65692    0     0  9286k      0 --:--:-- --:--:-- --:--:-- 10.4M
2023/01/06 23:20:42 In TestMain()
2023/01/06 23:20:43 Changing config key indexer.api.enableTestServer to value true
2023/01/06 23:20:43 Using memory_optimized for creating indexes
2023/01/06 23:20:43 Changing config key indexer.settings.storage_mode to value memory_optimized
=== RUN   TestRangeWithConcurrentAddMuts
2023/01/06 23:20:48 In TestRangeWithConcurrentAddMuts()
2023/01/06 23:20:48 In DropAllSecondaryIndexes()
2023/01/06 23:20:48 Index found:  index_gender
2023/01/06 23:20:48 Dropped index index_gender
2023/01/06 23:20:48 Index found:  p1
2023/01/06 23:20:48 Dropped index p1
2023/01/06 23:20:48 Index found:  index_age
2023/01/06 23:20:48 Dropped index index_age
2023/01/06 23:20:48 Index found:  index_city
2023/01/06 23:20:48 Dropped index index_city
2023/01/06 23:20:48 Generating JSON docs
2023/01/06 23:20:48 Setting initial JSON docs in KV
2023/01/06 23:20:49 All indexers are active
2023/01/06 23:20:49 Creating a 2i
2023/01/06 23:20:52 Created the secondary index index_company. Waiting for it to become active
2023/01/06 23:20:52 Index is 8608909638592262735 now active
2023/01/06 23:20:52 In Range Scan for Thread 1: 
2023/01/06 23:20:52 CreateDocs:: Creating mutations
2023/01/06 23:20:52 ListAllSecondaryIndexes() for Thread 1: : Index index_company Bucket default
--- PASS: TestRangeWithConcurrentAddMuts (124.65s)
=== RUN   TestRangeWithConcurrentDelMuts
2023/01/06 23:22:52 In TestRangeWithConcurrentDelMuts()
2023/01/06 23:22:52 Generating JSON docs
2023-01-06T23:22:52.833+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:43934->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 23:22:54 Setting initial JSON docs in KV
2023/01/06 23:23:12 All indexers are active
2023/01/06 23:23:12 Creating a 2i
2023/01/06 23:23:12 Index found:  index_company
2023/01/06 23:23:12 In Range Scan for Thread 1: 
2023/01/06 23:23:12 CreateDocs:: Delete mutations
2023/01/06 23:23:12 ListAllSecondaryIndexes() for Thread 1: : Index index_company Bucket default
--- PASS: TestRangeWithConcurrentDelMuts (139.76s)
=== RUN   TestScanWithConcurrentIndexOps
2023/01/06 23:25:12 In TestScanWithConcurrentIndexOps()
2023-01-06T23:25:12.589+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:46064->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 23:25:12 Generating JSON docs
2023/01/06 23:25:18 Setting initial JSON docs in KV
2023/01/06 23:26:24 All indexers are active
2023/01/06 23:26:24 Creating a 2i
2023/01/06 23:26:24 Index found:  index_company
2023/01/06 23:26:24 In Range Scan for Thread 1: 
2023/01/06 23:26:24 Create and Drop index operations
2023/01/06 23:26:24 ListAllSecondaryIndexes() for Thread 1: : Index index_company Bucket default
2023/01/06 23:26:42 Created the secondary index index_age. Waiting for it to become active
2023/01/06 23:26:42 Index is 991479731010585310 now active
2023/01/06 23:27:01 Created the secondary index index_firstname. Waiting for it to become active
2023/01/06 23:27:01 Index is 15761326803078402798 now active
2023/01/06 23:27:03 Dropping the secondary index index_age
2023/01/06 23:27:03 Index dropped
2023/01/06 23:27:04 Dropping the secondary index index_firstname
2023/01/06 23:27:04 Index dropped
2023/01/06 23:27:23 Created the secondary index index_age. Waiting for it to become active
2023/01/06 23:27:23 Index is 12892881284284649325 now active
2023/01/06 23:27:43 Created the secondary index index_firstname. Waiting for it to become active
2023/01/06 23:27:43 Index is 3794597315328795057 now active
2023/01/06 23:27:46 Dropping the secondary index index_age
2023/01/06 23:27:46 Index dropped
2023/01/06 23:27:47 Dropping the secondary index index_firstname
2023/01/06 23:27:47 Index dropped
2023/01/06 23:28:05 Created the secondary index index_age. Waiting for it to become active
2023/01/06 23:28:05 Index is 16992260352931421403 now active
2023-01-06T23:28:25.045+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:49948->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 23:28:25 Created the secondary index index_firstname. Waiting for it to become active
2023/01/06 23:28:25 Index is 3837736739789732616 now active
2023/01/06 23:28:26 Dropping the secondary index index_age
2023/01/06 23:28:26 Index dropped
2023/01/06 23:28:27 Dropping the secondary index index_firstname
2023/01/06 23:28:27 Index dropped
--- PASS: TestScanWithConcurrentIndexOps (196.22s)
=== RUN   TestConcurrentScans_SameIndex
2023/01/06 23:28:28 In TestConcurrentScans_SameIndex()
2023/01/06 23:28:28 Generating JSON docs
2023-01-06T23:28:28.811+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:49950->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 23:28:34 Setting initial JSON docs in KV
2023/01/06 23:29:38 All indexers are active
2023/01/06 23:29:38 Creating a 2i
2023/01/06 23:29:38 Index found:  index_company
2023/01/06 23:29:38 In Range Scan for Thread 6: 
2023/01/06 23:29:38 In Range Scan for Thread 3: 
2023/01/06 23:29:38 In Range Scan for Thread 4: 
2023/01/06 23:29:38 In Range Scan for Thread 2: 
2023/01/06 23:29:38 In Range Scan for Thread 1: 
2023/01/06 23:29:38 In Range Scan for Thread 5: 
2023/01/06 23:29:38 ListAllSecondaryIndexes() for Thread 5: : Index index_company Bucket default
2023/01/06 23:29:38 ListAllSecondaryIndexes() for Thread 3: : Index index_company Bucket default
2023/01/06 23:29:38 ListAllSecondaryIndexes() for Thread 4: : Index index_company Bucket default
2023/01/06 23:29:38 ListAllSecondaryIndexes() for Thread 6: : Index index_company Bucket default
2023/01/06 23:29:38 ListAllSecondaryIndexes() for Thread 2: : Index index_company Bucket default
2023/01/06 23:29:38 ListAllSecondaryIndexes() for Thread 1: : Index index_company Bucket default
2023-01-06T23:31:38.434+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51630->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:31:38.491+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51632->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:31:39.010+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51628->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:31:39.185+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51638->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:31:39.750+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51634->127.0.0.1:9106: use of closed network connection. Kill Pipe.
--- PASS: TestConcurrentScans_SameIndex (190.97s)
=== RUN   TestConcurrentScans_MultipleIndexes
2023/01/06 23:31:39 In TestConcurrentScans_MultipleIndexes()
2023/01/06 23:31:39 Generating JSON docs
2023-01-06T23:31:39.785+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:51636->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 23:31:46 Setting initial JSON docs in KV
2023/01/06 23:32:54 All indexers are active
2023/01/06 23:32:54 Creating multiple indexes
2023/01/06 23:32:54 Index found:  index_company
2023/01/06 23:33:16 Created the secondary index index_age. Waiting for it to become active
2023/01/06 23:33:16 Index is 16287642465059918158 now active
2023/01/06 23:33:38 Created the secondary index index_firstname. Waiting for it to become active
2023/01/06 23:33:38 Index is 4101073744629448536 now active
2023/01/06 23:33:38 In Range Scan for Thread 3: 
2023/01/06 23:33:38 In Range Scan for Thread 1: 
2023/01/06 23:33:38 In Range Scan
2023/01/06 23:33:38 ListAllSecondaryIndexes() for Thread 1: : Index index_firstname Bucket default
2023/01/06 23:33:38 ListAllSecondaryIndexes() for Thread 1: : Index index_company Bucket default
2023/01/06 23:33:38 ListAllSecondaryIndexes() for Thread 1: : Index index_age Bucket default
2023/01/06 23:33:39 ListAllSecondaryIndexes() for Thread 3: : Index index_age Bucket default
2023/01/06 23:33:39 ListAllSecondaryIndexes() for Thread 3: : Index index_firstname Bucket default
2023/01/06 23:33:39 ListAllSecondaryIndexes() for Thread 3: : Index index_company Bucket default
2023-01-06T23:35:39.987+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:53556->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:35:40.416+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:53552->127.0.0.1:9106: use of closed network connection. Kill Pipe.
--- PASS: TestConcurrentScans_MultipleIndexes (240.69s)
=== RUN   TestMutationsWithMultipleIndexBuilds
2023/01/06 23:35:40 In TestMutationsWithMultipleIndexBuilds()
2023/01/06 23:35:40 In DropAllSecondaryIndexes()
2023/01/06 23:35:40 Index found:  index_company
2023-01-06T23:35:40.480+05:30 [Error] PeerPipe.doRecieve() : encountered error while receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:53554->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023/01/06 23:35:40 Dropped index index_company
2023/01/06 23:35:40 Index found:  index_age
2023/01/06 23:35:40 Dropped index index_age
2023/01/06 23:35:40 Index found:  index_firstname
2023/01/06 23:35:40 Dropped index index_firstname
2023/01/06 23:35:40 Generating JSON docs
2023/01/06 23:35:49 Setting initial JSON docs in KV
2023/01/06 23:36:58 Created the secondary index index_primary. Waiting for it to become active
2023/01/06 23:36:58 Index is 4075860103949379112 now active
2023/01/06 23:36:58 Creating multiple indexes in deferred mode
2023/01/06 23:36:58 Build Indexes and wait for indexes to become active: [index_company index_age index_firstname index_lastname]
2023/01/06 23:36:58 Build command issued for the deferred indexes [index_company index_age index_firstname index_lastname], bucket: default, scope: _default, coll: _default
2023/01/06 23:36:58 Waiting for the index index_company to become active
2023/01/06 23:36:58 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:36:59 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:00 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:01 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:02 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:03 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:04 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:05 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:06 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:07 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:08 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:09 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:10 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:11 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:12 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:13 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:14 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:15 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:16 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:17 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:18 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:19 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:20 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:21 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:22 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:23 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:24 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:25 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:26 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:27 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:28 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:29 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:30 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:31 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:32 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:33 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:34 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:35 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:36 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:37 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:38 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:39 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:40 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:41 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:42 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:43 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:44 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:45 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:46 Waiting for index 3958240337914903787 to go active ...
2023/01/06 23:37:47 Index is 3958240337914903787 now active
2023/01/06 23:37:47 Waiting for the index index_age to become active
2023/01/06 23:37:47 Index is 7984252896438711515 now active
2023/01/06 23:37:47 Waiting for the index index_firstname to become active
2023/01/06 23:37:47 Index is 3774719183371550759 now active
2023/01/06 23:37:47 Waiting for the index index_lastname to become active
2023/01/06 23:37:47 Index is 1146082802339050375 now active
--- PASS: TestMutationsWithMultipleIndexBuilds (127.19s)
PASS
ok  	github.com/couchbase/indexing/secondary/tests/largedatatests	1025.056s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_largedata_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 64009    0 64009    0     0  8978k      0 --:--:-- --:--:-- --:--:-- 10.1M

Serverless tests

Starting server: attempt 1
2023/01/06 23:40:50 In TestMain()
2023/01/06 23:40:50 otp node fetch error: json: cannot unmarshal string into Go value of type couchbase.Pool
2023/01/06 23:40:50 Initialising services with role: kv,n1ql on node: 127.0.0.1:9000
2023/01/06 23:40:51 Initialising web UI on node: 127.0.0.1:9000
2023/01/06 23:40:51 InitWebCreds, response is: {"newBaseUri":"http://127.0.0.1:9000/"}
2023/01/06 23:40:51 Setting data quota of 1500M and Index quota of 1500M
2023/01/06 23:40:52 Adding serverGroup: Group 2 via server: 127.0.0.1:9000
2023/01/06 23:40:52 AddServerGroup: Successfully added serverGroup 127.0.0.1:9000, server: Group 2, response: []
2023/01/06 23:40:52 Adding node: https://127.0.0.1:19001 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/06 23:41:06 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9001 (role index, serverGroup: Group 2), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 23:41:06 Adding node: https://127.0.0.1:19002 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/06 23:41:13 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9002 (role index, serverGroup: Group 1), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/06 23:41:18 Rebalance progress: 0
2023/01/06 23:41:23 Rebalance progress: 100
2023/01/06 23:41:29 Created bucket default, responseBody: 
2023/01/06 23:41:34 Cluster status: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 23:41:34 Successfully initialised cluster
2023/01/06 23:41:34 Cluster status: map[127.0.0.1:9001:[index] 127.0.0.1:9002:[index] 172.31.5.112:9000:[kv n1ql]]
2023/01/06 23:41:34 Changing config key queryport.client.settings.backfillLimit to value 0
2023/01/06 23:41:34 Changing config key queryport.client.log_level to value Warn
2023/01/06 23:41:34 Changing config key indexer.api.enableTestServer to value true
2023/01/06 23:41:34 Changing config key indexer.settings.persisted_snapshot_init_build.moi.interval to value 60000
2023/01/06 23:41:34 Changing config key indexer.settings.persisted_snapshot.moi.interval to value 60000
2023/01/06 23:41:35 Changing config key indexer.settings.log_level to value info
2023/01/06 23:41:35 Changing config key indexer.settings.storage_mode.disable_upgrade to value true
2023/01/06 23:41:35 Changing config key indexer.settings.rebalance.blob_storage_scheme to value 
2023/01/06 23:41:35 Changing config key indexer.plasma.serverless.shardCopy.dbg to value true
2023/01/06 23:41:35 Changing config key indexer.rebalance.serverless.transferBatchSize to value 2
2023/01/06 23:41:35 Changing config key indexer.client_stats_refresh_interval to value 1000
2023/01/06 23:41:35 Using plasma for creating indexes
2023/01/06 23:41:35 Changing config key indexer.settings.storage_mode to value plasma
2023/01/06 23:41:40 Data file exists. Skipping download
2023/01/06 23:41:40 Data file exists. Skipping download
2023/01/06 23:41:42 In DropAllSecondaryIndexes()
2023/01/06 23:41:42 Emptying the default bucket
2023/01/06 23:41:43 Deleted bucket default, responseBody: 
2023/01/06 23:41:43 http://127.0.0.1:9000/pools/default/buckets/bucket_1
2023/01/06 23:41:43 &{DELETE http://127.0.0.1:9000/pools/default/buckets/bucket_1 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc000138000}
2023/01/06 23:41:43 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 18:11:42 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc003ffa900 31 [] false false map[] 0xc0004f0100 }
2023/01/06 23:41:43 DeleteBucket failed for bucket bucket_1 
2023/01/06 23:41:43 Deleted bucket bucket_1, responseBody: Requested resource not found.
2023/01/06 23:41:43 http://127.0.0.1:9000/pools/default/buckets/bucket_%!(NOVERB)
2023/01/06 23:41:43 &{DELETE http://127.0.0.1:9000/pools/default/buckets/bucket_%252 HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9000 map[] map[]  map[]      0xc000138000}
2023/01/06 23:41:43 &{404 Object Not Found 404 HTTP/1.1 1 1 map[Cache-Control:[no-cache,no-store,must-revalidate] Content-Length:[31] Content-Type:[text/plain] Date:[Fri, 06 Jan 2023 18:11:42 GMT] Expires:[Thu, 01 Jan 1970 00:00:00 GMT] Pragma:[no-cache] Server:[Couchbase Server] X-Content-Type-Options:[nosniff] X-Frame-Options:[DENY] X-Permitted-Cross-Domain-Policies:[none] X-Xss-Protection:[1; mode=block]] 0xc003e92880 31 [] false false map[] 0xc0000a2200 }
2023/01/06 23:41:43 DeleteBucket failed for bucket bucket_%2 
2023/01/06 23:41:43 Deleted bucket bucket_%2, responseBody: Requested resource not found.
2023/01/06 23:41:58 cleanupStorageDir: Cleaning up /opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/serverlesstests/shard_rebalance_storage_dir
=== RUN   TestIndexPlacement
2023/01/06 23:41:58 Created bucket bucket_1, responseBody: 
2023/01/06 23:41:58 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9000
2023/01/06 23:41:59 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9001
2023/01/06 23:41:59 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9002
2023/01/06 23:41:59 Created collection succeeded for bucket: bucket_1, scope: _default, collection: c1, body: {"uid":"2"}
2023/01/06 23:41:59 TestIndexPlacement: Manifest for bucket: bucket_1, scope: _default, collection: c1 is: map[uid:2]
2023/01/06 23:41:59 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:41:59 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:41:59 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:42:00 Received OK response from ensureManifest, bucket: bucket_1, uid: 2
2023/01/06 23:42:03 Executed N1ql statement: create index idx_1 on `bucket_1`.`_default`.`c1`(company)
2023/01/06 23:42:07 Index status is: Ready for index: idx_1
2023/01/06 23:42:08 Index status is: Ready for index: idx_1 (replica 1)
2023/01/06 23:42:10 scanIndexReplicas: Scanning all for index: idx_1, bucket: bucket_1, scope: _default, collection: c1
2023-01-06T23:42:10.560+05:30 [Info] creating GsiClient for 127.0.0.1:9000
2023/01/06 23:42:13 Deleted bucket bucket_1, responseBody: 
--- PASS: TestIndexPlacement (29.88s)
=== RUN   TestShardIdMapping
2023/01/06 23:42:28 Created bucket bucket_1, responseBody: 
2023/01/06 23:42:28 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9000
2023/01/06 23:42:29 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9001
2023/01/06 23:42:29 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9002
2023/01/06 23:42:32 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`_default`(age)
2023/01/06 23:42:38 Index status is: Ready for index: idx_secondary
2023/01/06 23:42:38 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:42:38 Executed N1ql statement: create index idx_secondary_defer on `bucket_1`.`_default`.`_default`(company) with {"defer_build":true}
2023/01/06 23:42:43 Index status is: Created for index: idx_secondary_defer
2023/01/06 23:42:43 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/06 23:42:46 Executed N1ql statement: create primary index `#primary` on `bucket_1`.`_default`.`_default`
2023/01/06 23:42:48 Index status is: Ready for index: #primary
2023/01/06 23:42:48 Index status is: Ready for index: #primary (replica 1)
2023/01/06 23:42:48 Executed N1ql statement: create primary index `#primary_defer` on `bucket_1`.`_default`.`_default` with {"defer_build":true}
2023/01/06 23:42:52 Index status is: Created for index: #primary_defer
2023/01/06 23:42:53 Index status is: Created for index: #primary_defer (replica 1)
2023/01/06 23:42:55 Executed N1ql statement: create index idx_partitioned on `bucket_1`.`_default`.`_default`(emalid) partition by hash(meta().id)
2023/01/06 23:43:03 Index status is: Ready for index: idx_partitioned
2023/01/06 23:43:03 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/06 23:43:03 Executed N1ql statement: create index idx_partitioned_defer on `bucket_1`.`_default`.`_default`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/06 23:43:08 Index status is: Created for index: idx_partitioned_defer
2023/01/06 23:43:08 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/06 23:43:08 Created collection succeeded for bucket: bucket_1, scope: _default, collection: c1, body: {"uid":"2"}
2023/01/06 23:43:08 TestIndexPlacement: Manifest for bucket: bucket_1, scope: _default, collection: c1 is: map[uid:2]
2023/01/06 23:43:08 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:43:08 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:43:08 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:43:08 Received OK response from ensureManifest, bucket: bucket_1, uid: 2
2023/01/06 23:43:12 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/06 23:43:17 Index status is: Ready for index: idx_secondary
2023/01/06 23:43:18 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:43:18 Executed N1ql statement: create index idx_secondary_defer on `bucket_1`.`_default`.`c1`(company) with {"defer_build":true}
2023/01/06 23:43:23 Index status is: Created for index: idx_secondary_defer
2023/01/06 23:43:23 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/06 23:43:25 Executed N1ql statement: create primary index `#primary` on `bucket_1`.`_default`.`c1`
2023/01/06 23:43:27 Index status is: Ready for index: #primary
2023/01/06 23:43:27 Index status is: Ready for index: #primary (replica 1)
2023/01/06 23:43:28 Executed N1ql statement: create primary index `#primary_defer` on `bucket_1`.`_default`.`c1` with {"defer_build":true}
2023/01/06 23:43:33 Index status is: Created for index: #primary_defer
2023/01/06 23:43:33 Index status is: Created for index: #primary_defer (replica 1)
2023/01/06 23:43:36 Executed N1ql statement: create index idx_partitioned on `bucket_1`.`_default`.`c1`(emalid) partition by hash(meta().id)
2023/01/06 23:43:43 Index status is: Ready for index: idx_partitioned
2023/01/06 23:43:43 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/06 23:43:44 Executed N1ql statement: create index idx_partitioned_defer on `bucket_1`.`_default`.`c1`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/06 23:43:53 Index status is: Created for index: idx_partitioned_defer
2023/01/06 23:43:53 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/06 23:43:53 Created collection succeeded for bucket: bucket_1, scope: _default, collection: c2%, body: {"uid":"3"}
2023/01/06 23:43:53 TestIndexPlacement: Manifest for bucket: bucket_1, scope: _default, collection: c2% is: map[uid:3]
2023/01/06 23:43:53 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:43:54 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:43:54 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:43:54 Received OK response from ensureManifest, bucket: bucket_1, uid: 3
2023/01/06 23:43:58 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c2%`(age)
2023/01/06 23:44:03 Index status is: Ready for index: idx_secondary
2023/01/06 23:44:03 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:44:03 Executed N1ql statement: create index idx_secondary_defer on `bucket_1`.`_default`.`c2%`(company) with {"defer_build":true}
2023/01/06 23:44:08 Index status is: Created for index: idx_secondary_defer
2023/01/06 23:44:08 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/06 23:44:10 Executed N1ql statement: create primary index `#primary` on `bucket_1`.`_default`.`c2%`
2023/01/06 23:44:17 Index status is: Ready for index: #primary
2023/01/06 23:44:18 Index status is: Ready for index: #primary (replica 1)
2023/01/06 23:44:18 Executed N1ql statement: create primary index `#primary_defer` on `bucket_1`.`_default`.`c2%` with {"defer_build":true}
2023/01/06 23:44:23 Index status is: Created for index: #primary_defer
2023/01/06 23:44:23 Index status is: Created for index: #primary_defer (replica 1)
2023/01/06 23:44:26 Executed N1ql statement: create index idx_partitioned on `bucket_1`.`_default`.`c2%`(emalid) partition by hash(meta().id)
2023/01/06 23:44:28 Index status is: Ready for index: idx_partitioned
2023/01/06 23:44:28 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/06 23:44:29 Executed N1ql statement: create index idx_partitioned_defer on `bucket_1`.`_default`.`c2%`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/06 23:44:33 Index status is: Created for index: idx_partitioned_defer
2023/01/06 23:44:33 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/06 23:44:33 Created bucket bucket_%2, responseBody: 
2023/01/06 23:44:33 WaitForBucketCreation: Checking bucket(bucket_%2) creation for host: 127.0.0.1:9000
2023/01/06 23:44:34 WaitForBucketCreation: Checking bucket(bucket_%2) creation for host: 127.0.0.1:9001
2023/01/06 23:44:34 WaitForBucketCreation: Checking bucket(bucket_%2) creation for host: 127.0.0.1:9002
2023/01/06 23:44:38 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`_default`(age)
2023/01/06 23:44:43 Index status is: Ready for index: idx_secondary
2023/01/06 23:44:44 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:44:44 Executed N1ql statement: create index idx_secondary_defer on `bucket_%2`.`_default`.`_default`(company) with {"defer_build":true}
2023/01/06 23:44:48 Index status is: Created for index: idx_secondary_defer
2023/01/06 23:44:48 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/06 23:44:51 Executed N1ql statement: create primary index `#primary` on `bucket_%2`.`_default`.`_default`
2023/01/06 23:44:53 Index status is: Ready for index: #primary
2023/01/06 23:44:53 Index status is: Ready for index: #primary (replica 1)
2023/01/06 23:44:53 Executed N1ql statement: create primary index `#primary_defer` on `bucket_%2`.`_default`.`_default` with {"defer_build":true}
2023/01/06 23:44:58 Index status is: Created for index: #primary_defer
2023/01/06 23:44:58 Index status is: Created for index: #primary_defer (replica 1)
2023/01/06 23:45:01 Executed N1ql statement: create index idx_partitioned on `bucket_%2`.`_default`.`_default`(emalid) partition by hash(meta().id)
2023/01/06 23:45:08 Index status is: Ready for index: idx_partitioned
2023/01/06 23:45:08 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/06 23:45:09 Executed N1ql statement: create index idx_partitioned_defer on `bucket_%2`.`_default`.`_default`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/06 23:45:18 Index status is: Created for index: idx_partitioned_defer
2023/01/06 23:45:18 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/06 23:45:18 Created collection succeeded for bucket: bucket_%2, scope: _default, collection: c1, body: {"uid":"2"}
2023/01/06 23:45:18 TestIndexPlacement: Manifest for bucket: bucket_%2, scope: _default, collection: c1 is: map[uid:2]
2023/01/06 23:45:18 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:45:18 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:45:18 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:45:18 Received OK response from ensureManifest, bucket: bucket_%2, uid: 2
2023/01/06 23:45:22 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/06 23:45:28 Index status is: Ready for index: idx_secondary
2023/01/06 23:45:28 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:45:28 Executed N1ql statement: create index idx_secondary_defer on `bucket_%2`.`_default`.`c1`(company) with {"defer_build":true}
2023/01/06 23:45:33 Index status is: Created for index: idx_secondary_defer
2023/01/06 23:45:33 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/06 23:45:35 Executed N1ql statement: create primary index `#primary` on `bucket_%2`.`_default`.`c1`
2023/01/06 23:45:43 Index status is: Ready for index: #primary
2023/01/06 23:45:43 Index status is: Ready for index: #primary (replica 1)
2023/01/06 23:45:43 Executed N1ql statement: create primary index `#primary_defer` on `bucket_%2`.`_default`.`c1` with {"defer_build":true}
2023/01/06 23:45:48 Index status is: Created for index: #primary_defer
2023/01/06 23:45:48 Index status is: Created for index: #primary_defer (replica 1)
2023/01/06 23:45:50 Executed N1ql statement: create index idx_partitioned on `bucket_%2`.`_default`.`c1`(emalid) partition by hash(meta().id)
2023/01/06 23:45:53 Index status is: Ready for index: idx_partitioned
2023/01/06 23:45:53 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/06 23:45:53 Executed N1ql statement: create index idx_partitioned_defer on `bucket_%2`.`_default`.`c1`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/06 23:45:58 Index status is: Created for index: idx_partitioned_defer
2023/01/06 23:45:58 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/06 23:45:58 Created collection succeeded for bucket: bucket_%2, scope: _default, collection: c2%, body: {"uid":"3"}
2023/01/06 23:45:58 TestIndexPlacement: Manifest for bucket: bucket_%2, scope: _default, collection: c2% is: map[uid:3]
2023/01/06 23:45:58 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:45:58 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:45:58 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:45:58 Received OK response from ensureManifest, bucket: bucket_%2, uid: 3
2023/01/06 23:46:03 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c2%`(age)
2023/01/06 23:46:08 Index status is: Ready for index: idx_secondary
2023/01/06 23:46:08 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:46:09 Executed N1ql statement: create index idx_secondary_defer on `bucket_%2`.`_default`.`c2%`(company) with {"defer_build":true}
2023/01/06 23:46:13 Index status is: Created for index: idx_secondary_defer
2023/01/06 23:46:13 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/06 23:46:16 Executed N1ql statement: create primary index `#primary` on `bucket_%2`.`_default`.`c2%`
2023/01/06 23:46:23 Index status is: Ready for index: #primary
2023/01/06 23:46:23 Index status is: Ready for index: #primary (replica 1)
2023/01/06 23:46:24 Executed N1ql statement: create primary index `#primary_defer` on `bucket_%2`.`_default`.`c2%` with {"defer_build":true}
2023/01/06 23:46:28 Index status is: Created for index: #primary_defer
2023/01/06 23:46:28 Index status is: Created for index: #primary_defer (replica 1)
2023/01/06 23:46:31 Executed N1ql statement: create index idx_partitioned on `bucket_%2`.`_default`.`c2%`(emalid) partition by hash(meta().id)
2023/01/06 23:46:38 Index status is: Ready for index: idx_partitioned
2023/01/06 23:46:38 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/06 23:46:39 Executed N1ql statement: create index idx_partitioned_defer on `bucket_%2`.`_default`.`c2%`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/06 23:46:43 Index status is: Created for index: idx_partitioned_defer
2023/01/06 23:46:43 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/06 23:46:46 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023-01-06T23:46:46.350+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:46:46.351+05:30 [Info] GSIC[default/bucket_1-_default-_default-1673029006347273213] started ...
2023/01/06 23:46:47 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:46:49 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:46:50 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:46:51 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:46:53 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:46:54 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023-01-06T23:46:54.233+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:46:54.235+05:30 [Info] GSIC[default/bucket_1-_default-c2%-1673029014226734033] started ...
2023/01/06 23:46:55 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:46:57 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:46:58 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023-01-06T23:46:58.178+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:46:58.179+05:30 [Info] GSIC[default/bucket_%2-_default-_default-1673029018176034488] started ...
2023/01/06 23:46:59 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:47:01 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:47:02 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023-01-06T23:47:02.492+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:47:02.493+05:30 [Info] GSIC[default/bucket_%2-_default-c1-1673029022487204638] started ...
2023/01/06 23:47:03 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:47:05 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:47:06 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023-01-06T23:47:06.708+05:30 [Info] GsiClient::UpdateUsecjson: using collatejson as data format between indexer and GsiClient
2023-01-06T23:47:06.709+05:30 [Info] GSIC[default/bucket_%2-_default-c2%-1673029026698392430] started ...
2023/01/06 23:47:07 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:47:09 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
--- PASS: TestShardIdMapping (282.19s)
=== RUN   TestShardRebalanceSetup
2023/01/06 23:47:10 In TestShardRebalanceSetup
2023/01/06 23:47:10 TestShardRebalanceSetup: Using  as storage dir for rebalance
2023/01/06 23:47:10 Changing config key indexer.settings.rebalance.blob_storage_bucket to value /opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/serverlesstests
2023/01/06 23:47:10 Changing config key indexer.settings.rebalance.blob_storage_prefix to value shard_rebalance_storage_dir
--- PASS: TestShardRebalanceSetup (0.27s)
=== RUN   TestTwoNodeSwapRebalance
2023/01/06 23:47:10 In TestTwoNodeSwapRebalance
2023/01/06 23:47:10 Adding node: https://127.0.0.1:19003 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/06 23:47:22 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9003 (role index, serverGroup: Group 2), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/06 23:47:22 Adding node: https://127.0.0.1:19004 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/06 23:47:34 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9004 (role index, serverGroup: Group 1), response: {"otpNode":"n_4@127.0.0.1"}
2023/01/06 23:47:34 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002] from the cluster
2023/01/06 23:47:39 Rebalance progress: 10
2023/01/06 23:47:44 Rebalance progress: 10
2023/01/06 23:47:49 Rebalance progress: 18
2023/01/06 23:47:54 Rebalance progress: 38.11684210526316
2023/01/06 23:47:59 Rebalance progress: 38.12
2023/01/06 23:48:04 Rebalance progress: 38.12
2023/01/06 23:48:09 Rebalance progress: 38.12
2023/01/06 23:48:14 Rebalance progress: 38.12
2023/01/06 23:48:19 Rebalance progress: 38.12
2023/01/06 23:48:24 Rebalance progress: 38.12
2023/01/06 23:48:29 Rebalance progress: 50
2023/01/06 23:48:34 Rebalance progress: 78.11842105263159
2023/01/06 23:48:39 Rebalance progress: 78.12
2023/01/06 23:48:44 Rebalance progress: 78.11842105263159
2023-01-06T23:48:47.690+05:30 [Error] receiving packet: read tcp 127.0.0.1:58550->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.691+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8032648886162803340) connection "127.0.0.1:58550" response transport failed `read tcp 127.0.0.1:58550->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.692+05:30 [Error] receiving packet: read tcp 127.0.0.1:57868->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.692+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8032648886162803340) connection "127.0.0.1:57868" response transport failed `read tcp 127.0.0.1:57868->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.696+05:30 [Error] receiving packet: read tcp 127.0.0.1:38064->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.696+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8032648886162803340) connection "127.0.0.1:38064" response transport failed `read tcp 127.0.0.1:38064->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.707+05:30 [Error] receiving packet: read tcp 127.0.0.1:37382->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.707+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2367533499215032937) connection "127.0.0.1:37382" response transport failed `read tcp 127.0.0.1:37382->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.712+05:30 [Error] receiving packet: read tcp 127.0.0.1:37384->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.715+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2367533499215032937) connection "127.0.0.1:37384" response transport failed `read tcp 127.0.0.1:37384->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.739+05:30 [Error] receiving packet: read tcp 127.0.0.1:38074->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.739+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3018825974061535849) connection "127.0.0.1:38074" response transport failed `read tcp 127.0.0.1:38074->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.742+05:30 [Error] receiving packet: read tcp 127.0.0.1:38076->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.742+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3018825974061535849) connection "127.0.0.1:38076" response transport failed `read tcp 127.0.0.1:38076->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.759+05:30 [Error] receiving packet: read tcp 127.0.0.1:38078->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.759+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(500408384071299727) connection "127.0.0.1:38078" response transport failed `read tcp 127.0.0.1:38078->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.779+05:30 [Error] receiving packet: read tcp 127.0.0.1:38080->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.779+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2087416526082373633) connection "127.0.0.1:38080" response transport failed `read tcp 127.0.0.1:38080->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.808+05:30 [Error] receiving packet: read tcp 127.0.0.1:38082->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.808+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4804115805768375780) connection "127.0.0.1:38082" response transport failed `read tcp 127.0.0.1:38082->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.812+05:30 [Error] receiving packet: read tcp 127.0.0.1:37388->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.812+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4804115805768375780) connection "127.0.0.1:37388" response transport failed `read tcp 127.0.0.1:37388->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.819+05:30 [Error] receiving packet: read tcp 127.0.0.1:38084->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.819+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4804115805768375780) connection "127.0.0.1:38084" response transport failed `read tcp 127.0.0.1:38084->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.824+05:30 [Error] receiving packet: read tcp 127.0.0.1:38086->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.824+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4804115805768375780) connection "127.0.0.1:38086" response transport failed `read tcp 127.0.0.1:38086->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.827+05:30 [Error] receiving packet: read tcp 127.0.0.1:38088->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.827+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4804115805768375780) connection "127.0.0.1:38088" response transport failed `read tcp 127.0.0.1:38088->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.839+05:30 [Error] receiving packet: read tcp 127.0.0.1:38090->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.839+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7667940430041423006) connection "127.0.0.1:38090" response transport failed `read tcp 127.0.0.1:38090->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.859+05:30 [Error] receiving packet: read tcp 127.0.0.1:37408->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.859+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2189220508685589625) connection "127.0.0.1:37408" response transport failed `read tcp 127.0.0.1:37408->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.868+05:30 [Error] receiving packet: read tcp 127.0.0.1:38094->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.868+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7646423813217812307) connection "127.0.0.1:38094" response transport failed `read tcp 127.0.0.1:38094->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.871+05:30 [Error] receiving packet: read tcp 127.0.0.1:37412->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.871+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7646423813217812307) connection "127.0.0.1:37412" response transport failed `read tcp 127.0.0.1:37412->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.878+05:30 [Error] receiving packet: read tcp 127.0.0.1:37416->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.878+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7646423813217812307) connection "127.0.0.1:37416" response transport failed `read tcp 127.0.0.1:37416->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.878+05:30 [Error] receiving packet: read tcp 127.0.0.1:38098->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.878+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7646423813217812307) connection "127.0.0.1:38098" response transport failed `read tcp 127.0.0.1:38098->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.896+05:30 [Error] receiving packet: read tcp 127.0.0.1:38102->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.896+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6764778739203201887) connection "127.0.0.1:38102" response transport failed `read tcp 127.0.0.1:38102->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.900+05:30 [Error] receiving packet: read tcp 127.0.0.1:37420->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.900+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6764778739203201887) connection "127.0.0.1:37420" response transport failed `read tcp 127.0.0.1:37420->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.910+05:30 [Error] receiving packet: read tcp 127.0.0.1:37422->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.910+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4854508490151919897) connection "127.0.0.1:37422" response transport failed `read tcp 127.0.0.1:37422->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.914+05:30 [Error] receiving packet: read tcp 127.0.0.1:37424->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:47.914+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4854508490151919897) connection "127.0.0.1:37424" response transport failed `read tcp 127.0.0.1:37424->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:47.917+05:30 [Error] receiving packet: read tcp 127.0.0.1:38110->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.917+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4854508490151919897) connection "127.0.0.1:38110" response transport failed `read tcp 127.0.0.1:38110->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:47.991+05:30 [Error] receiving packet: read tcp 127.0.0.1:38114->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:47.991+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1869986175420642535) connection "127.0.0.1:38114" response transport failed `read tcp 127.0.0.1:38114->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.046+05:30 [Error] receiving packet: read tcp 127.0.0.1:37428->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.046+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7350924727414520255) connection "127.0.0.1:37428" response transport failed `read tcp 127.0.0.1:37428->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.061+05:30 [Error] receiving packet: read tcp 127.0.0.1:38116->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.061+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2288976205861289574) connection "127.0.0.1:38116" response transport failed `read tcp 127.0.0.1:38116->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.065+05:30 [Error] receiving packet: read tcp 127.0.0.1:38118->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.065+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2288976205861289574) connection "127.0.0.1:38118" response transport failed `read tcp 127.0.0.1:38118->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.079+05:30 [Error] receiving packet: read tcp 127.0.0.1:37438->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.079+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5744739774154123828) connection "127.0.0.1:37438" response transport failed `read tcp 127.0.0.1:37438->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.082+05:30 [Error] receiving packet: read tcp 127.0.0.1:38120->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.082+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5744739774154123828) connection "127.0.0.1:38120" response transport failed `read tcp 127.0.0.1:38120->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.113+05:30 [Error] receiving packet: read tcp 127.0.0.1:38126->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.113+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-897855906410252239) connection "127.0.0.1:38126" response transport failed `read tcp 127.0.0.1:38126->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.122+05:30 [Error] receiving packet: read tcp 127.0.0.1:38128->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.122+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-897855906410252239) connection "127.0.0.1:38128" response transport failed `read tcp 127.0.0.1:38128->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.133+05:30 [Error] receiving packet: read tcp 127.0.0.1:38130->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.133+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8657406247884381300) connection "127.0.0.1:38130" response transport failed `read tcp 127.0.0.1:38130->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.147+05:30 [Error] receiving packet: read tcp 127.0.0.1:38132->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.147+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6452036030491241916) connection "127.0.0.1:38132" response transport failed `read tcp 127.0.0.1:38132->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.155+05:30 [Error] receiving packet: read tcp 127.0.0.1:38134->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.155+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2796556298120989901) connection "127.0.0.1:38134" response transport failed `read tcp 127.0.0.1:38134->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.159+05:30 [Error] receiving packet: read tcp 127.0.0.1:37440->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.159+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2796556298120989901) connection "127.0.0.1:37440" response transport failed `read tcp 127.0.0.1:37440->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.159+05:30 [Error] receiving packet: read tcp 127.0.0.1:38136->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.159+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2796556298120989901) connection "127.0.0.1:38136" response transport failed `read tcp 127.0.0.1:38136->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.168+05:30 [Error] receiving packet: read tcp 127.0.0.1:38138->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.168+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8073348267505990537) connection "127.0.0.1:38138" response transport failed `read tcp 127.0.0.1:38138->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.190+05:30 [Error] receiving packet: read tcp 127.0.0.1:37458->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.190+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7207526971079506933) connection "127.0.0.1:37458" response transport failed `read tcp 127.0.0.1:37458->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.224+05:30 [Error] receiving packet: read tcp 127.0.0.1:38140->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.224+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1139833361608066101) connection "127.0.0.1:38140" response transport failed `read tcp 127.0.0.1:38140->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.224+05:30 [Error] receiving packet: read tcp 127.0.0.1:37460->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.224+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1139833361608066101) connection "127.0.0.1:37460" response transport failed `read tcp 127.0.0.1:37460->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.242+05:30 [Error] receiving packet: read tcp 127.0.0.1:37462->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.242+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1139833361608066101) connection "127.0.0.1:37462" response transport failed `read tcp 127.0.0.1:37462->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.257+05:30 [Error] receiving packet: read tcp 127.0.0.1:37464->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.257+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1275161254185483458) connection "127.0.0.1:37464" response transport failed `read tcp 127.0.0.1:37464->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.260+05:30 [Error] receiving packet: read tcp 127.0.0.1:38150->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.260+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1275161254185483458) connection "127.0.0.1:38150" response transport failed `read tcp 127.0.0.1:38150->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.285+05:30 [Error] receiving packet: read tcp 127.0.0.1:38152->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.285+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-301713780250894544) connection "127.0.0.1:38152" response transport failed `read tcp 127.0.0.1:38152->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.306+05:30 [Error] receiving packet: read tcp 127.0.0.1:37470->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.306+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4038362341167637928) connection "127.0.0.1:37470" response transport failed `read tcp 127.0.0.1:37470->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.364+05:30 [Error] receiving packet: read tcp 127.0.0.1:37476->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.364+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4250892926594942322) connection "127.0.0.1:37476" response transport failed `read tcp 127.0.0.1:37476->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.370+05:30 [Error] receiving packet: read tcp 127.0.0.1:38156->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.370+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4250892926594942322) connection "127.0.0.1:38156" response transport failed `read tcp 127.0.0.1:38156->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.371+05:30 [Error] receiving packet: read tcp 127.0.0.1:37478->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.371+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4250892926594942322) connection "127.0.0.1:37478" response transport failed `read tcp 127.0.0.1:37478->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.393+05:30 [Error] receiving packet: read tcp 127.0.0.1:37480->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.393+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4250892926594942322) connection "127.0.0.1:37480" response transport failed `read tcp 127.0.0.1:37480->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.393+05:30 [Error] receiving packet: read tcp 127.0.0.1:38168->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.393+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3326206408542090229) connection "127.0.0.1:38168" response transport failed `read tcp 127.0.0.1:38168->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.405+05:30 [Error] receiving packet: read tcp 127.0.0.1:38170->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.406+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(66783744191901759) connection "127.0.0.1:38170" response transport failed `read tcp 127.0.0.1:38170->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.420+05:30 [Error] receiving packet: read tcp 127.0.0.1:37482->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.431+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6330343631763054345) connection "127.0.0.1:37482" response transport failed `read tcp 127.0.0.1:37482->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.514+05:30 [Error] receiving packet: read tcp 127.0.0.1:38174->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.514+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6068347183062380162) connection "127.0.0.1:38174" response transport failed `read tcp 127.0.0.1:38174->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.554+05:30 [Error] receiving packet: read tcp 127.0.0.1:38188->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.554+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8968420584634799281) connection "127.0.0.1:38188" response transport failed `read tcp 127.0.0.1:38188->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.564+05:30 [Error] receiving packet: read tcp 127.0.0.1:37494->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.564+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2934171837193107990) connection "127.0.0.1:37494" response transport failed `read tcp 127.0.0.1:37494->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.565+05:30 [Error] receiving packet: read tcp 127.0.0.1:38190->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.565+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2934171837193107990) connection "127.0.0.1:38190" response transport failed `read tcp 127.0.0.1:38190->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.588+05:30 [Error] receiving packet: read tcp 127.0.0.1:38192->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.588+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1219699235584877455) connection "127.0.0.1:38192" response transport failed `read tcp 127.0.0.1:38192->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.588+05:30 [Error] receiving packet: read tcp 127.0.0.1:37510->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.588+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1219699235584877455) connection "127.0.0.1:37510" response transport failed `read tcp 127.0.0.1:37510->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.624+05:30 [Error] receiving packet: read tcp 127.0.0.1:38198->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.624+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6838959949561179419) connection "127.0.0.1:38198" response transport failed `read tcp 127.0.0.1:38198->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.643+05:30 [Error] receiving packet: read tcp 127.0.0.1:37512->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.643+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8541401576758812736) connection "127.0.0.1:37512" response transport failed `read tcp 127.0.0.1:37512->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.648+05:30 [Error] receiving packet: read tcp 127.0.0.1:37516->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.648+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8541401576758812736) connection "127.0.0.1:37516" response transport failed `read tcp 127.0.0.1:37516->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.652+05:30 [Error] receiving packet: read tcp 127.0.0.1:38202->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.652+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8541401576758812736) connection "127.0.0.1:38202" response transport failed `read tcp 127.0.0.1:38202->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.661+05:30 [Error] receiving packet: read tcp 127.0.0.1:38204->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.661+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(21489149713649481) connection "127.0.0.1:38204" response transport failed `read tcp 127.0.0.1:38204->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.666+05:30 [Error] receiving packet: read tcp 127.0.0.1:37522->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.666+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(21489149713649481) connection "127.0.0.1:37522" response transport failed `read tcp 127.0.0.1:37522->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.702+05:30 [Error] receiving packet: read tcp 127.0.0.1:37526->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.702+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8764320885981737254) connection "127.0.0.1:37526" response transport failed `read tcp 127.0.0.1:37526->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.710+05:30 [Error] receiving packet: read tcp 127.0.0.1:37532->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.710+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8764320885981737254) connection "127.0.0.1:37532" response transport failed `read tcp 127.0.0.1:37532->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.716+05:30 [Error] receiving packet: read tcp 127.0.0.1:37536->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.716+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8764320885981737254) connection "127.0.0.1:37536" response transport failed `read tcp 127.0.0.1:37536->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.738+05:30 [Error] receiving packet: read tcp 127.0.0.1:37538->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.738+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8828515822797985207) connection "127.0.0.1:37538" response transport failed `read tcp 127.0.0.1:37538->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.738+05:30 [Error] receiving packet: read tcp 127.0.0.1:38208->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.739+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8828515822797985207) connection "127.0.0.1:38208" response transport failed `read tcp 127.0.0.1:38208->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.756+05:30 [Error] receiving packet: read tcp 127.0.0.1:38226->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.756+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2991538347211954439) connection "127.0.0.1:38226" response transport failed `read tcp 127.0.0.1:38226->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.768+05:30 [Error] receiving packet: read tcp 127.0.0.1:38228->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.768+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4706457060998697629) connection "127.0.0.1:38228" response transport failed `read tcp 127.0.0.1:38228->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.770+05:30 [Error] receiving packet: read tcp 127.0.0.1:38230->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.770+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4706457060998697629) connection "127.0.0.1:38230" response transport failed `read tcp 127.0.0.1:38230->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.791+05:30 [Error] receiving packet: read tcp 127.0.0.1:38232->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.791+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1293531526555875885) connection "127.0.0.1:38232" response transport failed `read tcp 127.0.0.1:38232->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.795+05:30 [Error] receiving packet: read tcp 127.0.0.1:37540->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.795+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1293531526555875885) connection "127.0.0.1:37540" response transport failed `read tcp 127.0.0.1:37540->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.801+05:30 [Error] receiving packet: read tcp 127.0.0.1:38240->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.801+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1293531526555875885) connection "127.0.0.1:38240" response transport failed `read tcp 127.0.0.1:38240->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.831+05:30 [Error] receiving packet: read tcp 127.0.0.1:37562->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.831+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4969762594647560335) connection "127.0.0.1:37562" response transport failed `read tcp 127.0.0.1:37562->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.831+05:30 [Error] receiving packet: read tcp 127.0.0.1:38242->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.831+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4969762594647560335) connection "127.0.0.1:38242" response transport failed `read tcp 127.0.0.1:38242->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.839+05:30 [Error] receiving packet: read tcp 127.0.0.1:37564->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.839+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4969762594647560335) connection "127.0.0.1:37564" response transport failed `read tcp 127.0.0.1:37564->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.858+05:30 [Error] receiving packet: read tcp 127.0.0.1:38252->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.858+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3593231621419848382) connection "127.0.0.1:38252" response transport failed `read tcp 127.0.0.1:38252->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.920+05:30 [Error] receiving packet: read tcp 127.0.0.1:37572->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:48.920+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(9120112617680480883) connection "127.0.0.1:37572" response transport failed `read tcp 127.0.0.1:37572->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:48.935+05:30 [Error] receiving packet: read tcp 127.0.0.1:38258->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.935+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(866195641334697245) connection "127.0.0.1:38258" response transport failed `read tcp 127.0.0.1:38258->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:48.938+05:30 [Error] receiving packet: read tcp 127.0.0.1:38260->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:48.939+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(866195641334697245) connection "127.0.0.1:38260" response transport failed `read tcp 127.0.0.1:38260->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:49.007+05:30 [Error] receiving packet: read tcp 127.0.0.1:37580->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.007+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4760030175787294643) connection "127.0.0.1:37580" response transport failed `read tcp 127.0.0.1:37580->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.011+05:30 [Error] receiving packet: read tcp 127.0.0.1:37582->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.011+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4760030175787294643) connection "127.0.0.1:37582" response transport failed `read tcp 127.0.0.1:37582->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.028+05:30 [Error] receiving packet: read tcp 127.0.0.1:37584->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.029+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7223011369892635175) connection "127.0.0.1:37584" response transport failed `read tcp 127.0.0.1:37584->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.031+05:30 [Error] receiving packet: read tcp 127.0.0.1:37586->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.031+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7223011369892635175) connection "127.0.0.1:37586" response transport failed `read tcp 127.0.0.1:37586->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.055+05:30 [Error] receiving packet: read tcp 127.0.0.1:37588->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.056+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8683248573476851881) connection "127.0.0.1:37588" response transport failed `read tcp 127.0.0.1:37588->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.058+05:30 [Error] receiving packet: read tcp 127.0.0.1:38262->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:49.058+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8683248573476851881) connection "127.0.0.1:38262" response transport failed `read tcp 127.0.0.1:38262->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:49.062+05:30 [Error] receiving packet: read tcp 127.0.0.1:38276->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:49.062+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8683248573476851881) connection "127.0.0.1:38276" response transport failed `read tcp 127.0.0.1:38276->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:49.065+05:30 [Error] receiving packet: read tcp 127.0.0.1:38278->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:49.065+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8683248573476851881) connection "127.0.0.1:38278" response transport failed `read tcp 127.0.0.1:38278->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:49.069+05:30 [Error] receiving packet: read tcp 127.0.0.1:38280->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:49.069+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8683248573476851881) connection "127.0.0.1:38280" response transport failed `read tcp 127.0.0.1:38280->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:49.073+05:30 [Error] receiving packet: read tcp 127.0.0.1:37598->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.073+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8683248573476851881) connection "127.0.0.1:37598" response transport failed `read tcp 127.0.0.1:37598->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.077+05:30 [Error] receiving packet: read tcp 127.0.0.1:37600->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.077+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8683248573476851881) connection "127.0.0.1:37600" response transport failed `read tcp 127.0.0.1:37600->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:49.088+05:30 [Error] receiving packet: read tcp 127.0.0.1:37602->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:49.088+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8683248573476851881) connection "127.0.0.1:37602" response transport failed `read tcp 127.0.0.1:37602->127.0.0.1:9113: i/o timeout`
2023/01/06 23:48:49 Rebalance progress: 78.11842105263159
2023-01-06T23:48:51.551+05:30 [Error] receiving packet: read tcp 127.0.0.1:37604->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.551+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6248942459749779721) connection "127.0.0.1:37604" response transport failed `read tcp 127.0.0.1:37604->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.577+05:30 [Error] receiving packet: read tcp 127.0.0.1:38304->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.577+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2489655386026542567) connection "127.0.0.1:38304" response transport failed `read tcp 127.0.0.1:38304->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.579+05:30 [Error] receiving packet: read tcp 127.0.0.1:37698->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.579+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2489655386026542567) connection "127.0.0.1:37698" response transport failed `read tcp 127.0.0.1:37698->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.600+05:30 [Error] receiving packet: read tcp 127.0.0.1:37702->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.600+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3995368831088017811) connection "127.0.0.1:37702" response transport failed `read tcp 127.0.0.1:37702->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.631+05:30 [Error] receiving packet: read tcp 127.0.0.1:37704->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.631+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-997443788492860099) connection "127.0.0.1:37704" response transport failed `read tcp 127.0.0.1:37704->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.645+05:30 [Error] receiving packet: read tcp 127.0.0.1:38384->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.645+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8587564523268475101) connection "127.0.0.1:38384" response transport failed `read tcp 127.0.0.1:38384->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.645+05:30 [Error] receiving packet: read tcp 127.0.0.1:37706->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.645+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8587564523268475101) connection "127.0.0.1:37706" response transport failed `read tcp 127.0.0.1:37706->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.648+05:30 [Error] receiving packet: read tcp 127.0.0.1:37708->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.648+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8587564523268475101) connection "127.0.0.1:37708" response transport failed `read tcp 127.0.0.1:37708->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.689+05:30 [Error] receiving packet: read tcp 127.0.0.1:37710->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.689+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5015608403023408528) connection "127.0.0.1:37710" response transport failed `read tcp 127.0.0.1:37710->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.760+05:30 [Error] receiving packet: read tcp 127.0.0.1:38396->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.760+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(752542415104049911) connection "127.0.0.1:38396" response transport failed `read tcp 127.0.0.1:38396->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.829+05:30 [Error] receiving packet: read tcp 127.0.0.1:37714->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.829+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3037952039641539007) connection "127.0.0.1:37714" response transport failed `read tcp 127.0.0.1:37714->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.838+05:30 [Error] receiving packet: read tcp 127.0.0.1:38404->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.838+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1354761316580738880) connection "127.0.0.1:38404" response transport failed `read tcp 127.0.0.1:38404->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.882+05:30 [Error] receiving packet: read tcp 127.0.0.1:38408->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.882+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2712055275800245589) connection "127.0.0.1:38408" response transport failed `read tcp 127.0.0.1:38408->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.888+05:30 [Error] receiving packet: read tcp 127.0.0.1:38410->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.888+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2712055275800245589) connection "127.0.0.1:38410" response transport failed `read tcp 127.0.0.1:38410->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.890+05:30 [Error] receiving packet: read tcp 127.0.0.1:38412->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.890+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2712055275800245589) connection "127.0.0.1:38412" response transport failed `read tcp 127.0.0.1:38412->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.892+05:30 [Error] receiving packet: read tcp 127.0.0.1:37722->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.892+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2712055275800245589) connection "127.0.0.1:37722" response transport failed `read tcp 127.0.0.1:37722->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.897+05:30 [Error] receiving packet: read tcp 127.0.0.1:37730->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:51.897+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2712055275800245589) connection "127.0.0.1:37730" response transport failed `read tcp 127.0.0.1:37730->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:51.927+05:30 [Error] receiving packet: read tcp 127.0.0.1:38418->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.927+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2870466764475929784) connection "127.0.0.1:38418" response transport failed `read tcp 127.0.0.1:38418->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.943+05:30 [Error] receiving packet: read tcp 127.0.0.1:38420->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.943+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5710037191792789091) connection "127.0.0.1:38420" response transport failed `read tcp 127.0.0.1:38420->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:51.960+05:30 [Error] receiving packet: read tcp 127.0.0.1:38422->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:51.961+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5037613492372605880) connection "127.0.0.1:38422" response transport failed `read tcp 127.0.0.1:38422->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.002+05:30 [Error] receiving packet: read tcp 127.0.0.1:38424->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.002+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1116391977351252155) connection "127.0.0.1:38424" response transport failed `read tcp 127.0.0.1:38424->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.004+05:30 [Error] receiving packet: read tcp 127.0.0.1:37732->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.004+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1116391977351252155) connection "127.0.0.1:37732" response transport failed `read tcp 127.0.0.1:37732->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.024+05:30 [Error] receiving packet: read tcp 127.0.0.1:37742->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.024+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5745627677438667296) connection "127.0.0.1:37742" response transport failed `read tcp 127.0.0.1:37742->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.024+05:30 [Error] receiving packet: read tcp 127.0.0.1:38428->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.024+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5745627677438667296) connection "127.0.0.1:38428" response transport failed `read tcp 127.0.0.1:38428->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.038+05:30 [Error] receiving packet: read tcp 127.0.0.1:37746->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.038+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-340580548666280610) connection "127.0.0.1:37746" response transport failed `read tcp 127.0.0.1:37746->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.043+05:30 [Error] receiving packet: read tcp 127.0.0.1:37748->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.043+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-340580548666280610) connection "127.0.0.1:37748" response transport failed `read tcp 127.0.0.1:37748->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.078+05:30 [Error] receiving packet: read tcp 127.0.0.1:37750->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.078+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4446005661409882574) connection "127.0.0.1:37750" response transport failed `read tcp 127.0.0.1:37750->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.081+05:30 [Error] receiving packet: read tcp 127.0.0.1:37754->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.081+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4446005661409882574) connection "127.0.0.1:37754" response transport failed `read tcp 127.0.0.1:37754->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.086+05:30 [Error] receiving packet: read tcp 127.0.0.1:38436->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.086+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4446005661409882574) connection "127.0.0.1:38436" response transport failed `read tcp 127.0.0.1:38436->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.099+05:30 [Error] receiving packet: read tcp 127.0.0.1:37758->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.099+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8875307376468093246) connection "127.0.0.1:37758" response transport failed `read tcp 127.0.0.1:37758->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.099+05:30 [Error] receiving packet: read tcp 127.0.0.1:38440->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.099+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8875307376468093246) connection "127.0.0.1:38440" response transport failed `read tcp 127.0.0.1:38440->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.124+05:30 [Error] receiving packet: read tcp 127.0.0.1:38444->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.124+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2820862290239885464) connection "127.0.0.1:38444" response transport failed `read tcp 127.0.0.1:38444->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.138+05:30 [Error] receiving packet: read tcp 127.0.0.1:38448->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.138+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(623573702046492947) connection "127.0.0.1:38448" response transport failed `read tcp 127.0.0.1:38448->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.151+05:30 [Error] receiving packet: read tcp 127.0.0.1:37762->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.151+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4887094511737573600) connection "127.0.0.1:37762" response transport failed `read tcp 127.0.0.1:37762->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.152+05:30 [Error] receiving packet: read tcp 127.0.0.1:38450->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.152+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4887094511737573600) connection "127.0.0.1:38450" response transport failed `read tcp 127.0.0.1:38450->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.172+05:30 [Error] receiving packet: read tcp 127.0.0.1:38454->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.172+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3805489208286392265) connection "127.0.0.1:38454" response transport failed `read tcp 127.0.0.1:38454->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.172+05:30 [Error] receiving packet: read tcp 127.0.0.1:38456->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.172+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3805489208286392265) connection "127.0.0.1:38456" response transport failed `read tcp 127.0.0.1:38456->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.186+05:30 [Error] receiving packet: read tcp 127.0.0.1:38458->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.186+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(9120550372012396755) connection "127.0.0.1:38458" response transport failed `read tcp 127.0.0.1:38458->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.196+05:30 [Error] receiving packet: read tcp 127.0.0.1:37768->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.196+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3585866110326817615) connection "127.0.0.1:37768" response transport failed `read tcp 127.0.0.1:37768->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.231+05:30 [Error] receiving packet: read tcp 127.0.0.1:37778->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.231+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9014497443427951307) connection "127.0.0.1:37778" response transport failed `read tcp 127.0.0.1:37778->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.247+05:30 [Error] receiving packet: read tcp 127.0.0.1:38460->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.247+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6071113844782867364) connection "127.0.0.1:38460" response transport failed `read tcp 127.0.0.1:38460->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.261+05:30 [Error] receiving packet: read tcp 127.0.0.1:38466->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.261+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1399585549391642420) connection "127.0.0.1:38466" response transport failed `read tcp 127.0.0.1:38466->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.283+05:30 [Error] receiving packet: read tcp 127.0.0.1:38468->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.283+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(339799910298157203) connection "127.0.0.1:38468" response transport failed `read tcp 127.0.0.1:38468->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.314+05:30 [Error] receiving packet: read tcp 127.0.0.1:37780->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.314+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8874608270425606743) connection "127.0.0.1:37780" response transport failed `read tcp 127.0.0.1:37780->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.316+05:30 [Error] receiving packet: read tcp 127.0.0.1:37788->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.316+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8874608270425606743) connection "127.0.0.1:37788" response transport failed `read tcp 127.0.0.1:37788->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.316+05:30 [Error] receiving packet: read tcp 127.0.0.1:38470->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.316+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8874608270425606743) connection "127.0.0.1:38470" response transport failed `read tcp 127.0.0.1:38470->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.338+05:30 [Error] receiving packet: read tcp 127.0.0.1:37792->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.338+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8874608270425606743) connection "127.0.0.1:37792" response transport failed `read tcp 127.0.0.1:37792->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.338+05:30 [Error] receiving packet: read tcp 127.0.0.1:37790->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.338+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8874608270425606743) connection "127.0.0.1:37790" response transport failed `read tcp 127.0.0.1:37790->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.359+05:30 [Error] receiving packet: read tcp 127.0.0.1:38480->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.359+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1654352004655170973) connection "127.0.0.1:38480" response transport failed `read tcp 127.0.0.1:38480->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.370+05:30 [Error] receiving packet: read tcp 127.0.0.1:37794->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.370+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1193106749422285467) connection "127.0.0.1:37794" response transport failed `read tcp 127.0.0.1:37794->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.393+05:30 [Error] receiving packet: read tcp 127.0.0.1:37806->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.393+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9115791265048905785) connection "127.0.0.1:37806" response transport failed `read tcp 127.0.0.1:37806->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.430+05:30 [Error] receiving packet: read tcp 127.0.0.1:37808->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.430+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2349688775856218206) connection "127.0.0.1:37808" response transport failed `read tcp 127.0.0.1:37808->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.434+05:30 [Error] receiving packet: read tcp 127.0.0.1:37810->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.434+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2349688775856218206) connection "127.0.0.1:37810" response transport failed `read tcp 127.0.0.1:37810->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.438+05:30 [Error] receiving packet: read tcp 127.0.0.1:37812->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.438+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2349688775856218206) connection "127.0.0.1:37812" response transport failed `read tcp 127.0.0.1:37812->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.442+05:30 [Error] receiving packet: read tcp 127.0.0.1:37814->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.442+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2349688775856218206) connection "127.0.0.1:37814" response transport failed `read tcp 127.0.0.1:37814->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.485+05:30 [Error] receiving packet: read tcp 127.0.0.1:37816->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.485+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2507239260352020763) connection "127.0.0.1:37816" response transport failed `read tcp 127.0.0.1:37816->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.485+05:30 [Error] receiving packet: read tcp 127.0.0.1:38488->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.485+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2507239260352020763) connection "127.0.0.1:38488" response transport failed `read tcp 127.0.0.1:38488->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.494+05:30 [Error] receiving packet: read tcp 127.0.0.1:38502->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.494+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5383482738498223595) connection "127.0.0.1:38502" response transport failed `read tcp 127.0.0.1:38502->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.505+05:30 [Error] receiving packet: read tcp 127.0.0.1:37820->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.505+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3517043646861393304) connection "127.0.0.1:37820" response transport failed `read tcp 127.0.0.1:37820->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.535+05:30 [Error] receiving packet: read tcp 127.0.0.1:37822->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.535+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3517043646861393304) connection "127.0.0.1:37822" response transport failed `read tcp 127.0.0.1:37822->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.556+05:30 [Error] receiving packet: read tcp 127.0.0.1:38508->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.556+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3517043646861393304) connection "127.0.0.1:38508" response transport failed `read tcp 127.0.0.1:38508->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.619+05:30 [Error] receiving packet: read tcp 127.0.0.1:38510->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.619+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6245701657706656638) connection "127.0.0.1:38510" response transport failed `read tcp 127.0.0.1:38510->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.639+05:30 [Error] receiving packet: read tcp 127.0.0.1:38512->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.639+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7214096000351406666) connection "127.0.0.1:38512" response transport failed `read tcp 127.0.0.1:38512->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.643+05:30 [Error] receiving packet: read tcp 127.0.0.1:37830->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.643+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7214096000351406666) connection "127.0.0.1:37830" response transport failed `read tcp 127.0.0.1:37830->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.805+05:30 [Error] receiving packet: read tcp 127.0.0.1:37836->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.805+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1224727667215997002) connection "127.0.0.1:37836" response transport failed `read tcp 127.0.0.1:37836->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.839+05:30 [Error] receiving packet: read tcp 127.0.0.1:38516->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.839+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1974515582575616332) connection "127.0.0.1:38516" response transport failed `read tcp 127.0.0.1:38516->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.876+05:30 [Error] receiving packet: read tcp 127.0.0.1:38524->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.876+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8572929930791950871) connection "127.0.0.1:38524" response transport failed `read tcp 127.0.0.1:38524->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.881+05:30 [Error] receiving packet: read tcp 127.0.0.1:37842->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.881+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8572929930791950871) connection "127.0.0.1:37842" response transport failed `read tcp 127.0.0.1:37842->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.884+05:30 [Error] receiving packet: read tcp 127.0.0.1:37844->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:52.884+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8572929930791950871) connection "127.0.0.1:37844" response transport failed `read tcp 127.0.0.1:37844->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:52.972+05:30 [Error] receiving packet: read tcp 127.0.0.1:38532->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.972+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7878011010910426100) connection "127.0.0.1:38532" response transport failed `read tcp 127.0.0.1:38532->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:52.976+05:30 [Error] receiving packet: read tcp 127.0.0.1:38534->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:52.976+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7878011010910426100) connection "127.0.0.1:38534" response transport failed `read tcp 127.0.0.1:38534->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.010+05:30 [Error] receiving packet: read tcp 127.0.0.1:38536->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.010+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7878011010910426100) connection "127.0.0.1:38536" response transport failed `read tcp 127.0.0.1:38536->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.012+05:30 [Error] receiving packet: read tcp 127.0.0.1:38538->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.013+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7878011010910426100) connection "127.0.0.1:38538" response transport failed `read tcp 127.0.0.1:38538->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.061+05:30 [Error] receiving packet: read tcp 127.0.0.1:38540->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.061+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3892676505126521215) connection "127.0.0.1:38540" response transport failed `read tcp 127.0.0.1:38540->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.062+05:30 [Error] receiving packet: read tcp 127.0.0.1:37846->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.062+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3892676505126521215) connection "127.0.0.1:37846" response transport failed `read tcp 127.0.0.1:37846->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.062+05:30 [Error] receiving packet: read tcp 127.0.0.1:37858->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.062+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3892676505126521215) connection "127.0.0.1:37858" response transport failed `read tcp 127.0.0.1:37858->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.066+05:30 [Error] receiving packet: read tcp 127.0.0.1:38544->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.066+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3892676505126521215) connection "127.0.0.1:38544" response transport failed `read tcp 127.0.0.1:38544->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.085+05:30 [Error] receiving packet: read tcp 127.0.0.1:38548->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.085+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8420035948674914767) connection "127.0.0.1:38548" response transport failed `read tcp 127.0.0.1:38548->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.106+05:30 [Error] receiving packet: read tcp 127.0.0.1:38550->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.106+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2248688786655951002) connection "127.0.0.1:38550" response transport failed `read tcp 127.0.0.1:38550->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.111+05:30 [Error] receiving packet: read tcp 127.0.0.1:37862->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.111+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1409743836250163020) connection "127.0.0.1:37862" response transport failed `read tcp 127.0.0.1:37862->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.132+05:30 [Error] receiving packet: read tcp 127.0.0.1:38552->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.132+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4255831073778110623) connection "127.0.0.1:38552" response transport failed `read tcp 127.0.0.1:38552->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.146+05:30 [Error] receiving packet: read tcp 127.0.0.1:38556->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.146+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6407699876796664943) connection "127.0.0.1:38556" response transport failed `read tcp 127.0.0.1:38556->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.148+05:30 [Error] receiving packet: read tcp 127.0.0.1:38558->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.148+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6407699876796664943) connection "127.0.0.1:38558" response transport failed `read tcp 127.0.0.1:38558->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.160+05:30 [Error] receiving packet: read tcp 127.0.0.1:37870->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.160+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3946205084937027256) connection "127.0.0.1:37870" response transport failed `read tcp 127.0.0.1:37870->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.164+05:30 [Error] receiving packet: read tcp 127.0.0.1:38560->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.164+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3946205084937027256) connection "127.0.0.1:38560" response transport failed `read tcp 127.0.0.1:38560->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.180+05:30 [Error] receiving packet: read tcp 127.0.0.1:37878->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.180+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1051152960581807551) connection "127.0.0.1:37878" response transport failed `read tcp 127.0.0.1:37878->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.183+05:30 [Error] receiving packet: read tcp 127.0.0.1:37880->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.183+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1051152960581807551) connection "127.0.0.1:37880" response transport failed `read tcp 127.0.0.1:37880->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.196+05:30 [Error] receiving packet: read tcp 127.0.0.1:37882->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.196+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1063389480755799450) connection "127.0.0.1:37882" response transport failed `read tcp 127.0.0.1:37882->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.202+05:30 [Error] receiving packet: read tcp 127.0.0.1:38568->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.202+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1063389480755799450) connection "127.0.0.1:38568" response transport failed `read tcp 127.0.0.1:38568->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.207+05:30 [Error] receiving packet: read tcp 127.0.0.1:37886->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.207+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1063389480755799450) connection "127.0.0.1:37886" response transport failed `read tcp 127.0.0.1:37886->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.215+05:30 [Error] receiving packet: read tcp 127.0.0.1:38572->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.215+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1063389480755799450) connection "127.0.0.1:38572" response transport failed `read tcp 127.0.0.1:38572->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.221+05:30 [Error] receiving packet: read tcp 127.0.0.1:37890->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.221+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1063389480755799450) connection "127.0.0.1:37890" response transport failed `read tcp 127.0.0.1:37890->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:53.236+05:30 [Error] receiving packet: read tcp 127.0.0.1:38576->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.237+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3325904521312948656) connection "127.0.0.1:38576" response transport failed `read tcp 127.0.0.1:38576->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.262+05:30 [Error] receiving packet: read tcp 127.0.0.1:38578->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:53.262+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4464429075064435314) connection "127.0.0.1:38578" response transport failed `read tcp 127.0.0.1:38578->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:53.263+05:30 [Error] receiving packet: read tcp 127.0.0.1:37896->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:53.263+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4464429075064435314) connection "127.0.0.1:37896" response transport failed `read tcp 127.0.0.1:37896->127.0.0.1:9113: i/o timeout`
2023/01/06 23:48:54 Rebalance progress: 78.12
2023-01-06T23:48:55.828+05:30 [Error] receiving packet: read tcp 127.0.0.1:38588->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.828+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-582749289029742743) connection "127.0.0.1:38588" response transport failed `read tcp 127.0.0.1:38588->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.832+05:30 [Error] receiving packet: read tcp 127.0.0.1:37898->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:55.832+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-582749289029742743) connection "127.0.0.1:37898" response transport failed `read tcp 127.0.0.1:37898->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:55.840+05:30 [Error] receiving packet: read tcp 127.0.0.1:38658->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.840+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-582749289029742743) connection "127.0.0.1:38658" response transport failed `read tcp 127.0.0.1:38658->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.843+05:30 [Error] receiving packet: read tcp 127.0.0.1:38660->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.843+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-582749289029742743) connection "127.0.0.1:38660" response transport failed `read tcp 127.0.0.1:38660->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.893+05:30 [Error] receiving packet: read tcp 127.0.0.1:37978->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:55.893+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(435329897428309328) connection "127.0.0.1:37978" response transport failed `read tcp 127.0.0.1:37978->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:55.930+05:30 [Error] receiving packet: read tcp 127.0.0.1:38666->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.930+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-9141010307534705626) connection "127.0.0.1:38666" response transport failed `read tcp 127.0.0.1:38666->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.947+05:30 [Error] receiving packet: read tcp 127.0.0.1:38672->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.947+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4929378026288449092) connection "127.0.0.1:38672" response transport failed `read tcp 127.0.0.1:38672->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.964+05:30 [Error] receiving packet: read tcp 127.0.0.1:38674->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.965+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8709269196194752219) connection "127.0.0.1:38674" response transport failed `read tcp 127.0.0.1:38674->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.965+05:30 [Error] receiving packet: read tcp 127.0.0.1:37984->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:55.965+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8709269196194752219) connection "127.0.0.1:37984" response transport failed `read tcp 127.0.0.1:37984->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:55.973+05:30 [Error] receiving packet: read tcp 127.0.0.1:38676->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.973+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-303956370007954451) connection "127.0.0.1:38676" response transport failed `read tcp 127.0.0.1:38676->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.987+05:30 [Error] receiving packet: read tcp 127.0.0.1:38678->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.987+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5852575700921730094) connection "127.0.0.1:38678" response transport failed `read tcp 127.0.0.1:38678->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:55.990+05:30 [Error] receiving packet: read tcp 127.0.0.1:38680->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:55.990+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5852575700921730094) connection "127.0.0.1:38680" response transport failed `read tcp 127.0.0.1:38680->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.025+05:30 [Error] receiving packet: read tcp 127.0.0.1:38684->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.025+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1521093339636491334) connection "127.0.0.1:38684" response transport failed `read tcp 127.0.0.1:38684->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.036+05:30 [Error] receiving packet: read tcp 127.0.0.1:38686->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.036+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7229936737021423710) connection "127.0.0.1:38686" response transport failed `read tcp 127.0.0.1:38686->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.038+05:30 [Error] receiving packet: read tcp 127.0.0.1:37998->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.038+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7229936737021423710) connection "127.0.0.1:37998" response transport failed `read tcp 127.0.0.1:37998->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.053+05:30 [Error] receiving packet: read tcp 127.0.0.1:38006->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.053+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4113889890454842108) connection "127.0.0.1:38006" response transport failed `read tcp 127.0.0.1:38006->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.057+05:30 [Error] receiving packet: read tcp 127.0.0.1:38008->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.057+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4113889890454842108) connection "127.0.0.1:38008" response transport failed `read tcp 127.0.0.1:38008->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.087+05:30 [Error] receiving packet: read tcp 127.0.0.1:38010->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.088+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3292492338972057091) connection "127.0.0.1:38010" response transport failed `read tcp 127.0.0.1:38010->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.096+05:30 [Error] receiving packet: read tcp 127.0.0.1:38688->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.096+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8572310899359474274) connection "127.0.0.1:38688" response transport failed `read tcp 127.0.0.1:38688->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.131+05:30 [Error] receiving packet: read tcp 127.0.0.1:38014->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1942405581453059458) connection "127.0.0.1:38014" response transport failed `read tcp 127.0.0.1:38014->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.131+05:30 [Error] receiving packet: read tcp 127.0.0.1:38696->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1942405581453059458) connection "127.0.0.1:38696" response transport failed `read tcp 127.0.0.1:38696->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.138+05:30 [Error] receiving packet: read tcp 127.0.0.1:38700->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.138+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1942405581453059458) connection "127.0.0.1:38700" response transport failed `read tcp 127.0.0.1:38700->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.142+05:30 [Error] receiving packet: read tcp 127.0.0.1:38702->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.142+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1942405581453059458) connection "127.0.0.1:38702" response transport failed `read tcp 127.0.0.1:38702->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.147+05:30 [Error] receiving packet: read tcp 127.0.0.1:38020->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.147+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1942405581453059458) connection "127.0.0.1:38020" response transport failed `read tcp 127.0.0.1:38020->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.151+05:30 [Error] receiving packet: read tcp 127.0.0.1:38022->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.151+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1942405581453059458) connection "127.0.0.1:38022" response transport failed `read tcp 127.0.0.1:38022->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.155+05:30 [Error] receiving packet: read tcp 127.0.0.1:38708->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.155+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1942405581453059458) connection "127.0.0.1:38708" response transport failed `read tcp 127.0.0.1:38708->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.167+05:30 [Error] receiving packet: read tcp 127.0.0.1:38026->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.167+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2073673567259179909) connection "127.0.0.1:38026" response transport failed `read tcp 127.0.0.1:38026->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.172+05:30 [Error] receiving packet: read tcp 127.0.0.1:38712->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.172+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2073673567259179909) connection "127.0.0.1:38712" response transport failed `read tcp 127.0.0.1:38712->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.224+05:30 [Error] receiving packet: read tcp 127.0.0.1:38722->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.224+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6057842961127898817) connection "127.0.0.1:38722" response transport failed `read tcp 127.0.0.1:38722->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.319+05:30 [Error] receiving packet: read tcp 127.0.0.1:38726->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.319+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1670307720906734784) connection "127.0.0.1:38726" response transport failed `read tcp 127.0.0.1:38726->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.455+05:30 [Error] receiving packet: read tcp 127.0.0.1:38030->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.455+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5292721012260161116) connection "127.0.0.1:38030" response transport failed `read tcp 127.0.0.1:38030->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.484+05:30 [Error] receiving packet: read tcp 127.0.0.1:38742->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.484+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8202392509089625211) connection "127.0.0.1:38742" response transport failed `read tcp 127.0.0.1:38742->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.545+05:30 [Error] receiving packet: read tcp 127.0.0.1:38756->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.545+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7017246075770377805) connection "127.0.0.1:38756" response transport failed `read tcp 127.0.0.1:38756->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.575+05:30 [Error] receiving packet: read tcp 127.0.0.1:38760->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.575+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(852510710779153952) connection "127.0.0.1:38760" response transport failed `read tcp 127.0.0.1:38760->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.610+05:30 [Error] receiving packet: read tcp 127.0.0.1:38070->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.610+05:30 [Error] receiving packet: read tcp 127.0.0.1:38762->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.610+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7051157559868694612) connection "127.0.0.1:38070" response transport failed `read tcp 127.0.0.1:38070->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.610+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7051157559868694612) connection "127.0.0.1:38762" response transport failed `read tcp 127.0.0.1:38762->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.629+05:30 [Error] receiving packet: read tcp 127.0.0.1:38764->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.629+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7051157559868694612) connection "127.0.0.1:38764" response transport failed `read tcp 127.0.0.1:38764->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.655+05:30 [Error] receiving packet: read tcp 127.0.0.1:38766->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.655+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3023130209419919117) connection "127.0.0.1:38766" response transport failed `read tcp 127.0.0.1:38766->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.662+05:30 [Error] receiving packet: read tcp 127.0.0.1:38770->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.662+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3023130209419919117) connection "127.0.0.1:38770" response transport failed `read tcp 127.0.0.1:38770->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.674+05:30 [Error] receiving packet: read tcp 127.0.0.1:38772->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.674+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1469530087264482958) connection "127.0.0.1:38772" response transport failed `read tcp 127.0.0.1:38772->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.691+05:30 [Error] receiving packet: read tcp 127.0.0.1:38774->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.691+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6812749452024208019) connection "127.0.0.1:38774" response transport failed `read tcp 127.0.0.1:38774->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.704+05:30 [Error] receiving packet: read tcp 127.0.0.1:38084->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.704+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8534569999491050982) connection "127.0.0.1:38084" response transport failed `read tcp 127.0.0.1:38084->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.712+05:30 [Error] receiving packet: read tcp 127.0.0.1:38776->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.712+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8388046731895769134) connection "127.0.0.1:38776" response transport failed `read tcp 127.0.0.1:38776->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.727+05:30 [Error] receiving packet: read tcp 127.0.0.1:38780->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.727+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5680363568943919305) connection "127.0.0.1:38780" response transport failed `read tcp 127.0.0.1:38780->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.727+05:30 [Error] receiving packet: read tcp 127.0.0.1:38094->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.727+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5680363568943919305) connection "127.0.0.1:38094" response transport failed `read tcp 127.0.0.1:38094->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.732+05:30 [Error] receiving packet: read tcp 127.0.0.1:38098->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.732+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5680363568943919305) connection "127.0.0.1:38098" response transport failed `read tcp 127.0.0.1:38098->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.742+05:30 [Error] receiving packet: read tcp 127.0.0.1:38786->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.742+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5680363568943919305) connection "127.0.0.1:38786" response transport failed `read tcp 127.0.0.1:38786->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.742+05:30 [Error] receiving packet: read tcp 127.0.0.1:38784->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.742+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5680363568943919305) connection "127.0.0.1:38784" response transport failed `read tcp 127.0.0.1:38784->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.780+05:30 [Error] receiving packet: read tcp 127.0.0.1:38106->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.780+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5702515904491050783) connection "127.0.0.1:38106" response transport failed `read tcp 127.0.0.1:38106->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.780+05:30 [Error] receiving packet: read tcp 127.0.0.1:38788->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.780+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5702515904491050783) connection "127.0.0.1:38788" response transport failed `read tcp 127.0.0.1:38788->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.801+05:30 [Error] receiving packet: read tcp 127.0.0.1:38794->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.801+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7409525128970222677) connection "127.0.0.1:38794" response transport failed `read tcp 127.0.0.1:38794->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.825+05:30 [Error] receiving packet: read tcp 127.0.0.1:38796->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.825+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7646079568606365541) connection "127.0.0.1:38796" response transport failed `read tcp 127.0.0.1:38796->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.825+05:30 [Error] receiving packet: read tcp 127.0.0.1:38108->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.825+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7646079568606365541) connection "127.0.0.1:38108" response transport failed `read tcp 127.0.0.1:38108->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.831+05:30 [Error] receiving packet: read tcp 127.0.0.1:38800->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.832+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4977693061489346045) connection "127.0.0.1:38800" response transport failed `read tcp 127.0.0.1:38800->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.836+05:30 [Error] receiving packet: read tcp 127.0.0.1:38802->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.836+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4977693061489346045) connection "127.0.0.1:38802" response transport failed `read tcp 127.0.0.1:38802->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.845+05:30 [Error] receiving packet: read tcp 127.0.0.1:38804->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.845+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3268638841712102347) connection "127.0.0.1:38804" response transport failed `read tcp 127.0.0.1:38804->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.856+05:30 [Error] receiving packet: read tcp 127.0.0.1:38806->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.856+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5989731462191360912) connection "127.0.0.1:38806" response transport failed `read tcp 127.0.0.1:38806->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:56.858+05:30 [Error] receiving packet: read tcp 127.0.0.1:38114->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.858+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5989731462191360912) connection "127.0.0.1:38114" response transport failed `read tcp 127.0.0.1:38114->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.867+05:30 [Error] receiving packet: read tcp 127.0.0.1:38124->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.867+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8016063816256493362) connection "127.0.0.1:38124" response transport failed `read tcp 127.0.0.1:38124->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.887+05:30 [Error] receiving packet: read tcp 127.0.0.1:38126->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.887+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8430592152519253647) connection "127.0.0.1:38126" response transport failed `read tcp 127.0.0.1:38126->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.897+05:30 [Error] receiving packet: read tcp 127.0.0.1:38130->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.897+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5052900091328289977) connection "127.0.0.1:38130" response transport failed `read tcp 127.0.0.1:38130->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.929+05:30 [Error] receiving packet: read tcp 127.0.0.1:38132->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:56.929+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-317710254096063752) connection "127.0.0.1:38132" response transport failed `read tcp 127.0.0.1:38132->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:56.970+05:30 [Error] receiving packet: read tcp 127.0.0.1:38812->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:56.970+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3230321136372757627) connection "127.0.0.1:38812" response transport failed `read tcp 127.0.0.1:38812->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.058+05:30 [Error] receiving packet: read tcp 127.0.0.1:38134->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:57.058+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3234642006024147017) connection "127.0.0.1:38134" response transport failed `read tcp 127.0.0.1:38134->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:57.060+05:30 [Error] receiving packet: read tcp 127.0.0.1:38820->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.060+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3053575305849841170) connection "127.0.0.1:38820" response transport failed `read tcp 127.0.0.1:38820->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.108+05:30 [Error] receiving packet: read tcp 127.0.0.1:38826->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.108+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8589999560235413416) connection "127.0.0.1:38826" response transport failed `read tcp 127.0.0.1:38826->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.119+05:30 [Error] receiving packet: read tcp 127.0.0.1:38144->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:57.119+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(972032377251354606) connection "127.0.0.1:38144" response transport failed `read tcp 127.0.0.1:38144->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:57.124+05:30 [Error] receiving packet: read tcp 127.0.0.1:38146->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:57.124+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(972032377251354606) connection "127.0.0.1:38146" response transport failed `read tcp 127.0.0.1:38146->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:57.155+05:30 [Error] receiving packet: read tcp 127.0.0.1:38834->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.155+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5072318203518976522) connection "127.0.0.1:38834" response transport failed `read tcp 127.0.0.1:38834->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.155+05:30 [Error] receiving packet: read tcp 127.0.0.1:38148->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:57.155+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5072318203518976522) connection "127.0.0.1:38148" response transport failed `read tcp 127.0.0.1:38148->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:57.181+05:30 [Error] receiving packet: read tcp 127.0.0.1:38838->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.181+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5620534318307065179) connection "127.0.0.1:38838" response transport failed `read tcp 127.0.0.1:38838->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.222+05:30 [Error] receiving packet: read tcp 127.0.0.1:38152->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:57.222+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7350628627218721228) connection "127.0.0.1:38152" response transport failed `read tcp 127.0.0.1:38152->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:57.245+05:30 [Error] receiving packet: read tcp 127.0.0.1:38158->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:57.245+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6041831662630055524) connection "127.0.0.1:38158" response transport failed `read tcp 127.0.0.1:38158->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:57.261+05:30 [Error] receiving packet: read tcp 127.0.0.1:38840->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.261+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5198434863132745916) connection "127.0.0.1:38840" response transport failed `read tcp 127.0.0.1:38840->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.268+05:30 [Error] receiving packet: read tcp 127.0.0.1:38844->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.268+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5198434863132745916) connection "127.0.0.1:38844" response transport failed `read tcp 127.0.0.1:38844->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:57.281+05:30 [Error] receiving packet: read tcp 127.0.0.1:38846->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:57.281+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1527799123058778972) connection "127.0.0.1:38846" response transport failed `read tcp 127.0.0.1:38846->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.678+05:30 [Error] receiving packet: read tcp 127.0.0.1:38174->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:59.678+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6325213548356414021) connection "127.0.0.1:38174" response transport failed `read tcp 127.0.0.1:38174->127.0.0.1:9113: i/o timeout`
2023/01/06 23:48:59 Rebalance progress: 78.11842105263159
2023-01-06T23:48:59.720+05:30 [Error] receiving packet: read tcp 127.0.0.1:38848->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.720+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2998816932148086307) connection "127.0.0.1:38848" response transport failed `read tcp 127.0.0.1:38848->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.734+05:30 [Error] receiving packet: read tcp 127.0.0.1:38914->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.734+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3802731626002182848) connection "127.0.0.1:38914" response transport failed `read tcp 127.0.0.1:38914->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.749+05:30 [Error] receiving packet: read tcp 127.0.0.1:38916->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.749+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5174822568176234884) connection "127.0.0.1:38916" response transport failed `read tcp 127.0.0.1:38916->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.751+05:30 [Error] receiving packet: read tcp 127.0.0.1:38228->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:59.751+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5174822568176234884) connection "127.0.0.1:38228" response transport failed `read tcp 127.0.0.1:38228->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:59.764+05:30 [Error] receiving packet: read tcp 127.0.0.1:38234->127.0.0.1:9113: i/o timeout
2023-01-06T23:48:59.764+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4166490157448884131) connection "127.0.0.1:38234" response transport failed `read tcp 127.0.0.1:38234->127.0.0.1:9113: i/o timeout`
2023-01-06T23:48:59.849+05:30 [Error] receiving packet: read tcp 127.0.0.1:38920->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.849+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(486537750097064168) connection "127.0.0.1:38920" response transport failed `read tcp 127.0.0.1:38920->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.853+05:30 [Error] receiving packet: read tcp 127.0.0.1:38926->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.853+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(486537750097064168) connection "127.0.0.1:38926" response transport failed `read tcp 127.0.0.1:38926->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.896+05:30 [Error] receiving packet: read tcp 127.0.0.1:38928->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.896+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(866418412057220796) connection "127.0.0.1:38928" response transport failed `read tcp 127.0.0.1:38928->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.911+05:30 [Error] receiving packet: read tcp 127.0.0.1:38938->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.911+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(866418412057220796) connection "127.0.0.1:38938" response transport failed `read tcp 127.0.0.1:38938->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.930+05:30 [Error] receiving packet: read tcp 127.0.0.1:38942->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.930+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(9164564888996407960) connection "127.0.0.1:38942" response transport failed `read tcp 127.0.0.1:38942->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.959+05:30 [Error] receiving packet: read tcp 127.0.0.1:38944->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.959+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-555268615047379992) connection "127.0.0.1:38944" response transport failed `read tcp 127.0.0.1:38944->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.963+05:30 [Error] receiving packet: read tcp 127.0.0.1:38950->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.963+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-555268615047379992) connection "127.0.0.1:38950" response transport failed `read tcp 127.0.0.1:38950->127.0.0.1:9107: i/o timeout`
2023-01-06T23:48:59.975+05:30 [Error] receiving packet: read tcp 127.0.0.1:38952->127.0.0.1:9107: i/o timeout
2023-01-06T23:48:59.975+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1196141536164985847) connection "127.0.0.1:38952" response transport failed `read tcp 127.0.0.1:38952->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.088+05:30 [Error] receiving packet: read tcp 127.0.0.1:38954->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.088+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3216888739541186721) connection "127.0.0.1:38954" response transport failed `read tcp 127.0.0.1:38954->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.111+05:30 [Error] receiving packet: read tcp 127.0.0.1:38958->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.111+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6254860647529829990) connection "127.0.0.1:38958" response transport failed `read tcp 127.0.0.1:38958->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.150+05:30 [Error] receiving packet: read tcp 127.0.0.1:38964->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.150+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4044220629137110192) connection "127.0.0.1:38964" response transport failed `read tcp 127.0.0.1:38964->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.150+05:30 [Error] receiving packet: read tcp 127.0.0.1:38240->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.150+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4044220629137110192) connection "127.0.0.1:38240" response transport failed `read tcp 127.0.0.1:38240->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.150+05:30 [Error] receiving packet: read tcp 127.0.0.1:38962->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.150+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4044220629137110192) connection "127.0.0.1:38962" response transport failed `read tcp 127.0.0.1:38962->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.155+05:30 [Error] receiving packet: read tcp 127.0.0.1:38284->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.155+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4044220629137110192) connection "127.0.0.1:38284" response transport failed `read tcp 127.0.0.1:38284->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.175+05:30 [Error] receiving packet: read tcp 127.0.0.1:38970->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.175+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2114578552952327343) connection "127.0.0.1:38970" response transport failed `read tcp 127.0.0.1:38970->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.178+05:30 [Error] receiving packet: read tcp 127.0.0.1:38972->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.178+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2114578552952327343) connection "127.0.0.1:38972" response transport failed `read tcp 127.0.0.1:38972->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.181+05:30 [Error] receiving packet: read tcp 127.0.0.1:38974->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.181+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2114578552952327343) connection "127.0.0.1:38974" response transport failed `read tcp 127.0.0.1:38974->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.215+05:30 [Error] receiving packet: read tcp 127.0.0.1:38978->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.215+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4073816806558542027) connection "127.0.0.1:38978" response transport failed `read tcp 127.0.0.1:38978->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.216+05:30 [Error] receiving packet: read tcp 127.0.0.1:38292->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.216+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4073816806558542027) connection "127.0.0.1:38292" response transport failed `read tcp 127.0.0.1:38292->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.219+05:30 [Error] receiving packet: read tcp 127.0.0.1:38980->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.219+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4073816806558542027) connection "127.0.0.1:38980" response transport failed `read tcp 127.0.0.1:38980->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.223+05:30 [Error] receiving packet: read tcp 127.0.0.1:38298->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.223+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4073816806558542027) connection "127.0.0.1:38298" response transport failed `read tcp 127.0.0.1:38298->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.270+05:30 [Error] receiving packet: read tcp 127.0.0.1:38302->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.271+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4022104975788916741) connection "127.0.0.1:38302" response transport failed `read tcp 127.0.0.1:38302->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.283+05:30 [Error] receiving packet: read tcp 127.0.0.1:38306->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.283+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(882991435498766964) connection "127.0.0.1:38306" response transport failed `read tcp 127.0.0.1:38306->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.342+05:30 [Error] receiving packet: read tcp 127.0.0.1:38984->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.342+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5270562382373320304) connection "127.0.0.1:38984" response transport failed `read tcp 127.0.0.1:38984->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.351+05:30 [Error] receiving packet: read tcp 127.0.0.1:38996->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.352+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4903832814111562103) connection "127.0.0.1:38996" response transport failed `read tcp 127.0.0.1:38996->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.365+05:30 [Error] receiving packet: read tcp 127.0.0.1:38308->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.366+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4992102276505258760) connection "127.0.0.1:38308" response transport failed `read tcp 127.0.0.1:38308->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.367+05:30 [Error] receiving packet: read tcp 127.0.0.1:38316->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.367+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4992102276505258760) connection "127.0.0.1:38316" response transport failed `read tcp 127.0.0.1:38316->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.368+05:30 [Error] receiving packet: read tcp 127.0.0.1:38998->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.368+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4992102276505258760) connection "127.0.0.1:38998" response transport failed `read tcp 127.0.0.1:38998->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.371+05:30 [Error] receiving packet: read tcp 127.0.0.1:38318->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.371+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4992102276505258760) connection "127.0.0.1:38318" response transport failed `read tcp 127.0.0.1:38318->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.375+05:30 [Error] receiving packet: read tcp 127.0.0.1:39004->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.375+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4992102276505258760) connection "127.0.0.1:39004" response transport failed `read tcp 127.0.0.1:39004->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.379+05:30 [Error] receiving packet: read tcp 127.0.0.1:39006->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.379+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4992102276505258760) connection "127.0.0.1:39006" response transport failed `read tcp 127.0.0.1:39006->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.403+05:30 [Error] receiving packet: read tcp 127.0.0.1:38326->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.403+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1886272075277711134) connection "127.0.0.1:38326" response transport failed `read tcp 127.0.0.1:38326->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.444+05:30 [Error] receiving packet: read tcp 127.0.0.1:39008->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.445+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1379689885485156146) connection "127.0.0.1:39008" response transport failed `read tcp 127.0.0.1:39008->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.466+05:30 [Error] receiving packet: read tcp 127.0.0.1:38328->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.466+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1023090732858042879) connection "127.0.0.1:38328" response transport failed `read tcp 127.0.0.1:38328->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.467+05:30 [Error] receiving packet: read tcp 127.0.0.1:39014->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.467+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1023090732858042879) connection "127.0.0.1:39014" response transport failed `read tcp 127.0.0.1:39014->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.467+05:30 [Error] receiving packet: read tcp 127.0.0.1:38332->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.467+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1023090732858042879) connection "127.0.0.1:38332" response transport failed `read tcp 127.0.0.1:38332->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.471+05:30 [Error] receiving packet: read tcp 127.0.0.1:38334->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.471+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1023090732858042879) connection "127.0.0.1:38334" response transport failed `read tcp 127.0.0.1:38334->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.489+05:30 [Error] receiving packet: read tcp 127.0.0.1:38338->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.489+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3782622640558557713) connection "127.0.0.1:38338" response transport failed `read tcp 127.0.0.1:38338->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.490+05:30 [Error] receiving packet: read tcp 127.0.0.1:39020->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.490+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3782622640558557713) connection "127.0.0.1:39020" response transport failed `read tcp 127.0.0.1:39020->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.494+05:30 [Error] receiving packet: read tcp 127.0.0.1:38340->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.494+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3782622640558557713) connection "127.0.0.1:38340" response transport failed `read tcp 127.0.0.1:38340->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.507+05:30 [Error] receiving packet: read tcp 127.0.0.1:39026->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.508+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7288778214705859853) connection "127.0.0.1:39026" response transport failed `read tcp 127.0.0.1:39026->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.511+05:30 [Error] receiving packet: read tcp 127.0.0.1:38344->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.511+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7288778214705859853) connection "127.0.0.1:38344" response transport failed `read tcp 127.0.0.1:38344->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.516+05:30 [Error] receiving packet: read tcp 127.0.0.1:39030->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.516+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7288778214705859853) connection "127.0.0.1:39030" response transport failed `read tcp 127.0.0.1:39030->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.521+05:30 [Error] receiving packet: read tcp 127.0.0.1:39032->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.521+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7288778214705859853) connection "127.0.0.1:39032" response transport failed `read tcp 127.0.0.1:39032->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.594+05:30 [Error] receiving packet: read tcp 127.0.0.1:39034->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.594+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3286713617219754564) connection "127.0.0.1:39034" response transport failed `read tcp 127.0.0.1:39034->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.630+05:30 [Error] receiving packet: read tcp 127.0.0.1:38352->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.630+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4399762918185642200) connection "127.0.0.1:38352" response transport failed `read tcp 127.0.0.1:38352->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.631+05:30 [Error] receiving packet: read tcp 127.0.0.1:39040->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.631+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4399762918185642200) connection "127.0.0.1:39040" response transport failed `read tcp 127.0.0.1:39040->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.647+05:30 [Error] receiving packet: read tcp 127.0.0.1:39042->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.647+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7510221555574942266) connection "127.0.0.1:39042" response transport failed `read tcp 127.0.0.1:39042->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.700+05:30 [Error] receiving packet: read tcp 127.0.0.1:38360->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.701+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1957422079394315153) connection "127.0.0.1:38360" response transport failed `read tcp 127.0.0.1:38360->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.722+05:30 [Error] receiving packet: read tcp 127.0.0.1:38364->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.722+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1305358119645240494) connection "127.0.0.1:38364" response transport failed `read tcp 127.0.0.1:38364->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.739+05:30 [Error] receiving packet: read tcp 127.0.0.1:38366->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.740+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3716695018260919512) connection "127.0.0.1:38366" response transport failed `read tcp 127.0.0.1:38366->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.745+05:30 [Error] receiving packet: read tcp 127.0.0.1:38368->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.745+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3716695018260919512) connection "127.0.0.1:38368" response transport failed `read tcp 127.0.0.1:38368->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.770+05:30 [Error] receiving packet: read tcp 127.0.0.1:39046->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.771+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1403507799578235256) connection "127.0.0.1:39046" response transport failed `read tcp 127.0.0.1:39046->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.773+05:30 [Error] receiving packet: read tcp 127.0.0.1:38374->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.773+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1403507799578235256) connection "127.0.0.1:38374" response transport failed `read tcp 127.0.0.1:38374->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.815+05:30 [Error] receiving packet: read tcp 127.0.0.1:38378->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.815+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5473355326618992260) connection "127.0.0.1:38378" response transport failed `read tcp 127.0.0.1:38378->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.816+05:30 [Error] receiving packet: read tcp 127.0.0.1:39060->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.816+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5473355326618992260) connection "127.0.0.1:39060" response transport failed `read tcp 127.0.0.1:39060->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.833+05:30 [Error] receiving packet: read tcp 127.0.0.1:38380->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:00.834+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5473355326618992260) connection "127.0.0.1:38380" response transport failed `read tcp 127.0.0.1:38380->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:00.882+05:30 [Error] receiving packet: read tcp 127.0.0.1:39066->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.882+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5473355326618992260) connection "127.0.0.1:39066" response transport failed `read tcp 127.0.0.1:39066->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:00.984+05:30 [Error] receiving packet: read tcp 127.0.0.1:39068->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:00.984+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8853304889305871618) connection "127.0.0.1:39068" response transport failed `read tcp 127.0.0.1:39068->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.057+05:30 [Error] receiving packet: read tcp 127.0.0.1:39076->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.057+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2923493005655077931) connection "127.0.0.1:39076" response transport failed `read tcp 127.0.0.1:39076->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.076+05:30 [Error] receiving packet: read tcp 127.0.0.1:38390->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.076+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4132182884325206381) connection "127.0.0.1:38390" response transport failed `read tcp 127.0.0.1:38390->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.132+05:30 [Error] receiving packet: read tcp 127.0.0.1:38400->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.133+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9027739421320405824) connection "127.0.0.1:38400" response transport failed `read tcp 127.0.0.1:38400->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.137+05:30 [Error] receiving packet: read tcp 127.0.0.1:38404->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.137+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9027739421320405824) connection "127.0.0.1:38404" response transport failed `read tcp 127.0.0.1:38404->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.142+05:30 [Error] receiving packet: read tcp 127.0.0.1:38406->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.142+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9027739421320405824) connection "127.0.0.1:38406" response transport failed `read tcp 127.0.0.1:38406->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.143+05:30 [Error] receiving packet: read tcp 127.0.0.1:39080->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.143+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-9027739421320405824) connection "127.0.0.1:39080" response transport failed `read tcp 127.0.0.1:39080->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.148+05:30 [Error] receiving packet: read tcp 127.0.0.1:38408->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.148+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9027739421320405824) connection "127.0.0.1:38408" response transport failed `read tcp 127.0.0.1:38408->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.166+05:30 [Error] receiving packet: read tcp 127.0.0.1:38412->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.166+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7722848372385569235) connection "127.0.0.1:38412" response transport failed `read tcp 127.0.0.1:38412->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.166+05:30 [Error] receiving packet: read tcp 127.0.0.1:39094->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.166+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7722848372385569235) connection "127.0.0.1:39094" response transport failed `read tcp 127.0.0.1:39094->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.183+05:30 [Error] receiving packet: read tcp 127.0.0.1:39098->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.183+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7722848372385569235) connection "127.0.0.1:39098" response transport failed `read tcp 127.0.0.1:39098->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.186+05:30 [Error] receiving packet: read tcp 127.0.0.1:39100->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.186+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7722848372385569235) connection "127.0.0.1:39100" response transport failed `read tcp 127.0.0.1:39100->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.188+05:30 [Error] receiving packet: read tcp 127.0.0.1:39102->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.188+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7722848372385569235) connection "127.0.0.1:39102" response transport failed `read tcp 127.0.0.1:39102->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.200+05:30 [Error] receiving packet: read tcp 127.0.0.1:39104->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.200+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7722848372385569235) connection "127.0.0.1:39104" response transport failed `read tcp 127.0.0.1:39104->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.202+05:30 [Error] receiving packet: read tcp 127.0.0.1:39106->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.202+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7722848372385569235) connection "127.0.0.1:39106" response transport failed `read tcp 127.0.0.1:39106->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.205+05:30 [Error] receiving packet: read tcp 127.0.0.1:38424->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.205+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7722848372385569235) connection "127.0.0.1:38424" response transport failed `read tcp 127.0.0.1:38424->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.229+05:30 [Error] receiving packet: read tcp 127.0.0.1:38430->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.229+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1834803705495518614) connection "127.0.0.1:38430" response transport failed `read tcp 127.0.0.1:38430->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.230+05:30 [Error] receiving packet: read tcp 127.0.0.1:38426->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.230+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1834803705495518614) connection "127.0.0.1:38426" response transport failed `read tcp 127.0.0.1:38426->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.243+05:30 [Error] receiving packet: read tcp 127.0.0.1:38432->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.243+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5126959808738538175) connection "127.0.0.1:38432" response transport failed `read tcp 127.0.0.1:38432->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.255+05:30 [Error] receiving packet: read tcp 127.0.0.1:38434->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.255+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5126959808738538175) connection "127.0.0.1:38434" response transport failed `read tcp 127.0.0.1:38434->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.294+05:30 [Error] receiving packet: read tcp 127.0.0.1:39112->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.294+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6105826639350783901) connection "127.0.0.1:39112" response transport failed `read tcp 127.0.0.1:39112->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.304+05:30 [Error] receiving packet: read tcp 127.0.0.1:38440->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.304+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6105826639350783901) connection "127.0.0.1:38440" response transport failed `read tcp 127.0.0.1:38440->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.304+05:30 [Error] receiving packet: read tcp 127.0.0.1:39138->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.304+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6105826639350783901) connection "127.0.0.1:39138" response transport failed `read tcp 127.0.0.1:39138->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.307+05:30 [Error] receiving packet: read tcp 127.0.0.1:38456->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.307+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6105826639350783901) connection "127.0.0.1:38456" response transport failed `read tcp 127.0.0.1:38456->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.332+05:30 [Error] receiving packet: read tcp 127.0.0.1:39144->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.332+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4066233974722424684) connection "127.0.0.1:39144" response transport failed `read tcp 127.0.0.1:39144->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.332+05:30 [Error] receiving packet: read tcp 127.0.0.1:38458->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.332+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4066233974722424684) connection "127.0.0.1:38458" response transport failed `read tcp 127.0.0.1:38458->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.337+05:30 [Error] receiving packet: read tcp 127.0.0.1:39146->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.337+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4066233974722424684) connection "127.0.0.1:39146" response transport failed `read tcp 127.0.0.1:39146->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.354+05:30 [Error] receiving packet: read tcp 127.0.0.1:38466->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.354+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2977861565404268696) connection "127.0.0.1:38466" response transport failed `read tcp 127.0.0.1:38466->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.373+05:30 [Error] receiving packet: read tcp 127.0.0.1:39148->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.374+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4458835338649852775) connection "127.0.0.1:39148" response transport failed `read tcp 127.0.0.1:39148->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.379+05:30 [Error] receiving packet: read tcp 127.0.0.1:38468->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.379+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4458835338649852775) connection "127.0.0.1:38468" response transport failed `read tcp 127.0.0.1:38468->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.379+05:30 [Error] receiving packet: read tcp 127.0.0.1:39154->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.379+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4458835338649852775) connection "127.0.0.1:39154" response transport failed `read tcp 127.0.0.1:39154->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.404+05:30 [Error] receiving packet: read tcp 127.0.0.1:38474->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.404+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-409060443391987637) connection "127.0.0.1:38474" response transport failed `read tcp 127.0.0.1:38474->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.408+05:30 [Error] receiving packet: read tcp 127.0.0.1:38478->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.408+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-409060443391987637) connection "127.0.0.1:38478" response transport failed `read tcp 127.0.0.1:38478->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.409+05:30 [Error] receiving packet: read tcp 127.0.0.1:39156->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:01.409+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-409060443391987637) connection "127.0.0.1:39156" response transport failed `read tcp 127.0.0.1:39156->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:01.411+05:30 [Error] receiving packet: read tcp 127.0.0.1:38480->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.412+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-409060443391987637) connection "127.0.0.1:38480" response transport failed `read tcp 127.0.0.1:38480->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.439+05:30 [Error] receiving packet: read tcp 127.0.0.1:38482->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.439+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5091944853007561143) connection "127.0.0.1:38482" response transport failed `read tcp 127.0.0.1:38482->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.475+05:30 [Error] receiving packet: read tcp 127.0.0.1:38488->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.475+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3440803613965995040) connection "127.0.0.1:38488" response transport failed `read tcp 127.0.0.1:38488->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.509+05:30 [Error] receiving packet: read tcp 127.0.0.1:38492->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.509+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2808176610324440464) connection "127.0.0.1:38492" response transport failed `read tcp 127.0.0.1:38492->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:01.512+05:30 [Error] receiving packet: read tcp 127.0.0.1:38498->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:01.512+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2808176610324440464) connection "127.0.0.1:38498" response transport failed `read tcp 127.0.0.1:38498->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:03.913+05:30 [Error] receiving packet: read tcp 127.0.0.1:38502->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:03.913+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3231226565364151390) connection "127.0.0.1:38502" response transport failed `read tcp 127.0.0.1:38502->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:03.913+05:30 [Error] receiving packet: read tcp 127.0.0.1:39168->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:03.913+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3231226565364151390) connection "127.0.0.1:39168" response transport failed `read tcp 127.0.0.1:39168->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:03.924+05:30 [Error] receiving packet: read tcp 127.0.0.1:39224->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:03.924+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6140948300053049586) connection "127.0.0.1:39224" response transport failed `read tcp 127.0.0.1:39224->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:03.928+05:30 [Error] receiving packet: read tcp 127.0.0.1:39226->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:03.928+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6140948300053049586) connection "127.0.0.1:39226" response transport failed `read tcp 127.0.0.1:39226->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:03.932+05:30 [Error] receiving packet: read tcp 127.0.0.1:38544->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:03.932+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6140948300053049586) connection "127.0.0.1:38544" response transport failed `read tcp 127.0.0.1:38544->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:03.962+05:30 [Error] receiving packet: read tcp 127.0.0.1:39232->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:03.962+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6616450812863912370) connection "127.0.0.1:39232" response transport failed `read tcp 127.0.0.1:39232->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.009+05:30 [Error] receiving packet: read tcp 127.0.0.1:38546->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.009+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6219149808335085319) connection "127.0.0.1:38546" response transport failed `read tcp 127.0.0.1:38546->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.016+05:30 [Error] receiving packet: read tcp 127.0.0.1:39242->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.016+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3857220552541274258) connection "127.0.0.1:39242" response transport failed `read tcp 127.0.0.1:39242->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.020+05:30 [Error] receiving packet: read tcp 127.0.0.1:38560->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.020+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3857220552541274258) connection "127.0.0.1:38560" response transport failed `read tcp 127.0.0.1:38560->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.038+05:30 [Error] receiving packet: read tcp 127.0.0.1:39250->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.038+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8885678044809522381) connection "127.0.0.1:39250" response transport failed `read tcp 127.0.0.1:39250->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.039+05:30 [Error] receiving packet: read tcp 127.0.0.1:39252->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.039+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-8885678044809522381) connection "127.0.0.1:39252" response transport failed `read tcp 127.0.0.1:39252->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.056+05:30 [Error] receiving packet: read tcp 127.0.0.1:38562->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.056+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4689741035888597411) connection "127.0.0.1:38562" response transport failed `read tcp 127.0.0.1:38562->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.056+05:30 [Error] receiving packet: read tcp 127.0.0.1:39254->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.056+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-4689741035888597411) connection "127.0.0.1:39254" response transport failed `read tcp 127.0.0.1:39254->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.073+05:30 [Error] receiving packet: read tcp 127.0.0.1:38572->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.073+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5598328970199983348) connection "127.0.0.1:38572" response transport failed `read tcp 127.0.0.1:38572->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.094+05:30 [Error] receiving packet: read tcp 127.0.0.1:39260->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.095+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8790596462037784432) connection "127.0.0.1:39260" response transport failed `read tcp 127.0.0.1:39260->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.100+05:30 [Error] receiving packet: read tcp 127.0.0.1:39262->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.100+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8790596462037784432) connection "127.0.0.1:39262" response transport failed `read tcp 127.0.0.1:39262->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.174+05:30 [Error] receiving packet: read tcp 127.0.0.1:38584->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.174+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2123091273449333496) connection "127.0.0.1:38584" response transport failed `read tcp 127.0.0.1:38584->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.201+05:30 [Error] receiving packet: read tcp 127.0.0.1:39264->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.201+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4189702995251472029) connection "127.0.0.1:39264" response transport failed `read tcp 127.0.0.1:39264->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.201+05:30 [Error] receiving packet: read tcp 127.0.0.1:38586->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.201+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4189702995251472029) connection "127.0.0.1:38586" response transport failed `read tcp 127.0.0.1:38586->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.218+05:30 [Error] receiving packet: read tcp 127.0.0.1:38590->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.218+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-142030582747793392) connection "127.0.0.1:38590" response transport failed `read tcp 127.0.0.1:38590->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.222+05:30 [Error] receiving packet: read tcp 127.0.0.1:38592->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.222+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-142030582747793392) connection "127.0.0.1:38592" response transport failed `read tcp 127.0.0.1:38592->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.223+05:30 [Error] receiving packet: read tcp 127.0.0.1:39272->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.223+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-142030582747793392) connection "127.0.0.1:39272" response transport failed `read tcp 127.0.0.1:39272->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.227+05:30 [Error] receiving packet: read tcp 127.0.0.1:39278->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.227+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-142030582747793392) connection "127.0.0.1:39278" response transport failed `read tcp 127.0.0.1:39278->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.290+05:30 [Error] receiving packet: read tcp 127.0.0.1:39282->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.290+05:30 [Error] receiving packet: read tcp 127.0.0.1:38596->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.290+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3767750134517129520) connection "127.0.0.1:39282" response transport failed `read tcp 127.0.0.1:39282->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.290+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3767750134517129520) connection "127.0.0.1:38596" response transport failed `read tcp 127.0.0.1:38596->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.292+05:30 [Error] receiving packet: read tcp 127.0.0.1:38602->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.292+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3767750134517129520) connection "127.0.0.1:38602" response transport failed `read tcp 127.0.0.1:38602->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.297+05:30 [Error] receiving packet: read tcp 127.0.0.1:39288->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.297+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3767750134517129520) connection "127.0.0.1:39288" response transport failed `read tcp 127.0.0.1:39288->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.302+05:30 [Error] receiving packet: read tcp 127.0.0.1:38606->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.302+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3767750134517129520) connection "127.0.0.1:38606" response transport failed `read tcp 127.0.0.1:38606->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.304+05:30 [Error] receiving packet: read tcp 127.0.0.1:38608->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.304+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3767750134517129520) connection "127.0.0.1:38608" response transport failed `read tcp 127.0.0.1:38608->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.309+05:30 [Error] receiving packet: read tcp 127.0.0.1:38610->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.309+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3767750134517129520) connection "127.0.0.1:38610" response transport failed `read tcp 127.0.0.1:38610->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.337+05:30 [Error] receiving packet: read tcp 127.0.0.1:38612->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.338+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8897042499360463931) connection "127.0.0.1:38612" response transport failed `read tcp 127.0.0.1:38612->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.355+05:30 [Error] receiving packet: read tcp 127.0.0.1:39300->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.355+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-479265174462772011) connection "127.0.0.1:39300" response transport failed `read tcp 127.0.0.1:39300->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.356+05:30 [Error] receiving packet: read tcp 127.0.0.1:38614->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.356+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-479265174462772011) connection "127.0.0.1:38614" response transport failed `read tcp 127.0.0.1:38614->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.431+05:30 [Error] receiving packet: read tcp 127.0.0.1:38618->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.431+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1267407441073310800) connection "127.0.0.1:38618" response transport failed `read tcp 127.0.0.1:38618->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.432+05:30 [Error] receiving packet: read tcp 127.0.0.1:39304->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.432+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1267407441073310800) connection "127.0.0.1:39304" response transport failed `read tcp 127.0.0.1:39304->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.453+05:30 [Error] receiving packet: read tcp 127.0.0.1:38626->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.453+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8949119034177655092) connection "127.0.0.1:38626" response transport failed `read tcp 127.0.0.1:38626->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.458+05:30 [Error] receiving packet: read tcp 127.0.0.1:38628->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.459+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8949119034177655092) connection "127.0.0.1:38628" response transport failed `read tcp 127.0.0.1:38628->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.473+05:30 [Error] receiving packet: read tcp 127.0.0.1:38630->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.473+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3045775941622362642) connection "127.0.0.1:38630" response transport failed `read tcp 127.0.0.1:38630->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.507+05:30 [Error] receiving packet: read tcp 127.0.0.1:38632->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.508+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5840553608838090744) connection "127.0.0.1:38632" response transport failed `read tcp 127.0.0.1:38632->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.528+05:30 [Error] receiving packet: read tcp 127.0.0.1:39318->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.528+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3147280505622356805) connection "127.0.0.1:39318" response transport failed `read tcp 127.0.0.1:39318->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.531+05:30 [Error] receiving packet: read tcp 127.0.0.1:39322->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.531+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3147280505622356805) connection "127.0.0.1:39322" response transport failed `read tcp 127.0.0.1:39322->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.532+05:30 [Error] receiving packet: read tcp 127.0.0.1:38636->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.532+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3147280505622356805) connection "127.0.0.1:38636" response transport failed `read tcp 127.0.0.1:38636->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.591+05:30 [Error] receiving packet: read tcp 127.0.0.1:38642->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.591+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2526975056932527042) connection "127.0.0.1:38642" response transport failed `read tcp 127.0.0.1:38642->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.592+05:30 [Error] receiving packet: read tcp 127.0.0.1:39324->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.592+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2526975056932527042) connection "127.0.0.1:39324" response transport failed `read tcp 127.0.0.1:39324->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.617+05:30 [Error] receiving packet: read tcp 127.0.0.1:39334->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.617+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3822843713303674619) connection "127.0.0.1:39334" response transport failed `read tcp 127.0.0.1:39334->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.621+05:30 [Error] receiving packet: read tcp 127.0.0.1:39336->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.621+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-3822843713303674619) connection "127.0.0.1:39336" response transport failed `read tcp 127.0.0.1:39336->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.656+05:30 [Error] receiving packet: read tcp 127.0.0.1:39338->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.656+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2583756894088912269) connection "127.0.0.1:39338" response transport failed `read tcp 127.0.0.1:39338->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.670+05:30 [Error] receiving packet: read tcp 127.0.0.1:38646->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.670+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2135040613721505133) connection "127.0.0.1:38646" response transport failed `read tcp 127.0.0.1:38646->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.673+05:30 [Error] receiving packet: read tcp 127.0.0.1:39342->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.673+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2135040613721505133) connection "127.0.0.1:39342" response transport failed `read tcp 127.0.0.1:39342->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.676+05:30 [Error] receiving packet: read tcp 127.0.0.1:39344->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.676+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2135040613721505133) connection "127.0.0.1:39344" response transport failed `read tcp 127.0.0.1:39344->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.681+05:30 [Error] receiving packet: read tcp 127.0.0.1:38662->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.681+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2135040613721505133) connection "127.0.0.1:38662" response transport failed `read tcp 127.0.0.1:38662->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.734+05:30 [Error] receiving packet: read tcp 127.0.0.1:39350->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.735+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8641793749279602293) connection "127.0.0.1:39350" response transport failed `read tcp 127.0.0.1:39350->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.735+05:30 [Error] receiving packet: read tcp 127.0.0.1:38664->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.735+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8641793749279602293) connection "127.0.0.1:38664" response transport failed `read tcp 127.0.0.1:38664->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.748+05:30 [Error] receiving packet: read tcp 127.0.0.1:39354->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.748+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6722940491221747893) connection "127.0.0.1:39354" response transport failed `read tcp 127.0.0.1:39354->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.765+05:30 [Error] receiving packet: read tcp 127.0.0.1:38668->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.765+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2264187888315298750) connection "127.0.0.1:38668" response transport failed `read tcp 127.0.0.1:38668->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.802+05:30 [Error] receiving packet: read tcp 127.0.0.1:38674->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.802+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5978265101740412037) connection "127.0.0.1:38674" response transport failed `read tcp 127.0.0.1:38674->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.815+05:30 [Error] receiving packet: read tcp 127.0.0.1:39356->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.815+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6735079372304233755) connection "127.0.0.1:39356" response transport failed `read tcp 127.0.0.1:39356->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.831+05:30 [Error] receiving packet: read tcp 127.0.0.1:38676->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.831+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4707415918647706239) connection "127.0.0.1:38676" response transport failed `read tcp 127.0.0.1:38676->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.854+05:30 [Error] receiving packet: read tcp 127.0.0.1:38680->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.854+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3521493282434970301) connection "127.0.0.1:38680" response transport failed `read tcp 127.0.0.1:38680->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.877+05:30 [Error] receiving packet: read tcp 127.0.0.1:38682->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.877+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(1447334586248717292) connection "127.0.0.1:38682" response transport failed `read tcp 127.0.0.1:38682->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.884+05:30 [Error] receiving packet: read tcp 127.0.0.1:39362->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.885+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1447334586248717292) connection "127.0.0.1:39362" response transport failed `read tcp 127.0.0.1:39362->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.913+05:30 [Error] receiving packet: read tcp 127.0.0.1:39370->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.914+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2108115991079423465) connection "127.0.0.1:39370" response transport failed `read tcp 127.0.0.1:39370->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.921+05:30 [Error] receiving packet: read tcp 127.0.0.1:39372->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.921+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(922492240538473106) connection "127.0.0.1:39372" response transport failed `read tcp 127.0.0.1:39372->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.925+05:30 [Error] receiving packet: read tcp 127.0.0.1:39374->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.925+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(922492240538473106) connection "127.0.0.1:39374" response transport failed `read tcp 127.0.0.1:39374->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.937+05:30 [Error] receiving packet: read tcp 127.0.0.1:38684->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:04.937+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7538388179463895148) connection "127.0.0.1:38684" response transport failed `read tcp 127.0.0.1:38684->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:04.943+05:30 [Error] receiving packet: read tcp 127.0.0.1:39376->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.943+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7538388179463895148) connection "127.0.0.1:39376" response transport failed `read tcp 127.0.0.1:39376->127.0.0.1:9107: i/o timeout`
2023/01/06 23:49:04 Rebalance progress: 78.12
2023-01-06T23:49:04.989+05:30 [Error] receiving packet: read tcp 127.0.0.1:39384->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.989+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8736805182000941720) connection "127.0.0.1:39384" response transport failed `read tcp 127.0.0.1:39384->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.992+05:30 [Error] receiving packet: read tcp 127.0.0.1:39386->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.993+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8736805182000941720) connection "127.0.0.1:39386" response transport failed `read tcp 127.0.0.1:39386->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:04.997+05:30 [Error] receiving packet: read tcp 127.0.0.1:39388->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:04.997+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8736805182000941720) connection "127.0.0.1:39388" response transport failed `read tcp 127.0.0.1:39388->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.003+05:30 [Error] receiving packet: read tcp 127.0.0.1:39390->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.003+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8736805182000941720) connection "127.0.0.1:39390" response transport failed `read tcp 127.0.0.1:39390->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.023+05:30 [Error] receiving packet: read tcp 127.0.0.1:39392->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.023+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8736805182000941720) connection "127.0.0.1:39392" response transport failed `read tcp 127.0.0.1:39392->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.079+05:30 [Error] receiving packet: read tcp 127.0.0.1:38694->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.079+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-3261440095419483376) connection "127.0.0.1:38694" response transport failed `read tcp 127.0.0.1:38694->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.089+05:30 [Error] receiving packet: read tcp 127.0.0.1:39394->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.089+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2954931515405988934) connection "127.0.0.1:39394" response transport failed `read tcp 127.0.0.1:39394->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.092+05:30 [Error] receiving packet: read tcp 127.0.0.1:39396->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.092+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2954931515405988934) connection "127.0.0.1:39396" response transport failed `read tcp 127.0.0.1:39396->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.112+05:30 [Error] receiving packet: read tcp 127.0.0.1:38714->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.112+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2954931515405988934) connection "127.0.0.1:38714" response transport failed `read tcp 127.0.0.1:38714->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.141+05:30 [Error] receiving packet: read tcp 127.0.0.1:38716->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.141+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8547804126392554355) connection "127.0.0.1:38716" response transport failed `read tcp 127.0.0.1:38716->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.157+05:30 [Error] receiving packet: read tcp 127.0.0.1:38720->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.157+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3179792726499937896) connection "127.0.0.1:38720" response transport failed `read tcp 127.0.0.1:38720->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.157+05:30 [Error] receiving packet: read tcp 127.0.0.1:39402->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.157+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3179792726499937896) connection "127.0.0.1:39402" response transport failed `read tcp 127.0.0.1:39402->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.163+05:30 [Error] receiving packet: read tcp 127.0.0.1:39406->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.163+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3179792726499937896) connection "127.0.0.1:39406" response transport failed `read tcp 127.0.0.1:39406->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.165+05:30 [Error] receiving packet: read tcp 127.0.0.1:38724->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.165+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3179792726499937896) connection "127.0.0.1:38724" response transport failed `read tcp 127.0.0.1:38724->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.169+05:30 [Error] receiving packet: read tcp 127.0.0.1:39410->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.169+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3179792726499937896) connection "127.0.0.1:39410" response transport failed `read tcp 127.0.0.1:39410->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.184+05:30 [Error] receiving packet: read tcp 127.0.0.1:38730->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.184+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(931086246305926545) connection "127.0.0.1:38730" response transport failed `read tcp 127.0.0.1:38730->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.201+05:30 [Error] receiving packet: read tcp 127.0.0.1:39412->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.201+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(3891253888652656174) connection "127.0.0.1:39412" response transport failed `read tcp 127.0.0.1:39412->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.223+05:30 [Error] receiving packet: read tcp 127.0.0.1:39418->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.224+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7902942657648216182) connection "127.0.0.1:39418" response transport failed `read tcp 127.0.0.1:39418->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.228+05:30 [Error] receiving packet: read tcp 127.0.0.1:39420->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.228+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7902942657648216182) connection "127.0.0.1:39420" response transport failed `read tcp 127.0.0.1:39420->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.236+05:30 [Error] receiving packet: read tcp 127.0.0.1:38732->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.236+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7902942657648216182) connection "127.0.0.1:38732" response transport failed `read tcp 127.0.0.1:38732->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.340+05:30 [Error] receiving packet: read tcp 127.0.0.1:39430->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.340+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7924042347654022737) connection "127.0.0.1:39430" response transport failed `read tcp 127.0.0.1:39430->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.370+05:30 [Error] receiving packet: read tcp 127.0.0.1:38740->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.370+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-589041377014699551) connection "127.0.0.1:38740" response transport failed `read tcp 127.0.0.1:38740->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.433+05:30 [Error] receiving packet: read tcp 127.0.0.1:38758->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.433+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7729745207785465198) connection "127.0.0.1:38758" response transport failed `read tcp 127.0.0.1:38758->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.447+05:30 [Error] receiving packet: read tcp 127.0.0.1:38760->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.447+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3963730431244084959) connection "127.0.0.1:38760" response transport failed `read tcp 127.0.0.1:38760->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.468+05:30 [Error] receiving packet: read tcp 127.0.0.1:38762->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.468+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(127713223982740993) connection "127.0.0.1:38762" response transport failed `read tcp 127.0.0.1:38762->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.484+05:30 [Error] receiving packet: read tcp 127.0.0.1:39436->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.484+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7839145607259671561) connection "127.0.0.1:39436" response transport failed `read tcp 127.0.0.1:39436->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.488+05:30 [Error] receiving packet: read tcp 127.0.0.1:39452->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.488+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7839145607259671561) connection "127.0.0.1:39452" response transport failed `read tcp 127.0.0.1:39452->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.492+05:30 [Error] receiving packet: read tcp 127.0.0.1:39454->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.492+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(7839145607259671561) connection "127.0.0.1:39454" response transport failed `read tcp 127.0.0.1:39454->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:05.503+05:30 [Error] receiving packet: read tcp 127.0.0.1:38766->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.503+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-2126276720566118164) connection "127.0.0.1:38766" response transport failed `read tcp 127.0.0.1:38766->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.536+05:30 [Error] receiving packet: read tcp 127.0.0.1:38772->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:05.536+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7307609199843388112) connection "127.0.0.1:38772" response transport failed `read tcp 127.0.0.1:38772->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:05.543+05:30 [Error] receiving packet: read tcp 127.0.0.1:39458->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:05.543+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7307609199843388112) connection "127.0.0.1:39458" response transport failed `read tcp 127.0.0.1:39458->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:07.267+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:59716->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:49:07.268+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:57870->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T23:49:07.278+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9106.  Error = read tcp 127.0.0.1:60838->127.0.0.1:9106: use of closed network connection. Kill Pipe.
2023-01-06T23:49:07.278+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9112.  Error = read tcp 127.0.0.1:58990->127.0.0.1:9112: use of closed network connection. Kill Pipe.
2023-01-06T23:49:07.853+05:30 [Error] receiving packet: read tcp 127.0.0.1:38784->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:07.853+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8553788572101554000) connection "127.0.0.1:38784" response transport failed `read tcp 127.0.0.1:38784->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:07.869+05:30 [Error] receiving packet: read tcp 127.0.0.1:38838->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:07.869+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-8553788572101554000) connection "127.0.0.1:38838" response transport failed `read tcp 127.0.0.1:38838->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:07.929+05:30 [Error] receiving packet: read tcp 127.0.0.1:38840->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:07.929+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1911807410089327437) connection "127.0.0.1:38840" response transport failed `read tcp 127.0.0.1:38840->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:07.948+05:30 [Error] receiving packet: read tcp 127.0.0.1:39460->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:07.948+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5288029282008277416) connection "127.0.0.1:39460" response transport failed `read tcp 127.0.0.1:39460->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:07.952+05:30 [Error] receiving packet: read tcp 127.0.0.1:39528->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:07.952+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5288029282008277416) connection "127.0.0.1:39528" response transport failed `read tcp 127.0.0.1:39528->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:07.966+05:30 [Error] receiving packet: read tcp 127.0.0.1:39530->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:07.966+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6600514009180659268) connection "127.0.0.1:39530" response transport failed `read tcp 127.0.0.1:39530->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:07.969+05:30 [Error] receiving packet: read tcp 127.0.0.1:39532->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:07.970+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6600514009180659268) connection "127.0.0.1:39532" response transport failed `read tcp 127.0.0.1:39532->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:07.970+05:30 [Error] receiving packet: read tcp 127.0.0.1:38842->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:07.970+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6600514009180659268) connection "127.0.0.1:38842" response transport failed `read tcp 127.0.0.1:38842->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:07.984+05:30 [Error] receiving packet: read tcp 127.0.0.1:39534->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:07.984+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5473436539520548531) connection "127.0.0.1:39534" response transport failed `read tcp 127.0.0.1:39534->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:07.994+05:30 [Error] receiving packet: read tcp 127.0.0.1:38852->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:07.994+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5473436539520548531) connection "127.0.0.1:38852" response transport failed `read tcp 127.0.0.1:38852->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.064+05:30 [Error] receiving packet: read tcp 127.0.0.1:38856->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.064+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3917693274705216422) connection "127.0.0.1:38856" response transport failed `read tcp 127.0.0.1:38856->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.067+05:30 [Error] receiving packet: read tcp 127.0.0.1:38860->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.067+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3917693274705216422) connection "127.0.0.1:38860" response transport failed `read tcp 127.0.0.1:38860->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.088+05:30 [Error] receiving packet: read tcp 127.0.0.1:39542->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:08.088+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(328755277815149406) connection "127.0.0.1:39542" response transport failed `read tcp 127.0.0.1:39542->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:08.094+05:30 [Error] receiving packet: read tcp 127.0.0.1:39546->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:08.094+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(328755277815149406) connection "127.0.0.1:39546" response transport failed `read tcp 127.0.0.1:39546->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:08.104+05:30 [Error] receiving packet: read tcp 127.0.0.1:39548->127.0.0.1:9107: i/o timeout
2023-01-06T23:49:08.104+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(901804016906890166) connection "127.0.0.1:39548" response transport failed `read tcp 127.0.0.1:39548->127.0.0.1:9107: i/o timeout`
2023-01-06T23:49:08.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2997651026504263223) connection "127.0.0.1:39710" closed `EOF`
2023-01-06T23:49:08.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2997651026504263223) connection "127.0.0.1:39704" closed `EOF`
2023-01-06T23:49:08.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-2997651026504263223) connection "127.0.0.1:39708" closed `EOF`
2023-01-06T23:49:08.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2756914936779450753) connection "127.0.0.1:39702" closed `EOF`
2023-01-06T23:49:08.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1884295273201314730) connection "127.0.0.1:39696" closed `EOF`
2023-01-06T23:49:08.131+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-1884295273201314730) connection "127.0.0.1:39698" closed `EOF`
2023-01-06T23:49:08.132+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6765289489456722572) connection "127.0.0.1:39692" closed `EOF`
2023-01-06T23:49:08.132+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6076213450797479104) connection "127.0.0.1:39680" closed `EOF`
2023-01-06T23:49:08.132+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6765289489456722572) connection "127.0.0.1:39690" closed `EOF`
2023-01-06T23:49:08.133+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2570907549504783023) connection "127.0.0.1:39660" closed `EOF`
2023-01-06T23:49:08.133+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6217126857172552375) connection "127.0.0.1:39662" closed `EOF`
2023-01-06T23:49:08.133+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8848013257586689734) connection "127.0.0.1:39676" closed `EOF`
2023-01-06T23:49:08.135+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5719571531349095712) connection "127.0.0.1:39612" closed `EOF`
2023-01-06T23:49:08.135+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6091977665538649603) connection "127.0.0.1:39614" closed `EOF`
2023-01-06T23:49:08.135+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6091977665538649603) connection "127.0.0.1:39624" closed `EOF`
2023-01-06T23:49:08.135+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(1321127458175891772) connection "127.0.0.1:39628" closed `EOF`
2023-01-06T23:49:08.135+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5092648755081520658) connection "127.0.0.1:39640" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5641547762529660590) connection "127.0.0.1:39650" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2570907549504783023) connection "127.0.0.1:39656" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2570907549504783023) connection "127.0.0.1:39658" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5475073148788031305) connection "127.0.0.1:39582" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4680468552739577049) connection "127.0.0.1:39604" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(8117274234497667612) connection "127.0.0.1:39586" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(4680468552739577049) connection "127.0.0.1:39606" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7244985745412363737) connection "127.0.0.1:39590" closed `EOF`
2023-01-06T23:49:08.136+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(5719571531349095712) connection "127.0.0.1:39610" closed `EOF`
2023-01-06T23:49:08.137+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6989319432105542153) connection "127.0.0.1:39572" closed `EOF`
2023-01-06T23:49:08.137+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-5475073148788031305) connection "127.0.0.1:39580" closed `EOF`
2023-01-06T23:49:08.138+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-6870434600389168152) connection "127.0.0.1:39570" closed `EOF`
2023-01-06T23:49:08.138+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7675985552357239311) connection "127.0.0.1:39568" closed `EOF`
2023-01-06T23:49:08.138+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7953843932982370015) connection "127.0.0.1:39562" closed `EOF`
2023-01-06T23:49:08.138+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-7953843932982370015) connection "127.0.0.1:39564" closed `EOF`
2023-01-06T23:49:08.139+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-9046263346313424863) connection "127.0.0.1:39552" closed `EOF`
2023-01-06T23:49:08.139+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(2398957071757768720) connection "127.0.0.1:39558" closed `EOF`
2023-01-06T23:49:08.139+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(-9046263346313424863) connection "127.0.0.1:39554" closed `EOF`
2023-01-06T23:49:08.147+05:30 [Error] receiving packet: read tcp 127.0.0.1:38866->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.147+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-9046263346313424863) connection "127.0.0.1:38866" response transport failed `read tcp 127.0.0.1:38866->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.159+05:30 [Error] [GsiScanClient:"127.0.0.1:9107"] req(6588117699123557572) connection "127.0.0.1:39648" closed `EOF`
2023-01-06T23:49:08.176+05:30 [Error] receiving packet: read tcp 127.0.0.1:38872->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.176+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2398957071757768720) connection "127.0.0.1:38872" response transport failed `read tcp 127.0.0.1:38872->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.206+05:30 [Error] receiving packet: read tcp 127.0.0.1:38876->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.206+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5643027309789615497) connection "127.0.0.1:38876" response transport failed `read tcp 127.0.0.1:38876->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.245+05:30 [Error] receiving packet: read tcp 127.0.0.1:38882->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.245+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8622750305757661245) connection "127.0.0.1:38882" response transport failed `read tcp 127.0.0.1:38882->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.288+05:30 [Error] receiving packet: read tcp 127.0.0.1:38892->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.288+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6680232478672624491) connection "127.0.0.1:38892" response transport failed `read tcp 127.0.0.1:38892->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.289+05:30 [Error] receiving packet: read tcp 127.0.0.1:38890->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.289+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6680232478672624491) connection "127.0.0.1:38890" response transport failed `read tcp 127.0.0.1:38890->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.339+05:30 [Error] receiving packet: read tcp 127.0.0.1:38894->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.339+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-4359971817014890874) connection "127.0.0.1:38894" response transport failed `read tcp 127.0.0.1:38894->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.358+05:30 [Error] receiving packet: read tcp 127.0.0.1:38900->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.358+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-5475073148788031305) connection "127.0.0.1:38900" response transport failed `read tcp 127.0.0.1:38900->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.378+05:30 [Error] receiving packet: read tcp 127.0.0.1:38904->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.378+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6444498176576947801) connection "127.0.0.1:38904" response transport failed `read tcp 127.0.0.1:38904->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.436+05:30 [Error] receiving packet: read tcp 127.0.0.1:38908->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.437+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8199747117872461817) connection "127.0.0.1:38908" response transport failed `read tcp 127.0.0.1:38908->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.452+05:30 [Error] receiving packet: read tcp 127.0.0.1:38914->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.452+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-7244985745412363737) connection "127.0.0.1:38914" response transport failed `read tcp 127.0.0.1:38914->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.464+05:30 [Error] receiving packet: read tcp 127.0.0.1:38916->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.464+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5250439410347406608) connection "127.0.0.1:38916" response transport failed `read tcp 127.0.0.1:38916->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.474+05:30 [Error] receiving packet: read tcp 127.0.0.1:38918->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.474+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(4680468552739577049) connection "127.0.0.1:38918" response transport failed `read tcp 127.0.0.1:38918->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.514+05:30 [Error] receiving packet: read tcp 127.0.0.1:38924->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.514+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5719571531349095712) connection "127.0.0.1:38924" response transport failed `read tcp 127.0.0.1:38924->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.547+05:30 [Error] receiving packet: read tcp 127.0.0.1:38932->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.548+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(7228387364515451369) connection "127.0.0.1:38932" response transport failed `read tcp 127.0.0.1:38932->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.571+05:30 [Error] receiving packet: read tcp 127.0.0.1:38936->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.572+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6091977665538649603) connection "127.0.0.1:38936" response transport failed `read tcp 127.0.0.1:38936->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.576+05:30 [Error] receiving packet: read tcp 127.0.0.1:38938->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.576+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6091977665538649603) connection "127.0.0.1:38938" response transport failed `read tcp 127.0.0.1:38938->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.598+05:30 [Error] receiving packet: read tcp 127.0.0.1:38942->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.598+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6091977665538649603) connection "127.0.0.1:38942" response transport failed `read tcp 127.0.0.1:38942->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.610+05:30 [Error] receiving packet: read tcp 127.0.0.1:38946->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.610+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(3723702572688491634) connection "127.0.0.1:38946" response transport failed `read tcp 127.0.0.1:38946->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.642+05:30 [Error] receiving packet: read tcp 127.0.0.1:38948->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.642+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5674937932471330691) connection "127.0.0.1:38948" response transport failed `read tcp 127.0.0.1:38948->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.699+05:30 [Error] receiving packet: read tcp 127.0.0.1:38958->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.699+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5092648755081520658) connection "127.0.0.1:38958" response transport failed `read tcp 127.0.0.1:38958->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.703+05:30 [Error] receiving packet: read tcp 127.0.0.1:38960->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.703+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(5092648755081520658) connection "127.0.0.1:38960" response transport failed `read tcp 127.0.0.1:38960->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.745+05:30 [Error] receiving packet: read tcp 127.0.0.1:38962->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.745+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6588117699123557572) connection "127.0.0.1:38962" response transport failed `read tcp 127.0.0.1:38962->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.863+05:30 [Error] receiving packet: read tcp 127.0.0.1:38970->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.863+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(2397815615280772255) connection "127.0.0.1:38970" response transport failed `read tcp 127.0.0.1:38970->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.874+05:30 [Error] receiving packet: read tcp 127.0.0.1:38988->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.874+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(6217126857172552375) connection "127.0.0.1:38988" response transport failed `read tcp 127.0.0.1:38988->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:08.928+05:30 [Error] receiving packet: read tcp 127.0.0.1:38990->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:08.928+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(8848013257586689734) connection "127.0.0.1:38990" response transport failed `read tcp 127.0.0.1:38990->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:09.064+05:30 [Error] receiving packet: read tcp 127.0.0.1:38994->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:09.064+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-6765289489456722572) connection "127.0.0.1:38994" response transport failed `read tcp 127.0.0.1:38994->127.0.0.1:9113: i/o timeout`
2023-01-06T23:49:09.134+05:30 [Error] receiving packet: read tcp 127.0.0.1:39010->127.0.0.1:9113: i/o timeout
2023-01-06T23:49:09.134+05:30 [Error] [GsiScanClient:"127.0.0.1:9113"] req(-1884295273201314730) connection "127.0.0.1:39010" response transport failed `read tcp 127.0.0.1:39010->127.0.0.1:9113: i/o timeout`
2023/01/06 23:49:09 Rebalance progress: 100
2023/01/06 23:49:13 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:49:15 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:49:18 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:49:19 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:49:21 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:49:23 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:49:24 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:49:26 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:49:28 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:49:29 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:49:32 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:49:33 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:49:34 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:49:36 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:49:38 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:49:38 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:49:40 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:49:42 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:49:42 Dropping the secondary index idx_secondary
2023/01/06 23:49:42 Index dropped
2023/01/06 23:49:42 Dropping the secondary index idx_secondary
2023/01/06 23:49:43 Index dropped
2023/01/06 23:50:07 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/06 23:50:13 Index status is: Ready for index: idx_secondary
2023/01/06 23:50:13 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:50:16 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/06 23:50:18 Index status is: Ready for index: idx_secondary
2023/01/06 23:50:18 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:50:21 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:50:22 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
--- PASS: TestTwoNodeSwapRebalance (193.82s)
=== RUN   TestSingleNodeSwapRebalance
2023/01/06 23:50:24 In TestSingleNodeSwapRebalance
2023/01/06 23:50:24 Adding node: https://127.0.0.1:19002 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/06 23:50:35 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9002 (role index, serverGroup: Group 1), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/06 23:50:35 Removing node(s): [127.0.0.1:9004] from the cluster
2023/01/06 23:50:41 Rebalance progress: 12.5
2023/01/06 23:50:45 Rebalance progress: 65.22203947368422
2023/01/06 23:50:50 Rebalance progress: 65.2249923544514
2023/01/06 23:50:55 Rebalance progress: 65.225
2023/01/06 23:51:00 Rebalance progress: 65.22203182813561
2023/01/06 23:51:06 Rebalance progress: 65.2249974514838
2023/01/06 23:51:10 Rebalance progress: 65.2249923544514
2023/01/06 23:51:16 Rebalance progress: 76.3625
2023/01/06 23:51:20 Rebalance progress: 86.75
2023/01/06 23:51:25 Rebalance progress: 100
2023/01/06 23:51:28 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:51:30 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:51:32 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:51:33 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:51:35 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:51:37 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:51:39 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:51:41 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:51:44 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:51:45 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:51:47 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:51:49 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:51:50 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:51:51 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:51:54 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:51:55 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:51:56 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:51:59 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:52:00 Dropping the secondary index idx_secondary
2023/01/06 23:52:00 Index dropped
2023/01/06 23:52:00 Dropping the secondary index idx_secondary
2023/01/06 23:52:00 Index dropped
2023/01/06 23:52:24 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/06 23:52:29 Index status is: Ready for index: idx_secondary
2023/01/06 23:52:29 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:52:32 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/06 23:52:39 Index status is: Ready for index: idx_secondary
2023/01/06 23:52:39 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:52:41 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:52:43 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
--- PASS: TestSingleNodeSwapRebalance (141.56s)
=== RUN   TestReplicaRepair
2023/01/06 23:52:45 In TestReplicaRepair
2023/01/06 23:52:45 Failing over: [127.0.0.1:9003]
2023/01/06 23:52:52 Rebalance progress: 100
2023/01/06 23:52:52 Adding node: https://127.0.0.1:19001 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/06 23:53:05 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9001 (role index, serverGroup: Group 2), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/06 23:53:10 Rebalance progress: 16.66666666666667
2023/01/06 23:53:15 Rebalance progress: 63.43859196572688
2023/01/06 23:53:20 Rebalance progress: 63.45963554630342
2023/01/06 23:53:25 Rebalance progress: 63.47017543859649
2023/01/06 23:53:30 Rebalance progress: 63.49649122807018
2023/01/06 23:53:35 Rebalance progress: 63.51227617625319
2023/01/06 23:53:40 Rebalance progress: 63.528056598935
2023/01/06 23:53:45 Rebalance progress: 100
2023/01/06 23:53:48 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:53:50 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:53:52 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:53:53 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:53:55 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:53:58 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:53:58 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:54:01 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:54:03 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:54:04 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:54:06 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:54:08 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:54:09 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:54:11 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:54:14 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:54:15 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:54:17 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:54:19 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:54:21 Dropping the secondary index idx_secondary
2023/01/06 23:54:21 Index dropped
2023/01/06 23:54:21 Dropping the secondary index idx_secondary
2023/01/06 23:54:21 Index dropped
2023/01/06 23:54:46 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/06 23:54:49 Index status is: Ready for index: idx_secondary
2023/01/06 23:54:50 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:54:52 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/06 23:55:00 Index status is: Ready for index: idx_secondary
2023/01/06 23:55:00 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:55:03 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:55:05 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
--- PASS: TestReplicaRepair (141.57s)
=== RUN   TestReplicaRepairAndSwapRebalance
2023/01/06 23:55:07 In TestReplicaRepairAndSwapRebalance
2023/01/06 23:55:07 Failing over: [127.0.0.1:9002]
2023/01/06 23:55:08 Adding node: https://127.0.0.1:19003 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/06 23:55:17 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9003 (role index, serverGroup: Group 2), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/06 23:55:17 Adding node: https://127.0.0.1:19004 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/06 23:55:27 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9004 (role index, serverGroup: Group 1), response: {"otpNode":"n_4@127.0.0.1"}
2023/01/06 23:55:27 Removing node(s): [127.0.0.1:9001] from the cluster
2023/01/06 23:55:33 Rebalance progress: 12.5
2023/01/06 23:55:38 Rebalance progress: 38.83141447368422
2023/01/06 23:55:43 Rebalance progress: 38.84177631578948
2023/01/06 23:55:48 Rebalance progress: 38.84621710526316
2023/01/06 23:55:53 Rebalance progress: 38.85065789473685
2023/01/06 23:55:58 Rebalance progress: 38.85657894736843
2023/01/06 23:56:03 Rebalance progress: 38.85953947368422
2023/01/06 23:56:08 Rebalance progress: 50
2023/01/06 23:56:13 Rebalance progress: 76.32993421052633
2023/01/06 23:56:18 Rebalance progress: 76.34177631578947
2023/01/06 23:56:23 Rebalance progress: 76.3402960526316
2023/01/06 23:56:28 Rebalance progress: 76.35065789473686
2023/01/06 23:56:33 Rebalance progress: 76.35361842105264
2023/01/06 23:56:38 Rebalance progress: 76.35657894736842
2023/01/06 23:56:43 Rebalance progress: 86.75
2023/01/06 23:56:48 Rebalance progress: 100
2023/01/06 23:56:51 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:56:52 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:56:54 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:56:55 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:56:58 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:57:00 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:57:01 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:57:03 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:57:05 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:57:06 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:57:08 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:57:10 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:57:10 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:57:12 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:57:14 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:57:14 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:57:16 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:57:18 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/06 23:57:19 Dropping the secondary index idx_secondary
2023/01/06 23:57:19 Index dropped
2023/01/06 23:57:19 Dropping the secondary index idx_secondary
2023/01/06 23:57:19 Index dropped
2023/01/06 23:57:43 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/06 23:57:45 Index status is: Ready for index: idx_secondary
2023/01/06 23:57:46 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:57:48 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/06 23:57:50 Index status is: Ready for index: idx_secondary
2023/01/06 23:57:50 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/06 23:57:52 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:57:53 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
--- PASS: TestReplicaRepairAndSwapRebalance (167.97s)
=== RUN   TestBuildDeferredIndexesAfterRebalance
2023/01/06 23:57:55 In TestBuildDeferredIndexesAfterRebalance
2023/01/06 23:57:55 Build command issued for the deferred indexes [idx_secondary_defer], bucket: bucket_1, scope: _default, coll: _default
2023/01/06 23:57:55 Waiting for the index idx_secondary_defer to become active
2023/01/06 23:57:55 Waiting for index 1220760736043346311 to go active ...
2023/01/06 23:57:56 Waiting for index 1220760736043346311 to go active ...
2023/01/06 23:57:57 Index is 1220760736043346311 now active
2023/01/06 23:57:57 Build command issued for the deferred indexes [idx_secondary_defer], bucket: bucket_1, scope: _default, coll: c1
2023/01/06 23:57:57 Waiting for the index idx_secondary_defer to become active
2023/01/06 23:57:57 Waiting for index 15670330224437858203 to go active ...
2023/01/06 23:57:58 Waiting for index 15670330224437858203 to go active ...
2023/01/06 23:57:59 Waiting for index 15670330224437858203 to go active ...
2023/01/06 23:58:00 Waiting for index 15670330224437858203 to go active ...
2023/01/06 23:58:01 Index is 15670330224437858203 now active
2023/01/06 23:58:01 Build command issued for the deferred indexes [idx_secondary_defer], bucket: bucket_1, scope: _default, coll: c2%
2023/01/06 23:58:01 Waiting for the index idx_secondary_defer to become active
2023/01/06 23:58:01 Waiting for index 6364894215169513211 to go active ...
2023/01/06 23:58:02 Waiting for index 6364894215169513211 to go active ...
2023/01/06 23:58:03 Index is 6364894215169513211 now active
2023/01/06 23:58:03 Build command issued for the deferred indexes [idx_secondary_defer], bucket: bucket_%2, scope: _default, coll: _default
2023/01/06 23:58:03 Waiting for the index idx_secondary_defer to become active
2023/01/06 23:58:03 Waiting for index 14139101824143094344 to go active ...
2023/01/06 23:58:04 Waiting for index 14139101824143094344 to go active ...
2023/01/06 23:58:05 Waiting for index 14139101824143094344 to go active ...
2023/01/06 23:58:06 Index is 14139101824143094344 now active
2023/01/06 23:58:06 Build command issued for the deferred indexes [idx_secondary_defer], bucket: bucket_%2, scope: _default, coll: c1
2023/01/06 23:58:06 Waiting for the index idx_secondary_defer to become active
2023/01/06 23:58:06 Waiting for index 14592630485592537080 to go active ...
2023/01/06 23:58:07 Waiting for index 14592630485592537080 to go active ...
2023/01/06 23:58:08 Index is 14592630485592537080 now active
2023/01/06 23:58:09 Build command issued for the deferred indexes [idx_secondary_defer], bucket: bucket_%2, scope: _default, coll: c2%
2023/01/06 23:58:09 Waiting for the index idx_secondary_defer to become active
2023/01/06 23:58:09 Waiting for index 8652753261807732159 to go active ...
2023/01/06 23:58:10 Waiting for index 8652753261807732159 to go active ...
2023/01/06 23:58:11 Waiting for index 8652753261807732159 to go active ...
2023/01/06 23:58:12 Index is 8652753261807732159 now active
2023/01/06 23:58:14 scanIndexReplicas: Scanning all for index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: _default
2023/01/06 23:58:16 scanIndexReplicas: Scanning all for index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: c1
2023/01/06 23:58:17 scanIndexReplicas: Scanning all for index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: c2%
2023/01/06 23:58:19 scanIndexReplicas: Scanning all for index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: _default
2023/01/06 23:58:20 scanIndexReplicas: Scanning all for index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: c1
2023/01/06 23:58:22 scanIndexReplicas: Scanning all for index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: c2%
--- PASS: TestBuildDeferredIndexesAfterRebalance (27.78s)
=== RUN   TestDropIndexAfterRebalance
2023/01/06 23:58:23 In TestDropIndexAfterRebalance
2023/01/06 23:58:23 Dropping the secondary index idx_secondary
2023/01/06 23:58:23 Index dropped
2023/01/06 23:58:23 Dropping the secondary index idx_secondary_defer
2023/01/06 23:58:23 Index dropped
2023/01/06 23:58:23 Dropping the secondary index idx_secondary
2023/01/06 23:58:23 Index dropped
2023/01/06 23:58:23 Dropping the secondary index idx_secondary_defer
2023/01/06 23:58:23 Index dropped
2023/01/06 23:58:23 Dropping the secondary index idx_secondary
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary_defer
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary_defer
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary_defer
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary
2023/01/06 23:58:24 Index dropped
2023/01/06 23:58:24 Dropping the secondary index idx_secondary_defer
2023/01/06 23:58:25 Index dropped
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/07 00:00:35 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: c2%
--- PASS: TestDropIndexAfterRebalance (132.21s)
=== RUN   TestRebalanceAfterDropIndexes
2023/01/07 00:00:35 In TestRebalanceAfterDropIndexes
2023/01/07 00:00:35 Adding node: https://127.0.0.1:19001 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/07 00:00:46 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9001 (role index, serverGroup: Group 2), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/07 00:00:47 Adding node: https://127.0.0.1:19002 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/07 00:00:59 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9002 (role index, serverGroup: Group 1), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/07 00:00:59 Removing node(s): [127.0.0.1:9003 127.0.0.1:9004] from the cluster
2023/01/07 00:01:05 Rebalance progress: 10
2023/01/07 00:01:10 Rebalance progress: 10
2023/01/07 00:01:15 Rebalance progress: 38.11538461538462
2023/01/07 00:01:20 Rebalance progress: 38.12
2023/01/07 00:01:25 Rebalance progress: 38.12
2023/01/07 00:01:30 Rebalance progress: 38.11769230769231
2023/01/07 00:01:35 Rebalance progress: 38.12
2023/01/07 00:01:40 Rebalance progress: 50
2023/01/07 00:01:45 Rebalance progress: 57
2023/01/07 00:01:50 Rebalance progress: 78.12
2023/01/07 00:01:55 Rebalance progress: 78.11538461538461
2023/01/07 00:02:00 Rebalance progress: 78.12
2023/01/07 00:02:05 Rebalance progress: 78.12
2023/01/07 00:02:10 Rebalance progress: 78.12
2023/01/07 00:02:15 Rebalance progress: 100
2023/01/07 00:02:18 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:02:18 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:02:18 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:02:20 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:02:21 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:02:21 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:02:21 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:02:24 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:02:24 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:02:24 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:02:24 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:02:27 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:02:28 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:02:28 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:02:28 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:02:30 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:02:31 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:02:31 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:02:31 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:02:33 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:02:34 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/07 00:02:34 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary_defer not found., index: idx_secondary_defer, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/07 00:02:34 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/07 00:02:37 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
--- PASS: TestRebalanceAfterDropIndexes (123.23s)
=== RUN   TestCreateIndexsAfterRebalance
2023/01/07 00:02:38 In TestCreateIndexesAfterRebalance
2023/01/07 00:02:41 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`_default`(age)
2023/01/07 00:02:51 Index status is: Ready for index: idx_secondary
2023/01/07 00:02:51 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:02:51 Executed N1ql statement: create index idx_secondary_defer on `bucket_1`.`_default`.`_default`(age) with {"defer_build":true}
2023/01/07 00:02:56 Index status is: Created for index: idx_secondary_defer
2023/01/07 00:02:56 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/07 00:02:58 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/07 00:03:06 Index status is: Ready for index: idx_secondary
2023/01/07 00:03:06 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:03:06 Executed N1ql statement: create index idx_secondary_defer on `bucket_1`.`_default`.`c1`(age) with {"defer_build":true}
2023/01/07 00:03:16 Index status is: Created for index: idx_secondary_defer
2023/01/07 00:03:16 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/07 00:03:19 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c2%`(age)
2023/01/07 00:03:25 Index status is: Ready for index: idx_secondary
2023/01/07 00:03:25 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:03:26 Executed N1ql statement: create index idx_secondary_defer on `bucket_1`.`_default`.`c2%`(age) with {"defer_build":true}
2023/01/07 00:03:31 Index status is: Created for index: idx_secondary_defer
2023/01/07 00:03:31 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/07 00:03:34 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`_default`(age)
2023/01/07 00:03:41 Index status is: Ready for index: idx_secondary
2023/01/07 00:03:41 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:03:41 Executed N1ql statement: create index idx_secondary_defer on `bucket_%2`.`_default`.`_default`(age) with {"defer_build":true}
2023/01/07 00:03:46 Index status is: Created for index: idx_secondary_defer
2023/01/07 00:03:46 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/07 00:03:49 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/07 00:03:51 Index status is: Ready for index: idx_secondary
2023/01/07 00:03:51 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:03:52 Executed N1ql statement: create index idx_secondary_defer on `bucket_%2`.`_default`.`c1`(age) with {"defer_build":true}
2023/01/07 00:04:01 Index status is: Created for index: idx_secondary_defer
2023/01/07 00:04:01 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/07 00:04:04 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c2%`(age)
2023/01/07 00:04:11 Index status is: Ready for index: idx_secondary
2023/01/07 00:04:11 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:04:12 Executed N1ql statement: create index idx_secondary_defer on `bucket_%2`.`_default`.`c2%`(age) with {"defer_build":true}
2023/01/07 00:04:21 Index status is: Created for index: idx_secondary_defer
2023/01/07 00:04:21 Index status is: Created for index: idx_secondary_defer (replica 1)
2023/01/07 00:04:24 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:04:26 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:04:28 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:04:31 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:04:33 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:04:35 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
--- PASS: TestCreateIndexsAfterRebalance (118.87s)
=== RUN   TestRebalanceAfterDroppedCollections
2023/01/07 00:04:37 In TestRebalanceAfterDroppedCollections
2023/01/07 00:04:37 Dropped collection c1 for bucket: bucket_1, scope: _default, body: {"uid":"4"}
2023/01/07 00:04:38 Dropped collection c2% for bucket: bucket_1, scope: _default, body: {"uid":"5"}
2023/01/07 00:04:38 Dropped collection c1 for bucket: bucket_%2, scope: _default, body: {"uid":"4"}
2023/01/07 00:04:38 Dropped collection c2% for bucket: bucket_%2, scope: _default, body: {"uid":"5"}
2023/01/07 00:04:39 Adding node: https://127.0.0.1:19003 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/07 00:05:00 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9003 (role index, serverGroup: Group 2), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/07 00:05:00 Adding node: https://127.0.0.1:19004 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/07 00:05:14 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9004 (role index, serverGroup: Group 1), response: {"otpNode":"n_4@127.0.0.1"}
2023/01/07 00:05:14 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002] from the cluster
2023/01/07 00:05:20 Rebalance progress: 10
2023/01/07 00:05:25 Rebalance progress: 38.12
2023/01/07 00:05:30 Rebalance progress: 38.11954763213697
2023/01/07 00:05:35 Rebalance progress: 38.11992460535616
2023/01/07 00:05:40 Rebalance progress: 50
2023/01/07 00:05:45 Rebalance progress: 78.10714285714286
2023/01/07 00:05:50 Rebalance progress: 78.12
2023/01/07 00:05:55 Rebalance progress: 90
2023/01/07 00:06:00 Rebalance progress: 100
2023/01/07 00:06:03 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:06:06 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:06:08 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: _default
2023/01/07 00:06:09 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:06:09 Scan failed as expected with error: Index Not Found - cause: GSI index #primary not found., index: #primary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:06:09 Scan failed as expected with error: Index Not Found - cause: GSI index idx_partitioned not found., index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:06:09 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:06:09 Scan failed as expected with error: Index Not Found - cause: GSI index #primary not found., index: #primary, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:06:09 Scan failed as expected with error: Index Not Found - cause: GSI index idx_partitioned not found., index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c2%
2023/01/07 00:06:09 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:06:11 scanIndexReplicas: Scanning all for index: #primary, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:06:13 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: _default
2023/01/07 00:06:14 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:06:14 Scan failed as expected with error: Index Not Found - cause: GSI index #primary not found., index: #primary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:06:14 Scan failed as expected with error: Index Not Found - cause: GSI index idx_partitioned not found., index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:06:14 Scan failed as expected with error: Index Not Found - cause: GSI index idx_secondary not found., index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/07 00:06:14 Scan failed as expected with error: Index Not Found - cause: GSI index #primary not found., index: #primary, bucket: bucket_%2, scope: _default, collection: c2%
2023/01/07 00:06:14 Scan failed as expected with error: Index Not Found - cause: GSI index idx_partitioned not found., index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c2%
--- PASS: TestRebalanceAfterDroppedCollections (96.58s)
=== RUN   TestRebalancePanicTestsSetup
2023/01/07 00:06:14 In DropAllSecondaryIndexes()
2023/01/07 00:06:14 Index found:  #primary_defer
2023/01/07 00:06:14 Dropped index #primary_defer
2023/01/07 00:06:14 Index found:  idx_partitioned_defer
2023/01/07 00:06:14 Dropped index idx_partitioned_defer
2023/01/07 00:06:14 Index found:  #primary
2023/01/07 00:06:14 Dropped index #primary
2023/01/07 00:06:14 Index found:  #primary
2023/01/07 00:06:15 Dropped index #primary
2023/01/07 00:06:15 Index found:  idx_secondary_defer
2023/01/07 00:06:15 Dropped index idx_secondary_defer
2023/01/07 00:06:15 Index found:  idx_secondary_defer
2023/01/07 00:06:15 Dropped index idx_secondary_defer
2023/01/07 00:06:15 Index found:  idx_partitioned
2023/01/07 00:06:15 Dropped index idx_partitioned
2023/01/07 00:06:15 Index found:  #primary
2023/01/07 00:06:16 Dropped index #primary
2023/01/07 00:06:16 Index found:  #primary
2023/01/07 00:06:16 Dropped index #primary
2023/01/07 00:06:16 Index found:  idx_partitioned_defer
2023/01/07 00:06:16 Dropped index idx_partitioned_defer
2023/01/07 00:06:16 Index found:  #primary_defer
2023/01/07 00:06:16 Dropped index #primary_defer
2023/01/07 00:06:16 Index found:  idx_secondary
2023/01/07 00:06:17 Dropped index idx_secondary
2023/01/07 00:06:17 Index found:  idx_partitioned
2023/01/07 00:06:17 Dropped index idx_partitioned
2023/01/07 00:06:17 Index found:  idx_secondary
2023/01/07 00:06:17 Dropped index idx_secondary
2023/01/07 00:06:24 Deleted bucket bucket_1, responseBody: 
2023/01/07 00:06:25 Deleted bucket bucket_%2, responseBody: 
2023/01/07 00:06:40 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003 127.0.0.1:9004] from the cluster
2023/01/07 00:06:45 Rebalance progress: 100
2023/01/07 00:06:45 Adding node: https://127.0.0.1:19001 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/07 00:06:58 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9001 (role index, serverGroup: Group 2), response: {"otpNode":"n_1@127.0.0.1"}
2023/01/07 00:06:58 Adding node: https://127.0.0.1:19002 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/07 00:07:06 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9002 (role index, serverGroup: Group 1), response: {"otpNode":"n_2@127.0.0.1"}
2023/01/07 00:07:11 Rebalance progress: 100
2023/01/07 00:07:11 Created bucket bucket_1, responseBody: 
2023/01/07 00:07:11 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9000
2023/01/07 00:07:12 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9001
2023/01/07 00:07:13 WaitForBucketCreation: Checking bucket(bucket_1) creation for host: 127.0.0.1:9002
2023/01/07 00:07:13 Created collection succeeded for bucket: bucket_1, scope: _default, collection: c1, body: {"uid":"2"}
2023/01/07 00:07:13 TestIndexPlacement: Manifest for bucket: bucket_1, scope: _default, collection: c1 is: map[uid:2]
2023/01/07 00:07:13 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:07:13 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:07:13 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:07:13 Received OK response from ensureManifest, bucket: bucket_1, uid: 2
2023/01/07 00:07:17 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/07 00:07:23 Index status is: Ready for index: idx_secondary
2023/01/07 00:07:23 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:07:27 Executed N1ql statement: create index idx_partitioned on `bucket_1`.`_default`.`c1`(emalid) partition by hash(meta().id)
2023/01/07 00:07:33 Index status is: Ready for index: idx_partitioned
2023/01/07 00:07:33 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/07 00:07:34 Executed N1ql statement: create index idx_partitioned_defer on `bucket_1`.`_default`.`c1`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/07 00:07:38 Index status is: Created for index: idx_partitioned_defer
2023/01/07 00:07:38 Index status is: Created for index: idx_partitioned_defer (replica 1)
2023/01/07 00:07:38 Created bucket bucket_%2, responseBody: 
2023/01/07 00:07:38 WaitForBucketCreation: Checking bucket(bucket_%2) creation for host: 127.0.0.1:9000
2023/01/07 00:07:40 WaitForBucketCreation: Checking bucket(bucket_%2) creation for host: 127.0.0.1:9001
2023/01/07 00:07:40 WaitForBucketCreation: Checking bucket(bucket_%2) creation for host: 127.0.0.1:9002
2023/01/07 00:07:40 Created collection succeeded for bucket: bucket_%2, scope: _default, collection: c1, body: {"uid":"2"}
2023/01/07 00:07:40 TestIndexPlacement: Manifest for bucket: bucket_%2, scope: _default, collection: c1 is: map[uid:2]
2023/01/07 00:07:40 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9000, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:07:40 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9001, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:07:40 WaitForCollectionCreation: Checking collection creation for host: 127.0.0.1:9002, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:07:40 Received OK response from ensureManifest, bucket: bucket_%2, uid: 2
2023/01/07 00:07:44 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/07 00:07:53 Index status is: Ready for index: idx_secondary
2023/01/07 00:07:53 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:07:56 Executed N1ql statement: create index idx_partitioned on `bucket_%2`.`_default`.`c1`(emalid) partition by hash(meta().id)
2023/01/07 00:08:03 Index status is: Ready for index: idx_partitioned
2023/01/07 00:08:03 Index status is: Ready for index: idx_partitioned (replica 1)
2023/01/07 00:08:03 Executed N1ql statement: create index idx_partitioned_defer on `bucket_%2`.`_default`.`c1`(balance) partition by hash(meta().id)  with {"defer_build":true}
2023/01/07 00:08:08 Index status is: Created for index: idx_partitioned_defer
2023/01/07 00:08:08 Index status is: Created for index: idx_partitioned_defer (replica 1)
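The statements above follow a fixed pattern: a secondary index on a key, an optional `partition by hash(meta().id)` clause, and an optional `with {"defer_build":true}` option that leaves the index in Created (not Ready) state until built. A minimal sketch of a statement builder producing that shape — `create_index_statement` is a hypothetical helper, not part of the test harness:

```python
def create_index_statement(name, bucket, scope, collection, keys,
                           partition_key=None, defer_build=False):
    """Build a N1QL CREATE INDEX statement of the form seen in the log.

    Backtick-quoting each path component matters here because bucket
    names like bucket_%2 contain characters that are not bare-word safe.
    """
    stmt = "create index {} on `{}`.`{}`.`{}`({})".format(
        name, bucket, scope, collection, ", ".join(keys))
    if partition_key:
        # Partitioned index: distribute partitions by a hash of this key.
        stmt += " partition by hash({})".format(partition_key)
    if defer_build:
        # Deferred index: created but not built until BUILD INDEX runs,
        # so its status stays "Created" rather than "Ready".
        stmt += ' with {"defer_build":true}'
    return stmt
```

For example, the `idx_partitioned_defer` statement on `bucket_1` would be produced by `create_index_statement("idx_partitioned_defer", "bucket_1", "_default", "c1", ["balance"], partition_key="meta().id", defer_build=True)`.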
--- PASS: TestRebalancePanicTestsSetup (114.39s)
=== RUN   TestRebalancePanicAtMasterShardTokenScheduleAck
2023/01/07 00:08:08 In TestRebalancePanicAtMasterShardTokenScheduleAck
2023/01/07 00:08:08 Changing config key indexer.shardRebalance.cancelOrPanic to value panic
2023/01/07 00:08:08 Changing config key indexer.shardRebalance.cancelOrPanicTag to value Master_ShardToken_ScheduleAck
2023/01/07 00:08:09 Adding node: https://127.0.0.1:19003 with role: index to the cluster with uri: /pools/default/serverGroups/93c2efc37299674c67f8076b2d06040d/addNode
2023/01/07 00:08:20 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9003 (role index, serverGroup: Group 2), response: {"otpNode":"n_3@127.0.0.1"}
2023/01/07 00:08:20 Adding node: https://127.0.0.1:19004 with role: index to the cluster with uri: /pools/default/serverGroups/0/addNode
2023/01/07 00:08:34 AddNodeWithServerGroup: Successfully added node: 127.0.0.1:9004 (role index, serverGroup: Group 1), response: {"otpNode":"n_4@127.0.0.1"}
2023/01/07 00:08:34 Removing node(s): [127.0.0.1:9001 127.0.0.1:9002] from the cluster
2023-01-07T00:08:37.290+05:30 [Error] WatcherServer.runOnce() : tryNum: 1, peer: 127.0.0.1:9118, error: dial tcp 127.0.0.1:9118: connect: connection refused
2023-01-07T00:08:37.291+05:30 [Error] WatcherServer.runOnce() : tryNum: 1, peer: 127.0.0.1:9118, error: dial tcp 127.0.0.1:9118: connect: connection refused
2023-01-07T00:08:39.064+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = EOF. Kill Pipe.
2023-01-07T00:08:39.064+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023-01-07T00:08:39.064+05:30 [Error] PeerPipe.doRecieve() : encountered error when receiving message from Peer 127.0.0.1:9118.  Error = EOF. Kill Pipe.
2023-01-07T00:08:39.065+05:30 [Error] WatcherServer.runOnce() : Watcher terminated unexpectedly.
2023/01/07 00:08:39 Rebalance failed. See logs for detailed reason. You can try again.
2023/01/07 00:08:46 http://127.0.0.1:9120/rebalanceCleanupStatus
2023/01/07 00:08:46 &{GET http://127.0.0.1:9120/rebalanceCleanupStatus HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9120 map[] map[]  map[]      0xc000138000}
2023/01/07 00:08:46 &{404 Not Found 404 HTTP/1.1 1 1 map[Content-Length:[19] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 06 Jan 2023 18:38:46 GMT] X-Content-Type-Options:[nosniff]] 0xc00cfc2ec0 19 [] false false map[] 0xc00833bb00 }
2023/01/07 00:08:46 rebalanceCleanupStatus failed
2023/01/07 00:08:47 http://127.0.0.1:9120/rebalanceCleanupStatus
2023/01/07 00:08:47 &{GET http://127.0.0.1:9120/rebalanceCleanupStatus HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9120 map[] map[]  map[]      0xc000138000}
2023/01/07 00:08:47 &{404 Not Found 404 HTTP/1.1 1 1 map[Content-Length:[19] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 06 Jan 2023 18:38:47 GMT] X-Content-Type-Options:[nosniff]] 0xc0262b3300 19 [] false false map[] 0xc00833bc00 }
2023/01/07 00:08:47 rebalanceCleanupStatus failed
2023/01/07 00:08:48 http://127.0.0.1:9120/rebalanceCleanupStatus
2023/01/07 00:08:48 &{GET http://127.0.0.1:9120/rebalanceCleanupStatus HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9120 map[] map[]  map[]      0xc000138000}
2023/01/07 00:08:48 &{404 Not Found 404 HTTP/1.1 1 1 map[Content-Length:[19] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 06 Jan 2023 18:38:48 GMT] X-Content-Type-Options:[nosniff]] 0xc005140000 19 [] false false map[] 0xc0000a3200 }
2023/01/07 00:08:48 rebalanceCleanupStatus failed
2023/01/07 00:08:49 http://127.0.0.1:9120/rebalanceCleanupStatus
2023/01/07 00:08:49 &{GET http://127.0.0.1:9120/rebalanceCleanupStatus HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9120 map[] map[]  map[]      0xc000138000}
2023/01/07 00:08:49 &{404 Not Found 404 HTTP/1.1 1 1 map[Content-Length:[19] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 06 Jan 2023 18:38:49 GMT] X-Content-Type-Options:[nosniff]] 0xc005140700 19 [] false false map[] 0xc008506800 }
2023/01/07 00:08:49 rebalanceCleanupStatus failed
2023/01/07 00:08:50 http://127.0.0.1:9120/rebalanceCleanupStatus
2023/01/07 00:08:50 &{GET http://127.0.0.1:9120/rebalanceCleanupStatus HTTP/1.1 1 1 map[Authorization:[Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=] Content-Type:[application/x-www-form-urlencoded; charset=UTF-8]]   0 [] false 127.0.0.1:9120 map[] map[]  map[]      0xc000138000}
2023/01/07 00:08:50 &{404 Not Found 404 HTTP/1.1 1 1 map[Content-Length:[19] Content-Type:[text/plain; charset=utf-8] Date:[Fri, 06 Jan 2023 18:38:50 GMT] X-Content-Type-Options:[nosniff]] 0xc01e9662c0 19 [] false false map[] 0xc0000a3300 }
2023/01/07 00:08:50 rebalanceCleanupStatus failed
2023/01/07 00:08:50 Waiting for rebalance cleanup to finish on node: 127.0.0.1:9003
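The repeated GETs above show the harness polling `/rebalanceCleanupStatus` once per second, treating 404 as "not ready yet" and retrying until cleanup finishes on the node. A minimal network-free sketch of that retry pattern, assuming injected `fetch` and `is_done` callables (both hypothetical names, not the harness's actual API):

```python
import time

def poll_until(fetch, is_done, attempts=5, delay=1.0):
    """Call fetch() up to `attempts` times, `delay` seconds apart,
    returning the first result for which is_done(result) is truthy.

    If every attempt fails, the last result is returned so the caller
    can log it, mirroring the "rebalanceCleanupStatus failed" lines.
    """
    last = None
    for i in range(attempts):
        last = fetch()
        if is_done(last):
            return last
        if i < attempts - 1:
            time.sleep(delay)
    return last
```

In the log's case `fetch` would issue the HTTP GET and `is_done` would check for a non-404 status; injecting both keeps the loop testable without a cluster.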
2023/01/07 00:08:54 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:08:55 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:09:26 Scan failed as expected with error: index idx_partitioned_defer fails to come online after 30s, index: idx_partitioned_defer, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:09:26 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:09:28 scanIndexReplicas: Scanning all for index: idx_partitioned, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:09:59 Scan failed as expected with error: index idx_partitioned_defer fails to come online after 30s, index: idx_partitioned_defer, bucket: bucket_%2, scope: _default, collection: c1
2023/01/07 00:09:59 Dropping the secondary index idx_secondary
2023/01/07 00:09:59 Index dropped
2023/01/07 00:09:59 Dropping the secondary index idx_secondary
2023/01/07 00:10:00 Index dropped
2023/01/07 00:10:24 Executed N1ql statement: create index idx_secondary on `bucket_1`.`_default`.`c1`(age)
2023/01/07 00:10:28 Index status is: Ready for index: idx_secondary
2023/01/07 00:10:28 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:10:32 Executed N1ql statement: create index idx_secondary on `bucket_%2`.`_default`.`c1`(age)
2023/01/07 00:10:39 Index status is: Ready for index: idx_secondary
2023/01/07 00:10:39 Index status is: Ready for index: idx_secondary (replica 1)
2023/01/07 00:10:42 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_1, scope: _default, collection: c1
2023/01/07 00:10:43 scanIndexReplicas: Scanning all for index: idx_secondary, bucket: bucket_%2, scope: _default, collection: c1
--- PASS: TestRebalancePanicAtMasterShardTokenScheduleAck (157.63s)
=== RUN   TestRebalanceStorageDirCleanup
2023/01/07 00:10:46 cleanupStorageDir: Cleaning up /opt/build/goproj/src/github.com/couchbase/indexing/secondary/tests/serverlesstests/shard_rebalance_storage_dir
--- PASS: TestRebalanceStorageDirCleanup (0.02s)
PASS
ok  	github.com/couchbase/indexing/secondary/tests/serverlesstests	1796.045s
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_1/indexer_serverless_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)
Indexer Go routine dump logged in /opt/build/ns_server/logs/n_2/indexer_serverless_pprof.log
curl: /opt/build/install/lib/libcurl.so.4: no version information available (required by curl)

Integration tests

git submodule init; git submodule update --init --force
Submodule path 'gauntlet': checked out '4e2424851a59c6f4b4edfdb7e36fa6a0874d6300'
Submodule path 'java_sdk_client': checked out '5dd338995c16ac2f5b187729e549b28862060732'
Submodule path 'lib/capellaAPI': checked out 'eaee55e75f8d02f4cb435d5d89c8062db614b894'
Submodule path 'magma_loader/DocLoader': checked out 'bd442d704d3aa88e6cbb06b7bc7dfc68dd675c66'
echo "Running gsi integration tests with 4 node cluster"
Running gsi integration tests with 4 node cluster
scripts/start_cluster_and_run_tests.sh b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini conf/simple_gsi_n1ql.conf 1 1 gsi_type=memory_optimized
Printing gsi_type=memory_optimized
gsi_type=memory_optimized
In here
-p makefile=True,gsi_type=memory_optimized
/opt/build/testrunner /opt/build/testrunner
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory ns_dataclean
Built target ns_dataclean
make[1]: Leaving directory '/opt/build/ns_server'
make[1]: Entering directory '/opt/build/ns_server'
cd build && make --no-print-directory all
[  0%] Built target event_ui_build_prepare
[  0%] Built target ns_ui_build_prepare
[  0%] Building Go Modules target ns_minify_js using Go 1.18.7
[  0%] Built target ns_minify_js
[  0%] Building Go Modules target ns_minify_css using Go 1.18.7
[  0%] Built target ns_minify_css
[ 50%] Built target query_ui_build_prepare
[ 50%] Built target fts_ui_build_prepare
[ 50%] Built target cbas_ui_build_prepare
[ 50%] Built target backup_ui_build_prepare
[ 50%] Built target ui_build
==> enacl (compile)
[ 50%] Built target enacl
[100%] Built target kv_mappings
[100%] Built target ns_cfg
==> ale (compile)
[100%] Built target ale
==> chronicle (compile)
[100%] Built target chronicle
==> ns_server (compile)
[100%] Built target ns_server
==> gen_smtp (compile)
[100%] Built target gen_smtp
==> ns_babysitter (compile)
[100%] Built target ns_babysitter
==> ns_couchdb (compile)
[100%] Built target ns_couchdb
[100%] Building Go target ns_goport using Go 1.19.2
[100%] Built target ns_goport
[100%] Building Go target ns_generate_cert using Go 1.19.2
[100%] Built target ns_generate_cert
[100%] Building Go target ns_godu using Go 1.19.2
[100%] Built target ns_godu
[100%] Building Go target ns_gosecrets using Go 1.19.2
[100%] Built target ns_gosecrets
[100%] Building Go target ns_generate_hash using Go 1.18.7
[100%] Built target ns_generate_hash
==> chronicle (escriptize)
[100%] Built target chronicle_dump
make[1]: Leaving directory '/opt/build/ns_server'
/opt/build/testrunner
INFO:__main__:Checking arguments...
INFO:__main__:Conf filename: conf/simple_gsi_n1ql.conf
INFO:__main__:Test prefix: gsi.indexscans_gsi.SecondaryIndexingScanTests
INFO:__main__:Test prefix: gsi.indexcreatedrop_gsi.SecondaryIndexingCreateDropTests
INFO:__main__:Test prefix: gsi.cluster_ops_gsi.SecondaryIndexingClusterOpsTests
INFO:__main__:TestRunner: start...
INFO:__main__:Global Test input params:
INFO:__main__:
Number of tests initially selected before GROUP filters: 11
INFO:__main__:--> Running test: gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi
INFO:__main__:Logs folder: /opt/build/testrunner/logs/testrunner-23-Jan-07_00-11-43/test_1
*** TestRunner ***
{'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi',
 'conf_file': 'conf/simple_gsi_n1ql.conf',
 'gsi_type': 'memory_optimized',
 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini',
 'makefile': 'True',
 'num_nodes': 4,
 'spec': 'simple_gsi_n1ql'}
Logs will be stored at /opt/build/testrunner/logs/testrunner-23-Jan-07_00-11-43/test_1

./testrunner -i b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini -p makefile=True,gsi_type=memory_optimized -t gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index,groups=simple:equals:no_orderby_groupby:range,dataset=default,doc-per-day=1,use_gsi_for_primary=True,use_gsi_for_secondary=True,GROUP=gsi

Test Input params:
{'groups': 'simple:equals:no_orderby_groupby:range', 'dataset': 'default', 'doc-per-day': '1', 'use_gsi_for_primary': 'True', 'use_gsi_for_secondary': 'True', 'GROUP': 'gsi', 'ini': 'b/resources/dev-4-nodes-xdcr_n1ql_gsi.ini', 'cluster_name': 'dev-4-nodes-xdcr_n1ql_gsi', 'spec': 'simple_gsi_n1ql', 'conf_file': 'conf/simple_gsi_n1ql.conf', 'makefile': 'True', 'gsi_type': 'memory_optimized', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 11, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/opt/build/testrunner/logs/testrunner-23-Jan-07_00-11-43/test_1'}
Run before suite setup for gsi.indexscans_gsi.SecondaryIndexingScanTests.test_multi_create_query_explain_drop_index
suite_setUp (gsi.indexscans_gsi.SecondaryIndexingScanTests) ... -->before_suite_name:gsi.indexscans_gsi.SecondaryIndexingScanTests.suite_setUp,suite: ]>
2023-01-07 00:11:44 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 127.0.0.1 with username:Administrator, attempt#1 of 5
2023-01-07 00:11:44 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 127.0.0.1 as Administrator
2023-01-07 00:11:44 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: linux, distribution_version: default
2023-01-07 00:11:44 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2023-01-07 00:11:47 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2023-01-07 00:11:53 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://127.0.0.1:9000/pools/default error [Errno 111] Connection refused 
2023-01-07 00:12:05 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjphc2Rhc2Q=', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:asdasd
http://127.0.0.1:9000/pools/default with status False: unknown pool
2023-01-07 00:12:05 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2023-01-07 00:12:05 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://127.0.0.1:9000/pools/default body:  headers: {'C