Can't access MongoDB cluster after primary node fails
I have an unusual problem. I have configured a 3 node replica set using MongoDB Community edition version 3.6.2, with SSL and Basic-Auth. I can connect to the cluster when the PRIMARY node is the node that I ran rs.initiate() on.
Node1 = Initial node, where I ran rs.initiate() and added the other replicas.
Node2 = Secondary
Node3 = Secondary
All nodes in the replica set have a priority of 10 and votes of 1.
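For reference, the setup was roughly along these lines (a minimal mongo shell sketch; hostnames are taken from my connection string below and the exact commands I ran may have differed slightly):

// On Node1 (mongo shell)
rs.initiate()
rs.add("node2:27017")
rs.add("node3:27017")
// then set every member to priority 10 / votes 1
var cfg = rs.conf()
cfg.members.forEach(function (m) { m.priority = 10; m.votes = 1; })
rs.reconfig(cfg)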
If I then stop Node1 while it is the PRIMARY, I can see one of the other nodes switch to PRIMARY (currently Node3 on my servers), but I cannot connect to the cluster after that, even though a primary node is available.
In fact the only connection I can make is a direct connection to the node that became the PRIMARY; the normal clustered connection string won't work at all. Once I'm connected to that node I can run rs.status() and see that both remaining nodes are available: Node3 as primary, Node2 as secondary, with Node1 unreachable.
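Roughly what rs.status() reports in that state (a trimmed, illustrative sketch, not the exact output):

{
  "members": [
    { "name": "node1:27017", "health": 0, "stateStr": "(not reachable/healthy)" },
    { "name": "node2:27017", "health": 1, "stateStr": "SECONDARY" },
    { "name": "node3:27017", "health": 1, "stateStr": "PRIMARY" }
  ]
}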
I'm just wondering if anyone has some idea about what could possibly be wrong here.
I'm using the standard connection string format for a cluster
mongodb://user:password@node1:27017,node2:27017,node3:27017/dbName?maxIdleTimeMS=60000&readPreference=primary&ssl=true
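For example, testing it from the mongo shell with the same URI (assuming SSL is enabled on the nodes):

mongo "mongodb://user:password@node1:27017,node2:27017,node3:27017/dbName?maxIdleTimeMS=60000&readPreference=primary&ssl=true"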
Figured this out eventually. There were two things in play here.
The first was to do with the fact that the cluster was registered with private DNS names. When a node in the cluster fails, the driver ignores the hosts in the connection string and attempts to reconnect using the DNS names the nodes were registered with in the replica set. Those registrations are inside a private subnet, so they are not publicly accessible and therefore could not be resolved by the external clients. When I was setting up the cluster I had put an entry in my hosts file pointing the initial node's registered name at its public IP address. That is why clients could still reconnect if one of the other nodes failed while that initial node was available and elected primary.
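For context, the hosts-file entry I had forgotten about looked something like this (the IP is a placeholder; the name is whatever Node1 was registered with in the replica set):

# client-side hosts file entry (illustrative)
203.0.113.10   node1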
The second was that I needed to put both the public and the private DNS names into the certificates I created for each node in the cluster. I added the public DNS name as a Subject Alternative Name in each certificate.
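A sketch of the relevant fragment of the openssl config I used when generating each node's certificate request (hostnames are illustrative; the point is that both the private and public DNS names appear as SANs):

[ v3_req ]
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = node1.internal.example   # private DNS name the node is registered with in the replica set
DNS.2 = node1.example.com        # public DNS name clients connect to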
Anyway, two takeaways from this for me:
MongoDB only uses the connection string for its initial connection and settings; after that, in a cluster, it ignores the connection string and uses the internal cluster registrations to access the nodes (see the snippet after these takeaways).
Make a note of all the bloody things I do (like changing my private hosts file) when doing things like this going forward. If I had remembered that, it would have made my life simpler.
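As a quick way to see which host names the driver will actually use after the initial connection, you can dump the registered hosts from the replica set config (mongo shell sketch):

rs.conf().members.map(function (m) { return m.host })
// e.g. [ "node1:27017", "node2:27017", "node3:27017" ] -- these names must be resolvable by your clients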
Cheers ...